entry_id
stringlengths
33
33
published
stringlengths
14
14
title
stringlengths
10
200
authors
list
primary_category
stringlengths
5
18
categories
list
text
stringlengths
2
817k
http://arxiv.org/abs/2306.12359v1
20230621160911
The isoperimetric problem for convex hulls and the large deviations rate functionals of random walks
[ "Vladislav Vysotsky" ]
math.PR
[ "math.PR", "math.OC", "Primary: 60F10, 49K05, secondary: 52A40, 60D05, 60G50" ]
.pdf, .jpg, .tif, .mps .eps, .jpg, .mps The isoperimetric problem for convex hulls and rate functionals]The isoperimetric problem for convex hulls and the large deviations rate functionals of random walks Vladislav Vysotsky, University of Sussex [email protected] We study the asymptotic shape of the most likely trajectories of a planar random walk that result in large deviations of the area of the convex hull of the first n steps of the walk, as n →∞. If the increments of the walk have finite Laplace transform, such a scaled limit trajectory h solves the inhomogeneous anisotropic isoperimetric problem for the convex hull, where the usual length of h is replaced by the large deviations rate functional ∫_0^1 I(h'(t)) dt with I being the rate function of the increments. Assuming that the distribution of increments is not contained in a half-plane, we show that the optimal trajectories are smooth, convex, and satisfy the Euler–Lagrange equation, which we solve explicitly for every I. Our solution resembles that of the isoperimetric problem in the Minkowski plane found by Busemann (1947). [2020]Primary: 60F10, 49K05; secondary: 52A40, 60D05, 60G50 [ Vladislav Vysotsky July 31, 2023 ====================== § INTRODUCTION Convex hulls of random walks is a natural type of random polyhedra, which have been intensively studied over the past decade. The main object of interest are either the quantitative characteristics of the convex hulls such as the intrinsic volumes (which include perimeter, volume, surface area, etc.) and the number of faces, or the so-called absorption probabilities that the convex hulls do not contain the origin. References can be found in <cit.> and <cit.>. Let (S_n)_n ≥ 1, where S_n = X_1 + … + X_n, be a random walk on the plane with independent identically distributed increments X_1, X_2, …. Denote by A_n the area of the convex hull of 0, S_1, …, S_n. In this paper we aim to identify the most likely paths of the walk resulting in atypically large values of this quantity. We always assume that X_1 has finite Laplace transform and is not supported on a line passing through the origin, that is e^u · X_1 <∞ and (u · X_1 =0)<1 for every u ∈^2. The latter assumption excludes the trivial case A_n=0. The typical behaviour of the area was described by Wade and Xu <cit.>. As a consequence of the invariance principle, they showed that A_n/n^a converge weakly to a non-degenerate limit as n →∞, where a=1 if μ := X_1 is zero and a=3/2 otherwise. As for atypically large values of A_n, Akopyan and Vysotsky <cit.> showed that the variables A_n/n^2 satisfy the large deviations principle with speed n and the certain rate function 𝒥_A. The scaling n^2 arises from the large deviations scaling S_n/n. To define 𝒥_A, we need the following notation. The cumulant generating function of X_1 is K(u):=log e^u · X_1, which is known to be convex. The rate function I of X_1 is the convex conjugate of K, that is I(v):= sup_u ∈^2 ( u · v - log e^u · X_1), v ∈^2. This is a convex function with values in [0, +∞] that satisfies I(μ)=0. Furthermore, denote by AC_0[0,1] the set of planar curves, i.e. functions h=(h_1,h_2) from [0,1] to ^2, that have absolutely continuous coordinates h_1, h_2 and satisfy h(0)=0. Denote by A(h) the area of the convex hull of the image of a curve h ∈ AC_0[0,1]. Finally, for a ≥ 0 define 𝒥_A(a):=min_h ∈ AC_0[0,1]: A(h) = a I_C(h), where I_C(h):= ∫_0^1 I(h'(t)) dt and the minimum is always attained at some h; see <cit.>. The function 𝒥_A satisfies 𝒥_A(0)=0 and is strictly increasing on its effective domain {a ≥ 0: 𝒥_A(a) <∞}. The function I_C is the rate function for large deviations of trajectories of the random walk in the space AC_0[0,1]. For brevity, we will refer to I_C as the rate functional or energy functional. The goal of this paper is to find the minimizers of 𝒥_A in (<ref>). Informally speaking, these minimizers define the asymptotic shape of the most likely trajectories of the random walk (S_n)_n ≥ 1 resulting in the large deviations events {A_n ≥ a n^2 } as n →∞; see <cit.> for a formal statement. The paper <cit.> explored the idea of finding the minimizers (and thus 𝒥_A) using isoperimetric-type inequalities for the area of the convex hull of planar curves. This turned out to be possible when the distribution of X_1 was either rotationally invariant (with μ=0) or a shifted standard Gaussian (with μ≠ 0). In this paper we use a different approach, which covers a much wider class of distributions. Our main result, Theorem <ref>, shows that in the case when X_1 is not supported on a line, the minimizers of 𝒥_A are obtained by shifting and rotation through π/2 of the bijective parametrizations g of arcs of the level sets of K that have their speed |g'| proportional to |∇ K(g)| and diametrically opposite endpoints, i.e. g(0)=-g(1). To show this, we first employ a geometric argument, based on the procedure of convexification of planar curves, to show that the minimum in (<ref>) is attained on convex curves; see Lemma <ref>. Hence by Green's formula, the constraint in (<ref>) can we written as ∫_0^1 (h_1 h_2' - h_1'h_2)dt = 2a. The question of finding 𝒥_A is now an isoperimetric-type problem of classical variational calculus. We show that if there is no half-plane containing the distribution of X_1, then the minimizers h(t) are smooth, and therefore satisfy the Euler–Lagrange equation; see Lemma <ref>. Unexpectedly, we were able to solve this equation explicitly for every I. It is often overlooked that for a general variational problem, there may be absolutely continuous minimizers which do not satisfy the Euler–Lagrange equation; see e.g. <cit.>. This equation is satisfied only under some a priori assumptions on a minimizer h, e.g. that h' is essentially bounded; see <cit.> for an illuminating discussion. This particular assumption is well-justified for a standard variational proof (by solving the Euler–Lagrange equation) of the classical isoperimetric inequality but we cannot assume this in our case since I is non-homogeneous. Thus, in general one cannot hope for finding all minimizers in AC_0[0,1] by solving the Euler–Lagrange equation. However, these difficulties disappear if one can ensure that the minimizers are regular enough, e.g. smooth (as customary assumed in many basic textbooks). This is a delicate issue, and it took us a serious effort to find right references. We therefore think that this technical part of the paper may be of special interest to the part of probabilistic community working on variational problems arising in the studies of large deviations. To place our results in a context, note that the variational problem (<ref>) resembles the classical Dido's isoperimetric problem, and also Chaplygin's problem (Akhiezer <cit.>) of finding a closed planar path of an airplane moving with constant speed in a constant wind field and encircling the maximum possible area at a unit time. These problems correspond to formally letting I(v)=|v-μ| with μ = 0 and μ≠ 0, respectively. Similarly, taking I=·_C, where ·_C(v):= inf{r>0: v ∈ r C} is the Minkowski functional (norm) of a centrally symmetric compact convex set C ⊂^2 such that 0 ∈ C, corresponds to the isoperimetric problem for the Minkowski metric, whose centred unit ball is exactly C. This problem was solved by Busemann <cit.>, who showed that the closed simple curves of a fixed Minkowski length ·_C that enclose maximal (Euclidean) area parametrize homothetic dilations of ∂ C^∘ rotated through ±π/2, where C^∘:= ∩_u ∈ C{v ∈^2: u · v ≤ 1 } is the polar set of C. To see that this matches our Theorem <ref>, note that the convex conjugate ·_C^* of ·_C is the convex-analytic characteristic function of C^∘, defined to be 0 on C^∘ and +∞ elsewhere. Then ∂ C^∘ is the boundary of the sub-level sets ·_C^*((-∞, α]) for any α≥ 0, which in our case corresponds to the level sets K^-1(α). For further isoperimetric inequalities in non-Euclidean metrics, we refer (without aiming to be comprehensive) to the surveys <cit.> and to quantitative-type results of <cit.>. We stress that all the above references concern only closed curves, unlike in our case (where, moreover, the “perimeter” functional is inhomogeneous). For convex hulls of curves, the Dido problem was solved by Moran <cit.>, and further inequalities are available in <cit.>. § THE MAIN RESULTS In order to state our result, we shall introduce more notation in the case when (X_1) is two-dimensional, which is of the main interest. Here we will additionally assume that 0 ∈(((X_1))); equivalently, there is no non-zero u ∈^2 such that u · X_1 ≥ 0 a.s. This implies that the cumulant generating function K of X_1 is unbounded in every direction, hence the convex function K is bounded from below and has bounded sub-level sets (see Rockafellar <cit.>). Put α_0:=min_u ∈^2 K(u). For any compact set C ⊂^2, denote by A(C) the area of the convex hull of C; then A(h)=A(h([0,1])) for h ∈ AC_0[0,1]. Denote by u^ a vector u ∈^2 rotated through π/2 counterclockwise about the origin. For any α≥α_0 and ℓ∈, denote by E(α):=A ( K^-1(α)), E^±(α, ℓ): = A ( K^-1(α) ∩{ u ∈^2: ± u ·ℓ^≥ 0 }) the area of the sub-level set K^-1((-∞, α]) and its “halves”. Clearly, E^±(α, ℓ) = 1/2 E(α) for every ℓ∈ if the distribution of X_1 is centrally symmetric. Furthermore, for α > α_0 put λ_α, ℓ^±:=∫_K^-1(α) ∩{ u ∈^2: ± u ·ℓ^≥ 0 }1/|∇ K(s)|σ(ds), where d σ stands for the one-dimensional Hausdorff measure on K^-1(α), and put λ_α, ℓ^±:=0 when α≤α_0. By the coarea formula (<cit.>), we have E^±(α, ℓ) =∫_-∞^αλ_β, ℓ^± d β. For every τ∈{+, - } and every α > α_0 and ℓ∈ such that the line ℓ intersects K^-1(α) at two points, denote by g_α, ℓ^τ the bijective C^1-smooth parametrization of the arc K^-1(α) ∩{ u ∈^2: τ u ·ℓ^≥ 0 } by [0,1] that has orientation τ and satisfies ∫_g_α, ℓ^τ([0,t])1/|∇ K(s)|σ(ds) = t λ_α, ℓ^τ, t ∈ [0,1]. Existence of such parametrization follows from the implicit function theorem, which applies since the gradient of K is non-zero on K^-1(α) and K is (infinitely) smooth on ^2. Note that by the change of variables formula, (<ref>) is equivalent to | (g_α, ℓ^τ)'(t)| = λ_α, ℓ^τ |∇ K(g_α, ℓ^τ(t))|, t ∈ [0,1]. We also have {g_α, ℓ^τ(0), g_α, ℓ^τ(1) }= K^-1(α) ∩ℓ. We are now ready to state the main result of the paper. Suppose that e^u · X_1< ∞ for any u∈^2. 1. Assume that ( (X_1)) = ^2. Then for any a>0, every minimizer h in (<ref>) is a C^∞-smooth curve of the form h(t)=-1/τλ_α, ℓ^τ(g_α, ℓ^τ(t) - g_α, ℓ^τ(0))^, t ∈ [0,1], where α>0, τ∈{+,-}, ℓ∈ are such that ∂/∂α√(E^τ(α, ℓ))= 1/2 √(a) and the two-points set K^-1(α) ∩ℓ is symmetric about the origin. Moreover, if the distribution of X_1 is centrally symmetric, i.e. X_1 d= -X_1, then α is the unique positive solution to (√( E(α)))'=1/√(2a) and every curve of the form (<ref>) with arbitrary τ and ℓ is a minimizer. 2. Assume that ( (X_1)) = {μ_1}×, where μ_1 ∈ is non-zero. Then for any a >0, the minimizers in (<ref>) are the two curves h_±(t)=(μ_1 t, 1/± 2u_a (K(0, ± u_a(2t-1)) - K(0,∓ u_a) )), t ∈ [0,1], where u_a is the unique positive solution to 4a=μ_1 E'(u_a) with E(u):=∫_-1^1 K(0,us)ds. Let us comment. * As usual with necessary conditions, in Part 1 the minimizers are of the form (<ref>), but some curves of this form may not be minizers. Therefore, the problem of finding 𝒥_A reduces to minimizing the energy functional ∫_0^1 I(h'(t)) dt over the curves of the form (<ref>). * The explicit examples for non-symmetrically distributed increments are available in <cit.> for shifted Gaussian X_1. In this case, the optimal curves parametrise elliptic arcs if X_1 is non-degenerate and parabolic arcs if the support of X_1 is one-dimensional. * For large deviations of random vectors, it is customary to work under the Cramér moment assumption that the Laplace transform of the increments of the random walk is finite merely in a neighbourhood of zero. By Proposition 4.1 in <cit.>, in this case the problem of finding the optimal trajectories of random walks reduces to solving an isoperimetric problem in the class of (possibly discontinuous) curves of bounded variation with a rate functional that is more complicated than I_C. This problem is solved in <cit.> for rotationally invariant distributions of the increments but the Euler–Largange approach of the current paper can no longer be applied. * The technical assumption ( (X_1)) = ^2, which means that there is no half-plane containing (X_1), is imposed to ensure that I is finite on ^2. This is needed to apply the results of variational calculus used in the proof. In some cases it is possible to remove this assumption using an approximation argument. We illustrate this by the following result, were we assume for simplicity that X_1 d= -X_1. It still allows us to find 𝒥_A although we no longer claim that all minimizers are as in (<ref>). Suppose that e^u · X_1< ∞ for any u∈^2, and that the distribution of X_1 is centrally symmetric and is not supported on a line. Put a_max:=( lim_β→∞ (√(2E(β)) )' )^-2. Then for any a ∈ (0, a_max), there is a minimizer h in (<ref>) that satisfies (<ref>). The proof of Theorem <ref> is based on two auxiliary Lemmas <ref> and <ref>, presented in the following sections. The first lemma uses a geometric argument, which allows us to convexify a minimizer in (<ref>) transforming it into a convex curve while preserving its optimality. The necessary geometric results are given in the Appendix. The second lemma uses results of variational calculus to show that a convex optimal trajectory satisfies the Euler–Lagrange equation, which implies, combined with Lemma <ref>, that all minimizers are convex. We will prove Theorem <ref> in the last section essentially by solving the Euler–Lagrange equation. § THE CONVEXIFICATION LEMMA Let us introduce additional notation. Recall that a planar curve h on [0,1] is convex if its image belongs to the boundary of a convex set and h(t_1)=h(t_2) for t_1 ≤ t_2 is possible only when t_1=t_2 or t_1=0, t_2=1. The boundary of the convex hull of a convex curve consists of the image of the curve and the line segment joining the end points. We say that a curve h ∈ AC_0[0,1] is a stopping convex curve if a) h belongs to the boundary of a convex set, and b) for any 0 ≤ t_1< t_2 ≤ 1 such that {t_1, t_2}≠{0,1}, the equality h(t_1)=h(t_2) implies that h=h(t_1) on [t_1, t_2]. Note that any stopping convex curve that is constant on no interval is injective and thus convex. For any h =(x, y) ∈ AC_0[0,1], consider the signed area Ã(h):= 1/2∫_0^1 (x y' - x'y)dt = 1/2∫_0^1 (h^· h') dt of the closed curve obtained by extending h from [0,1] to, say, [0,2] with any smooth parametrization of the line segment from h(1) to h(0) such that h(2)=h(0)=0. Since for such a parametrization the function x y' - x'y vanishes on [1,2], by Green's theorem (extended from piecewise C^1-smooth simple closed curves), we have A(h) = |Ã(h)|, h ∈ AC_0^c[0,1]. For a closed curve h ∈ AC^c[0,1], the sign of Ã(h) equals that of the orientation of h. Assume that K(u) is finite in a neighborhood of 0. Then for any h ∈ AC_0[0,1] satisfying A(h)>0 and I_C(h)<∞, there exists a convex curve g ∈ AC_0^c [0,1] such that A(h) ≤ A(g), |Ã(h)| ≤ A(g), and I_C(g) ≤ I_C(h). Moreover, if h has the property of being affine and non-constant on some non-empty subinterval of [0,1], then g can be chosen to satisfy the same property. Inequalities for the areas similar to the ones in Lemma <ref> are well-known; for example, one of them is used in a classical proof of the isoperimetric inequality on the plane. The main point here is to combine these inequalities with the energy estimate I_C(h^c) ≤ I_C(h) using the geometric arguments by Böröczky et al. <cit.>, Fáry and Makai <cit.>, and Pach <cit.>. We will use that the effective domain 𝒟_I:={u∈^2: I(u)<∞} of I satisfies (<cit.>) (( (X_1))) ⊂𝒟_I⊂(( (X_1))), where (X_1) is the topological support of the distribution of X_1 and stands for the relative interior (in the induced topology on the affine hull of (X_1)). If e^u · X_1 is finite for all u∈^2, then K is infinitely differentiable on ^2 and I is strictly convex on its effective domain 𝒟_I; see Vysotsky <cit.>. Consider the sequence of increasing partitions 𝒟_k:= {[i/2^k, (i+1)/2^k )}_i=0^2^k-1 of [0,1) into dyadic intervals, where k ∈. Define the piecewise linear functions (h_k)_k ≥ 1 by putting h_k(0):=0, h_k'(t) := 2^ k ∑_i =0^2^k-1[h ( i+1/2^k) - h ( i/2^k) ] _[i/2^k, (i+1)/2^k )(t), t ∈ [0,1), and h_k(1):=h(1), so that h_k is left-continuous at 1. In other words, h_k is the continuous function on [0,1] defined by linear interpolation between its values at the points i/2^k, where it coincides with h. Since h' ∈ L^1[0,1], the sequence h_k' is a regular martingale with respect to the filtration {σ(𝒟_k)}_k ≥ 1 on [0,1) equipped with the Lebesgue measure on the Borel σ-algebra, which coincides with σ(∪_k σ(𝒟_k)). That is, h_k'=E(h'| σ(𝒟_k)), where E is expectation over the above probability measure on [0,1). By Lévi's theorem, we have h_k' → h' a.e. and in L^1 as k →∞ and, moreover, the sequence h_k' is uniformly integrable. Then h_k → h uniformly on [0,1]. Since I is convex and I_C(h)=I(h')_L^1 <∞ by assumption of the lemma, by Jensen's inequality for conditional expectation, we have I(h_k') ≤ E(I(h')|σ(𝒟_k)) a.e. for every k ≥ 1. By taking expectation, this gives I_C(h_k)=E I(h_k') ≤ E[E(I(h')|σ(𝒟_k))] = I_C(h). On the other hand, by the fact that h_k' → h' a.e. as k →∞, lower semicontinuity of I, and Fatou's lemma, we have lim inf_k E I(h_k') ≤ E lim inf_k I(h_k') = E I(h), hence lim inf_k I_C(h_k) ≥ I_C(h), and thus I_C(h_k) → I_C(h). Moreover, since the convergence h_k → h is uniform on [0,1], we have P(h_k) → P(h), A(h_k) → A(h). Finally, Ã(h_k) →Ã(h) by h_k' → h' in L^1. Let (h_k^c)_ k ≥ 1 be the piecewise linear functions that satisfy h_k^c(0):=0 and (h_k^c)'(t) := 2^ k ∑_i =0^2^k-1[h ( σ_k(i)+1/2^k) - h ( σ_k(i)/2^k) ] _[i/2^k, (i+1)/2^k )(t), t ∈ [0,1], where σ_k is the permutation of the set {0, 1, …, 2^k-1} that arranges the vectors 2^k[h((i+1)/2^k) - h ( i/2^k)], which are values of the step function h_k'(t), in the following lexicographic order. First, order by increase of the angle (measured in the clockwise direction, with the convention that the angle of the zero vector is 0) between the vector and h(1) if h(1) ≠ 0 and any fixed direction, say, (1,0), if h(1)=0. Second, order by increase of the norm, and third, order by increase of the index i. Define h_k^c(1) by continuity, which gives h_k^c(1):=h(1). By the construction, each h_k^c is stopping convex curve and its image is a polygonal line, which is a convexification of the polygonal line (h_k). Then Lemma <ref> of the Appendix and Proposition <ref> apply, and we have A(h_k^c) ≥ A(h_k) and |Ã(h_k^c)| ≥ |Ã(h_k)|. On the other hand, I_C(h_k^c) = I_C(h_k) by the construction of (h_k^c)' from h_k', hence I_C(h_k^c) → I_C(h) as k →∞. By the same reasoning, the sequence (h_k^c)' is uniformly integrable since the sequence h_k' is so. By a sequential weak compactness criterion in L^1[0,1] (see the book by Buttazzo et al. <cit.>), there exist a subsequence k_i and an element of AC_0[0,1], which we denote by h^c and call a convexification of h, such that (h_k_i^c)' → (h^c)' weakly in L^1. Hence h_k_i^c → h^c pointwise on [0,1]. Moreover, w.l.o.g. we can assume that this convergence is uniform: the sequence h_k^c is equiabsolutely continuous since it is uniformly integrable (<cit.>), hence the sequence h_k^c is equicontinuous and the Arzelá–Ascoli theorem applies. Then P(h_k_i^c) → P(h^c) and A(h_k_i^c) → A(h^c) since h_k_i^c → h^c uniformly on [0,1], and we also have Ã(h_k_i^c) →Ã(h^c) by |Ã(h_k_i^c) - Ã(h^c)| ≤ |h(1)| ·h_k_i^c - h^c_∞ + 1/2| ∫_0^1 (h^c)^· (h_k_i^c - h^c)' dt |, where the integral term vanishes since (h_k_i^c)' → (h^c)' weakly in L^1. This gives A(h) ≤ A(h^c) and |Ã(h)| ≤ |Ã(h^c)| = A(h^c). Also, convexity and lower semi-continuity of I implies (<cit.>) that the functional defined by J(u):=∫_0^1 I(u) dt for u ∈ L^1 is sequentially lower semi-continuous in the weak topology on L^1. Hence I_C(h^c) ≤lim inf_i →∞ I_C(h_k_i^c) = lim inf_i →∞ I_C(h_k_i) = I_C(h). Thus, h^c satisfies the inequalities for P, A, and à required for g. Consider the curve h^c. We claim that (h^c) is contained in the boundary of its convex hull, which is genuinely two-dimensional since 0 < A(h) ≤ A(h^c). In fact, if our claim does not hold, then h^c(t_0) belongs to the interior of ((h^c)) for some t_0 ∈ [0,1]. By Carathéodory's theorem, there exist distinct t_1, t_2, t_3 ∈ [0,1] such that h^c(t_0) belongs to the interior of the triangle with vertices h^c(t_1), h^c(t_2), h^c(t_3). Then h_k_i^c(t_0) belongs to the interior of the triangle with vertices h_k_i^c(t_1), h_k_i^c(t_2), h_k_i^c(t_3) for all integer i large enough, contradicting convexity of h_k_i^c. Next we show that h^c moves along the boundary monotonously. Since ((h^c)) is convex and two-dimensional, we can pick a non-zero point O from the interior of this set. Then O is an interior point of ((h^c_k_i)) for all integer i that are large enough. Consider the function on [0,1], denoted by θ, defined as the angle between -O and h^c(t)-O measured in the clockwise direction, so that θ(0)=0. Similarly, define the angles θ_i(t) between -O and h^c_k_i(t)-O. Since each h^c_k_i is a stopping convex curve, each θ_i is non-decreasing on [0,1]. Then so is θ, hence h^c is a stopping convex curve, which may cease to be convex only because of being constant on some subintervals of [0,1]. Let us remove them as follows. Consider the set A:={t ∈ (0,1): h^c(t) ≡ const on (t-ε, t+ε) for some ε∈ (0, min(t, 1-t))}, which is either empty or consists of at most countable number of disjoint non-adjacent open intervals. Define the function L(t):=∫_0^t _A^c(s) ds on [0,1], which satisfies L(1)>0. Its left-continuous inverse L^-1(t):=inf{s≥ 0: L(s) ≥ t} is strictly increasing and satisfies (L^-1)' = 1 a.e. The function h^c ∘ L^-1 on [0, L(1)] has the following properties: its image coincides with that of h^c on [0,1]; it is convex since it is constant on no interval and thus injective up to possible equality of its endpoint values; and it is absolutely continuous. The latter property follows from h^c(L^-1(t))= ∫_0^L^-1(t) (h^c)'ds = ∫_A^c ∩ [0, L^-1(t)] (h^c)'(s) ds =∫_[0, t] (h^c)'(L^-1(u)) du = ∫_0^t (h^c ∘ L^-1)' du, for t ∈ [0, L(1)], where the third equality holds by the change of variables formula applied for the mapping L^-1: [0, L(1)] → A^c, which pushes forward the Lebesgue measure on [0,L(1)] to the Lebesgue measure restricted to the subset A^c of [0,1]. A similar computation gives I_C(h^c) = ∫_0^L(1)I (h^c(L^-1(u))' ) du + (1- L(1)) I(0). It remains to extend the function h^c ∘ L^-1 from [0, L(1)] to [0,1] preserving convexity and absolute continuity, which we will do differently in two cases. Case 1: μ≠ 0. Consider the absolutely continuous function on [0,1] defined by g_1 := h^c ∘ L^-1 on [0, L(1)] and g_1(t) := h(1) + (t-L(1)) μ on [L(1), 1], where, recall, h^c ∘ L^-1(1) = h(1) (see Figure <ref>.a). Then (h^c) = (h^c ∘ L^-1) ⊂(g_1), hence A(h^c) ≤ A(g_1), and we also have I_C(g_1) ≤ I_C(h^c) by (<ref>), where the inequality is strict unless L(1)=1. Since g_1 may be non-convex, we convexify it in two steps as follows. First pick a point t_0 ∈ [0,L(1)] such that h^c(t_0) belongs to one of the two support lines to ((h^c)) that are parallel to μ and chosen such that the curve g_2(t):={[ h^c(L^-1(t)), t ∈ [0, t_0]; h^c(L^-1(t_0))+(t-t_0)μ, t∈[t_0, t_0+1-L(1)]; h^c(L^-1(t-1+L(1)))+(1-L(1))μ, t∈[t_0+1-L(1), 1] ]. is convex on [0, t_0+1-L(1)]. The image of this curve is obtained by cutting that of h^c∘ L^-1 at time t_0 and inserting the segment (1-L(1))μ (see Figure <ref>.b). For example, if the curve h^c(t_0) is smooth, then either t_0 ∈{0,L(1)} or t_0 is a point where (h^c)'(t_0) is directed along μ. 5cm < g r a p h i c s > a) g_1 0.3cm 5cm < g r a p h i c s > b) g_2 0.3cm 5cm < g r a p h i c s > c) g Convexification of g_1. The values shown represent time. The second step is similar to the previous one, for - g_2(1) instead of μ. Find a point t_1 ∈ [0,1] such that the support line to ((g_2)) at g_2(t_1) is parallel to g_2(1) and g_2(t_1) is directed along -g_2(1). Cut the tail of the curve g_2 after t_1 and shift it to the beginning (see Figure <ref>.c). This gives a new curve, g, which is convex and absolutely continuous. We can say that g is a convexification of g_1 (and of g_2). From Proposition <ref> it follows by a discretization argument that A(g_1) ≤ A(g), hence A(h^c) ≤ A(g) by A(h^c) ≤ A(g_1). We also have |Ã(h^c)| ≤ |Ã(g)| since both h^c and g are convex. Lastly, we have I_C(g) = I_C(g_1) ≤ I_C(h^c). Combining these inequalities with the ones obtained earlier for h^c, we see that g is as required. Case 2: μ =0. Put g(t):=h^c ∘ L^-1(L(1)t) for t ∈ [0,1]. By (<ref>) and convexity of I, I_C(h^c) = L(1) ∫_0^1 I ((h^c ∘ L^-1(L(1)t))' /L(1) ) dt ≥ I_C(g), where the inequality is strict unless L(1)=1. By (g)=(h^c), we have A(g)=A(h^c), and also |Ã(g)|=|Ã(h^c)| since g and h^c are convex. Therefore, g is as required. It remains to prove the last assertion of the lemma. If h' ≡ u a.e. on a non-empty open interval J ⊂ [0,1] and a non-zero u ∈^2, then every approximating step function h_k' a.e. equals u on the interval J_k that is the union of all dyadic intervals of length 2^-k contained in J. Then h_k^c ≡ u on a certain translation J_k^c of the interval J_k. We could have chosen the subsequence k_i such that the centres of the intervals J_k_i^c converge as i →∞. Let J^c be the open interval of the same length as J centred at the limit of the centres. We have (h^c)' ≡ u a.e. on J^c. If μ =0, then g' ≡ L(1) u a.e. on J^c/L(1), otherwise g'≡ u on a shift of J^c, as required. § THE EULER–LAGRANGE LEMMA We now state the second auxiliary result needed to prove Theorem <ref>. Suppose that e^u · X_1< ∞ for any u∈^2. 1. Assume that ( (X_1)) = ^2. Then for any a>0, any minimizer h in (<ref>) is a C^∞-smooth convex curve satisfying the Euler–Lagrange equation λ h^(t) = ∇ I(h'(t)) - ∇ I(h'(0)) , t ∈ [0, 1] ∇ I(h'(1)) = -∇ I(h'(0)), for some non-zero real λ. 2. Assume that ( (X_1)) = {μ_1}×, where μ_1 ∈ is non-zero. Then for any a>0, any minimizer h in (<ref>) is a C^∞-smooth convex curve of the form h(t)=(μ_1 t, h_2(t)) with h_2 satisfying the Euler–Lagrange equation λμ_1 t = ∂ I/∂ y (h'(t)) - ∂ I/∂ y (h'(0)), t ∈ [0, 1] ∂ I/∂ y (h'(1)) = -∂ I/∂ y (h'(0)). for some non-zero real λ. It follows from (<ref>) and the assumptions of the lemma that the effective domain of the rate function I is ^2 in Part 1 and {μ_1}× in Part 2. Then from (<ref>) we see that 𝒥_A is finite on [0, ∞). For any a>0, we have 𝒥_A(a)=min_h ∈ AC_0[0,1]: A(h) ≥ a I_C(h) = min_h ∈ AC_0^c[0,1]: |Ã(h)| ≥ a I_C(h) = min_h ∈ AC_0[0,1]: |Ã(h)| = a I_C(h), where the first equality follows from (<ref>) and monotonicity of 𝒥_A, the second one is by (<ref>) and Lemma <ref>, and the last one is by Lemma <ref> and strict monotonicity of 𝒥_A. Denote by H_A(a) the set of minimizers in (<ref>). It is the set of the minimizers of the first minimum in (<ref>). Similarly, denote by H̃_A(a) the set of minimizers of the last minimum in (<ref>). By (<ref>) and Lemma <ref>, we have H_A(a) ∩ AC_0^c[0,1] = H̃_A(a) ∩ AC_0^c[0,1] ≠∅, a>0. We split the rest of the proof into two steps. Step 1. Every h ∈H̃_A(a) is smooth and solves the Euler–Lagrange equation. Finding the last minimum in (<ref>) and the sets of minimizers H̃_A(a) is a standard variational problem of minimizing the variational integral I_C under an isoperimetric constraint and a fixed starting point. We already know that there is a minimizer (at the end, this is due to the convexification Lemma <ref> and the fact that I_C is a tight rate function on C[0,1]); of course a priori this is not guaranteed. Consider two cases. Case 1. ( (X_1)) = ^2. Define the Lagrangian L_λ(q, p) := I(p) + 1/2λ (q_1 p_2 - q_2 p_1) , p, q ∈^2, λ∈. The Lagrangian L_0(q,p)=L_0(p)=I(p) corresponds to the rate function I_C. Let us explain that the assumption ((X_1))=^2 implies that the Lagrangian L_λ is C^∞-smooth and strictly convex on ^2. In fact, first, the assumption gives that I is finite on ^2 by (<ref>). And, second, X_1 is not supported on a straight line. This in turn yields, by a standard application of Hölder's inequality, that K is strictly convex on ^2. Also, K is differentiable on ^2. Therefore, by Theorem 26.5 in <cit.>, ∇ K: ^2 →^2 is a bijection (whose inverse is ∇ I). Therefore we have I(v)=u(v) · v - K(u(v)) for every v ∈^2, where u(v) is the unique solution to the equation ∇ K(u)=v. Hence I is C^∞-smooth because so is K. Let us explain why any minimizer h(t)=(x(t),y(t)) ∈ AC_0[0,1] of I_C under either of the constraints Ã(h) = ± a must satisfy the Euler–Lagrange equations ∂ L_λ/∂ q_i (h(t), h'(t)) = d/dt∂ L_λ/∂ p_i (h(t), h'(t)), a.e. t ∈ [0,1], i∈{1,2} together with the boundary conditions ∂ L_λ/∂ p_i (h(1), h'(1)) =0, i ∈{1,2}, which appear since the end point h(1) is not fixed. Our reference is Theorem 2.2.i in the book by Cesari <cit.>, on which we comment in detail since for a non-specialist, the matter appears to be complicated. Part (a) of the theorem states that the Euler–Lagrange equations hold for a.e. t ∈ [0,1] under the additional assumption that h' is essentially bounded, Part (i) covers the case of variable endpoints (needed since h(1) is not fixed), and Remark 2 after the theorem on p. 34 allows to reduce our problem with the integral constraint Ã(h) = ± a to the unconstrained problem by changing the Lagrangian L_0 to L_λ after introducing the Lagrange multiplier λ. The theorem assumes that the Lagrangian L_λ is C^1-smooth on its domain ^2 ×^2, which is true as explained above. Since the derivative of a minimizer a priori must not be essentially bounded (and, in general, may fail to be so), we need to check the additional conditions (S_i) of Section 2.7 in <cit.> to cover the case of unbounded h'. Namely, that there exists a continuous function R=R_h defined on ^2 such that R(h') is integrable and for some δ >0, the inequalities |∂ L_λ/∂ q_1 ((x_1(t),y(t)), p) | ≤ R(p) and |∂ L_λ/∂ q_2 ((x(t),y_1(t)), p) | ≤ R(p) hold for every p ∈^2, t ∈ [0,1], and functions x_1 and y_1 on [0,1] satisfying |x_1(t)-x(t)|≤δ and |y_1(t)-y(t)|≤δ for t ∈ [0,1]. We can take R(p)=1/2λ |p| since h ∈ AC[0,1]. This finishes justification of (<ref>) and (<ref>). Next, we apply a Tonelli-type regularity theorem to infer that any solution to the Euler–Lagrange equations (<ref>) is C^1-smooth, ensuring that h' exists on [0,1] and (<ref>) holds for every t ∈ [0,1] by continuity of ∇ I. We apply Theorems 2.6.i and 2.6.ii in <cit.>. There are two conditions to check, namely, that the mapping p ↦∇_p L_λ(h(t), p) on ^2 is injective for every t ∈ [0,1], and that |∇_p L_λ(h(t), p)| →∞ as |p| →∞ uniformly in t ∈ [0,1]. We use that ∇_p L_λ(h(t), p) = ∇ I(p) +1/2λ h^(t). Then the first condition is satisfied since the rate function I is strictly convex. To check the second condition, we use that ∇ I: ^2 →^2 is a bijection and ∇ K is its inverse (as we explained above, this follows from the facts that I is finite on ^2 and K is strictly convex and differentiable on ^2 using Theorem 26.5 in <cit.>). Then the sequence ∇ I(v_n) cannot be bounded when |v_n| →∞ as n →∞ since ∇ K ( ∇ I(v_n))=v_n but the continuous function ∇ K is bounded on compact subsets of ^2. Further, we apply Weierstrass's theorem <cit.>, which ensures that any solution to the Euler–Lagrange equations (<ref>) is C^∞-smooth since the Lagrangian L_λ is C^∞-smooth and strictly convex. With the above, we can write (<ref>) and (<ref>) as 1/2λ y'(t) = d/dt( ∂ I/∂ p_1 (h'(t))-λ/2 y(t)), -1/2λ x'(t) = d/dt( ∂ I/∂ p_2 (h'(t))+λ/2 x(t)), t ∈ [0,1], and ∂ I/∂ p_1 (h'(1))=λ/2 y(1), ∂ I/∂ p_2 (h'(1)) = -λ/2 x(1). Since h(0)=0 and h' is integrable, this simplifies to -λ h^(t) = ∇ I(h'(t)) - ∇ I(h'(0)), t ∈ [0,1], ∇ I(h'(1)) = -λ/2 h(1)^. We also see that ∇ I(h'(0)) = -∇ I(h'(1)) by comparing the first equality at t=1 with the second one. It follows that λ≠ 0, since otherwise ∇ I(h'(t)) = ∇ I(h'(0)) for every t ∈ [0,1], implying that h' is constant on [0,1], which contradicts the condition |Ã(h)| =a >0. It remains to substitute -λ for λ to obtain (<ref>) and hence finish the proof of Step 1 in Case 1. Case 2. ( (X_1)) =(μ_1, ) with μ_1 ≠0 (where μ=(μ_1, μ_2)). Simply repeat the above argument with the time-dependent Lagrangian L_λ(t, q, p) := Ĩ(p) + 1/2λ (μ_1 t p - μ_1 q ) , p, q, λ∈, where Ĩ(p):=I(μ_1, p). We omit the details, which result in the same conclusion as in Case 1. The additional condition (S_0) of Section 2.7 in <cit.> mentioned above is satisfied with R(p)=1/2λμ_1 |p|. Step 2. Every h ∈ H_A(a) is convex. We argue by contradiction. Assume that there exists a t_0 ∈ (0,1) such that h(t_0) ∈ (( h)). Define the nearest to t_0 times of exit from and entrance to the boundary of the convex hull: t_1:=max{t <t_0: h(t) ∈∂ (( h)) }, t_2:=min{t >t_0: h(t) ∈∂ (( h) ) }. We have t_0 ∈ (t_1, t_2) by continuity of h. We claim that h is affine and non-constant on the interval [t_1, t_2]. Otherwise, by strict convexity of I, the energy I_C(h) of h will decrease if we make h affine on [t_1, t_2]. Since this does not change the boundary of the convex hull of h and, consequently, the area A(h), we arrive at contradiction with the choice of h. By Lemma <ref>, there is a convex h^c ∈ H_A(a) such that (h^c)' is a non-zero constant a.e. on an interval. Then h^c ∈H̃_A(a) by (<ref>), hence h^c satisfies the Euler–Lagrange equations as shown in Step 1. In Case 1 this implies that λ h^c is constant on the interval, which is a contradiction since λ≠ 0 and h^c is convex and hence injective. In Case 2 this implies that λ x_0 t is constant on the interval, which again is a contradiction by λ≠ 0. Thus, the image of h is a subset of ∂ (( h)), and in order to obtain convexity of h it remains to prove its injectivity, except for the possible equality h(0)=h(1). Assuming that h is not injective, we find 0<t_1<t_2≤ 1 such that {h(s): t_1≤ s≤ t_2 }⊂{h(s): 0≤ s≤ t_1 }. By the same argument as above (where t_1 and t_2 were found for a given t_0), based on the observation that the convex hull of {h(s): 0≤ s≤ t_2 } will not change if we make h to be affine on [t_1, t_2], we arrive at a contradiction unless h is constant on this interval. In such a case h is a stopping convex curve, and it can be optimized as in the proof of Lemma <ref>, which is a contradiction with strict monotonicity of 𝒥_A. This completes the proof of Step 2. Note in passing that an application of the same argument combined with equality (<ref>) (which follows directly from (<ref>)) yields convexity of every h ∈H̃_A(a). Putting together the claims of Steps 1 and 2 and equality (<ref>) finishes the proof. § PROOFS OF THE MAIN RESULTS Case 1. Assume that ( (X_1)) = ^2. Then I is finite on ^2 by (<ref>). Since X_1 is not supported on a straight line, it follows by a standard application of Hölder's inequality that K is strictly convex on ^2. Also, it is differentiable on ^2. These properties of K and of its Legendre–Fenchel transform I ensure, by Theorem 26.5 in <cit.>, that ∇ K: ^2 →^2 is a bijection whose inverse is ∇ I. Then it follows from the Euler–Lagrange equations (<ref>) that ∇ K(λ h^(t) + ∇ I(h'(0))) =h'(t), t ∈ [0,1]. Take the scalar product of both sides of this equality with (h'(t))^ and integrate in t to conclude that K(λ h^(·) + ∇ I(h'(0))) = α on [0,1] for some constant α. Denote g:=λ h^(·) + ∇ I(h'(0)). From equalities h(0)=0 and (<ref>), we see that the curve g, which belongs to the level set K^-1(α), starts at ∇ I(h'(0)) and stops at - ∇ I(h'(0)). The set K^-1(α) can contain two points of opposite sign only when α≥ 0. To see that α=0 is actually impossible, note that in this case it must be that ∇ I(h'(0)) =0, i.e. h'(0)=μ. If μ=0, then K^-1(0)={0} and hence h≡ 0 by (<ref>), contradicting to A(h)=a>0. If μ≠ 0, then h is cannot be optimal. In fact, consider the curves h_ε given by h_ε(t):=t μ for t ∈ [0,ε] and h_ε(t):= εμ + h(t-ε) for t ∈ [ε,1], where ε∈ (0,1). Clearly, it holds I_C(h_ε) < I_C(h). Also, it is not hard to show that ( h) + εμ⊂( h_ε) for all ε>0 small enough. This follows from the observation that the tangent lines to h at points 0 and h(1-ε) intersect at the point -εμ/2 + o(ε^2) as ε→ 0, which can be checked by approximating K^-1(α) by the osculating circle at 0. Hence A(h) ≤ A( h_ε), contradicting to strict monotonicity of the rate function 𝒥_A (defined in (<ref>)). Thus, α >0 and ∇ I(h'(0)) is non-zero, hence g(0) is the point of intersection of K^-1(α) and ℓ for ℓ= ∇ I(h'(0))/|∇ I(h'(0))|. By Lemma <ref>, an optimal trajectory h is a convex curve, hence the curve g is also convex and, in particular, bijective. This curve parametrizes either of the arcs K^-1(α) ∩{ u ∈^2: ± u ·ℓ^≥ 0 } and by (<ref>), satisfies equality ∇ K(g(t))= -λ^-1 g'(t)^ for every t ∈ [0,1]. This equality, together with the orientation, defines uniquely the parametrization g and the constant λ. We have λ_α, ℓ^±∇ K(g_α, ℓ^±(t))= ∓(g_α, ℓ^±)'(t)^ for t ∈ [0,1] since first, the vectors in this equality have the same length by equivalent definition (<ref>) of g_α, ℓ^± and λ_α, ℓ^± and second, the vector ∇ K(g_α, ℓ^±(t)), normal to the smooth curve K^-1(α) at point g_α, ℓ^±(t), is co-directional with ∓ (g_α, ℓ^±)'(t)^. Therefore g=g_α, ℓ^τ and λ= τλ_α, ℓ^τ, with τ being the orientation of g. This is equivalent to (<ref>). Finally, since A(h)=a and A(h)=λ^-2 A(g), by definition (<ref>) we have a= (λ_α, ℓ^τ)^-2 E^τ(α, ℓ). By (<ref>), this is equivalent to 1/2 √(a) = λ_α, ℓ^τ/2√(E^τ(α, ℓ)) = ∂/∂α√(E^τ(α, ℓ)), as required. Lastly, assume that the distribution of X_1 is centrally symmetric. The last expression above then equals (√(E(α)/2))'. Note that for any α, β≥ 0 and t ∈ (0,1), we have E(t α + (1-t) β)= A (K^-1 (t α + (1-t) β )) > A (t K^-1(α) + (1-t) K^-1(β) ) by strict convexity of K, because if K(u_1) ≤α and K(u_2) ≤β for some u_1 ≠ u_2, then K(t u_1 + (1-t) u_2) < t K(u_1) + (1-t) K(u_2) ≤ t α + (1-t) β. On the other hand, by the Brunn–Minkowski inequality, √(A (t K^-1(α) + (1-t) K^-1(β) ))≥√( A(t K^-1(α))) + √( A ((1-t) K^-1(β) )), therefore the function √(E) is strictly concave. Hence the equation (√( E(α)))'=1/√(2a) admits a unique solution for any fixed 0<a<( lim_β→∞ (√(2E (β) ))')^-2. However, the latter limit must be infinite, because an optimal curve exists for every a>0 (indeed, since I is finite on ^2, for every a there is a curve h such that A(h)=a and I_C(h)<∞). Case 2. Assume first that ( (X_1)) =(μ_1, ) with μ_1 ≠ 0, where μ=(μ_1, μ_2). It follows from (<ref>) by an argument similar to the one used above in Case 1a) that the y-coordinate h_2(t) of an optimal curve h satisfies ∂ K/∂ y(u(2 t - 1))=h_2'(t), t ∈ [0,1], where we denoted u:=∂ I/∂ y (h'(1)). We have u≠ 0 since otherwise h_2 ≡ 0 and so A(h)=0, which contradicts to a>0. This gives h_2(t)=1/2u (K(0, u(2t-1)) - K(0,-u) ). Thus, an optimal curve is of the form given by (<ref>), as stated. Note that replacing u by -u does not change the endpoints of the curve h, reflecting the image of h about the midpoint of the interval joining 0 and h(1). This only changes the orientation of h (which equals the sign of u μ_1) but neither the value of the energy functional I_C nor the area of the convex hull. Therefore both curves in (<ref>) are optimal. It remains to find u using that A(h)=a. We have τμ_1^-1A(h) = 1/2 h_2(1) - ∫_0^1 h_2(t)dt, where τ is the sign of u since when say, u>0, the graph of h_2, i.e. the image of h contracted along the x-axis by factor μ_1, lies under the line segment joining 0 and h(1). Then a τ/μ_1 = 1/4u (K(0, u) + K(0,-u) - 2∫_0^1 K(0, u(2t-1)) dt ) = E'(u)/4, where E(u):=∫_-1^1 K(0,us)ds. Clearly, E is a symmetric function whose derivative vanishes at 0 and is continuous and strictly increasing on [0, ∞) by strict convexity of K(0, ·). Hence for every a>0 the above equation admits a unique positive solution u_a. Consider the random vector X_1^ε:= X_1 + √(ε) N, where ε >0 and N is a standard normal random vector on the plane independent of X_1. The distribution of X_1^ε is centrally symmetric and it satisfies ( (X_1^ε)) = ^2, therefore Theorem <ref> applies. The cumulant generating function K_ε of this vector satisfies K_ε(u)=K(u)+|u|^2/(2 ε), and its rate function I_ε is given by the infimal convolution I_ε (v)=inf{ I(v_1) + 12 ε |v_1|^2: v_1 + v_2 =v, v_1, v_2 ∈^2 } for v ∈^2; see Rockafellar <cit.>. This is the so-called Moreau–Yosida regularization of I. It holds 0 ≤ I_ε≤ I and I_ε(v) → I(v) as ε→ 0+ for any v ∈^2. Fix an a ∈ (0, a_max) and ℓ∈, and let h_ε be the minimizer in (<ref>) for X_1^ε corresponding to the curve g_α_ε, ℓ^+ with the unique α_ε given by (√( E_ε (α_ε)))'=1/√(2a), where E_ε(β):=A(K^-1_ε(β)). It is easy to see that for every real β, the convex compact sets K_ε^-1((-∞,β]) converge in the Hausdorff distance to K^-1((-∞,β]) as ε→ 0+. Hence we have pointwise convergence √(E_ε(β))→√(E(β)). Because these functions are (strictly) concave by the Brunn–Minkowski inequality, this implies locally uniform convergence of their derivatives by <cit.>. Since these derivatives are continuous and strictly decreasing, and by the assumption 1/√(2a) belongs to the interior of the range of (√(E(β)))', it follows that α_ε=[(√( E_ε))']^-1(1/√(2a)) → [(√( E ))']^-1(1/√(2a)) =: α and α>0. Hence, we have convergence of the endpoints g_α_ε, ℓ^+(t) → g_α, ℓ^+(t) for both t ∈{0,1} because all these points belong to ℓ. Moreover, by equality (<ref>), λ_α_ε, ℓ, ε^+ = √(E_ε(α_ε)) (√( E_ε (α_ε)))' →√(E(α)) (√( E (α)))' = λ_α, ℓ^+ , where the quantity in the l.h.s. of the first equality is defined as in (<ref>) with K_ε and α_ε substituted for K and α. Let now prove that g_α_ε, ℓ^+(t) → g_α, ℓ^+(t) for every t ∈ (0,1). The convex curves g_α_ε, ℓ^+ parametrize the arcs K_ε^-1(α_ε) ∩{ u ∈^2: ± u ·ℓ^≥ 0 }, which converge in the Hausdorff distance to the arc K^-1(α) ∩{ u ∈^2: ± u ·ℓ^≥ 0 } parametrized by g_α, ℓ^+. On there other hand, we have 0 ∈ K_ε^-1((-∞, α_ε)) ⊂ K^-1((-∞, α)) for all ε>0 that are small enough since α>0 and K_ε(0)=K(0)=0. Therefore, g_α_ε, ℓ^+(t) → g_α, ℓ^+(t) for every t ∈ [0,1] if and only if φ_ε(t) →φ(t) for every t ∈ [0,1], where φ(t) is the angle between g_α, ℓ^+(t) and g_α, ℓ^+(0) and φ_ε(t) is defined analogously. These angle functions are strictly monotone and satisfy φ_ε(0) = φ(0) and φ_ε(1) = φ(1)=π, therefore their pointwise convergence is equivalent to pointwise convergence of their inverses. For any angle θ∈ [0, π], consider the cone R^θ:={u ∈^2: u ·ℓ≥ |u| cosθ, u ·ℓ^≥ 0 }, and denote E^θ(β):=A(K^-1(β) ∩ R^θ) and E^θ_ε(β):=A(K_ε^-1(β) ∩ R^θ) for real β. The functions √(E^θ) and √(E^θ_ε) are strictly concave by the Brunn–Minkowski inequality. Then the same argument as above implies that for any θ∈ [0, π], we have φ_ε^-1(θ) λ_α_ε, ℓ, ε^+ = ∫_K_ε^-1(α_ε) ∩ R^θ1/|∇ K_ε(s)|σ(ds) →∫_K^-1(α) ∩ R^θ1/|∇ K(s)|σ(ds) = φ^-1(θ) λ_α, ℓ^+ as ε→ 0+; this extends (<ref>), which corresponds to θ=π. Hence φ_ε^-1(θ) →φ^-1(θ), as required. As explained above, this yields pointwise convergence of g_α_ε, ℓ^+ to g_α, ℓ^+. The derivatives of these functions converge pointwisely too because (g_α_ε, ℓ^+)'(t) = λ_α_ε, ℓ, ε^+ ∇ K_ε(g_α_ε, ℓ^+(t)) →λ_α, ℓ^+ ∇ K(g_α, ℓ^+(t)) = (g_α, ℓ^+)'(t) using that ∇ K_ε converge to ∇ K locally uniformly by <cit.>. Hence h_ε(t) → h(t) and h_ε'(t) → h'(t) for every t ∈ [0,1], where the function h is defined as in (<ref>) for the given α, ℓ, and τ =+. Finally, note that h'(t) = ∇ K(g_α, ℓ^+(t)) ⊂∇ K(K^-1(α)) ⊂∇ K (^2) ={v ∈^2: I(v)<∞}, where the last equality holds by <cit.> applied to the differentiable strictly convex function K, which is finite on ^2 (and thus is essentially smooth). This implies, using that the pointwise convergence of convex functions I_ε→ I is uniform on every compact subset of (𝒟_I) by <cit.>, that I_ε(h_ε'(t)) → I(h'(t)) for every t ∈ [0,1]. Then for any g ∈ AC_0[0,1] with A(g)=a, we have I_C(g) ≥(I_ε)_C(g) ≥(I_ε)_C(h_ε), hence by Fatou's lemma, I_C(g) ≥lim inf_ε→ 0+∫_0^1 I_ε(h_ε'(t))dt ≥∫_0^1 lim inf_ε→ 0+ I_ε(h_ε'(t))dt = I_C(h). Because h satisfies (<ref>), this means that h is a minimizer in (<ref>), as required. § APPENDIX: CONVEXIFICATIONS Here we present the planar geometric results used in our proof of Lemma <ref>. Let P be a (directed) polygonal line with vertices a_1,a_2,…, a_n passing through these points in the given order. A convex polygonal line b_1, …, b_n is a convexification of P if a_1=b_1 and there is a permutation π of {1, …, n-1} such that a_i+1 - a_i = b_π(i)+1 - b_π(i) for every 1 ≤ i ≤ n-1. This permutation can be obtained by ordering in the increasing order the angle between the vectors a_i+1 - a_i and the direction a_1-a_n if a_n ≠ a_1 or any fixed direction if a_n=a_1. Such a permutation is not unique because of the following reasons. First, taking π'(i):=π(n-1-i), that is measuring the angle in the opposite direction, always gives a different convexification of P (see Figure <ref>). We call it reverse to the one given by π; similarly, we can define the reverse of P. The convexification used in the proof Lemma <ref> corresponds to the top one in Figure <ref>. Second, P may have co-directional edges, and any permutation of these edges corresponds to a single edge in a convexification. Third, if the polygonal line is closed, one can replace π by its composition with a cyclic permutation, which corresponds to shifting the corresponding vertex of the convexification to a_1=a_n. It is easy to see that there are no other convexifications of P, and that all of them have the same area of their convex hulls. 3cm < g r a p h i c s > 2cm 3cm < g r a p h i c s > Two convexifications of a polygonal line. Let us prove the main proposition. Let P_c be a convexification of a polygonal line P. Then ( P) ≤( P_c). This proposition was stated in Böröczky et al. <cit.>, where it is also mentioned that the equality holds only if P is either convex or a self-intersecting trapezoid. The proof is by a slight modification of the one by Pach <cit.> for the isoperimetric problem for polygons. Since the proof of exactly this statement was not published we give it here for completeness of the proof of our probabilistic results. Let P be a polygonal line with vertices a_1,a_2,…, a_n. Without loss of generality we can assume that it is closed. Indeed, adding the (directed) edge a_na_1 to a convexification P_c of P results is a convexification of the closed polygonal line obtained by adding a_n a_1 to P. If P contradicts the assertion of the proposition, so does its closed modification. Furthermore, without loss of generality we may assume that P has a maximal area of its convex hull among all polygonal lines generated by permutations of edges of P. This is because permuting the edges does not change convexifications. The proof is by induction on the number of vertices n. The basis case n=4, when P is a triangle with a_4=a_1, is trivial. Denote a_0:=a_n-1. 1. Suppose that some point a_i, where 1 ≤ i ≤ n-1, is not a vertex of P. Drop this vertex a_i from P and connect the vertices a_i-1 and a_i+1 by an edge. Denote this new polygonal line by P', and let P'_c be any of its convexifications. Then (P'_c)≥ ( P')= ( P) by the assumption of induction. Replace the side P'_c corresponding to the edge a_i-1a_i+1 of P' by a pair of edges corresponding to a_i-1a_i and a_ia_i+1 taken in such an order that the added triangle (possibly degenerate) does not lie in the interior of P'_c. Denote the new polygonal line by P”, and note that its area is not less than the area of P (actually, it is strictly larger unless the triangle is degenerate). If P” is convex, then it is a convexification of P, which proves the induction step, and hence the proposition, since all convexifications of P have the same area of their convex hulls. If P” is not convex, then ( P”) > ( P), which contradicts the assumptions on P since P” is obtained by a permutation of edges of P. 2. Now, suppose all vertices of P are the vertices of its convex hull. We use the following result proved in <cit.>. Let Q be a strictly convex polygon. Suppose that the area of the triangles spanned by the vertices of Q is minimized on some vertices a,b,c. Then two sides of abc lie on the boundary of Q. Now choose a triangle a_ia_ja_k of the minimal area among the triangles spanned by the vertices of the convex hull of P. Note that by Lemma <ref>, a_i, a_j, a_k are “consecutive” vertices on the boundary of P, but they are not necessary consecutive for P. Again, drop the middle vertex a_j from P and connect the vertices a_j-1a_j+1 by an edge. Denote the new polygonal line by P' and let P'_c be any of its convexifications. Replace the side P'_c corresponding to the edge a_j-1a_j+1 of P' by a pair of edges corresponding to a_j-1a_j and a_ja_j+1 in a such an order that the added triangle lies in the exterior of P'_c. Denote the new polygonal line by P”. Again, applying the induction assumption to P', we get (P”)≥ (P'_c)+( a_j-1a_ja_j+1)≥ ( P')+( a_ia_ja_k)=(P). As in the first part of the proof, if P” is convex, then we get the required inequality. If not, then the first inequality above is strict and so ( P”) > ( P), contradicting the assumptions on P. The following result is a version of Proposition <ref> for signed areas proved by I. Fary and E. Makai in <cit.>. Recall that the signed area of a parametrized curve was defined in (<ref>). Since this quantity clearly does not depend on the parametrization and therefore is a characteristic of the image of the curve, we can consider the signed area of a (directed) polygonal line P. We will use the same notation Ã(P). Let P_c be a convexification of a directed polygonal line P. Then |Ã(P)| ≤ |Ã(P_c)|. The actual assertion of <cit.> is stated without the absolute values but we can always change the signs of Ã(P) and Ã(P_c) replacing P and P_c by their reverses (which, recall, are obtained by passing the edges in the reverse order but in the same direction). In fact, the reverse of P has the same convexifications as P. § ACKNOWLEDGEMENTS I thank Arseniy Akopyan for his tremendous help with the geometric aspect of the paper. This work was supported in part by Dr Perry James (Jim) Browne Research Centre. plain
http://arxiv.org/abs/2306.10460v1
20230618030952
Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models
[ "Ajay Jaiswal", "Shiwei Liu", "Tianlong Chen", "Ying Ding", "Zhangyang Wang" ]
cs.LG
[ "cs.LG", "cs.CV" ]
[ Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models equal* Ajay Jaiswalyyy Shiwei Liuyyy Tianlong Chenyyy Ying Dingyyy Zhangyang Wangyyy yyyUniversity of Texas at Austin Ajay [email protected] Machine Learning, ICML 0.3in ] Large pre-trained transformers have been receiving explosive attention in the past few years, due to their wide adaptability for numerous downstream applications via fine-tuning, but their exponentially increasing parameter counts are becoming a primary hurdle to even just fine-tune them without industry-standard hardware. Recently, Lottery Ticket Hypothesis (LTH) and its variants, have been exploited to prune these large pre-trained models generating subnetworks that can achieve similar performance as their dense counterparts, but LTH pragmatism is enormously inhibited by repetitive full training and pruning routine of iterative magnitude pruning (IMP) which worsens with increasing model size. Motivated by the recent observations of model soups, which suggest that fine-tuned weights of multiple models can be merged to a better minima, we propose Instant Soup Pruning (ISP) to generate lottery ticket quality subnetworks, using a fraction of the original IMP cost by replacing the expensive intermediate pruning stages of IMP with computationally efficient weak mask generation and aggregation routine. More specifically, during the mask generation stage, ISP takes a small handful of iterations using varying training protocols and data subsets to generate many weak and noisy subnetworks, and superpose them to average out the noise creating a high-quality denoised subnetwork. Our extensive experiments and ablation on two popular large-scale pre-trained models: (unexplored in pruning till date) and across multiple benchmark vision and language datasets validate the effectiveness of ISP compared to several state-of-the-art pruning methods. Additionally, we show that ISP can be easily modified with minimal overhead to produce benefits comparable to model soups, without the prerequisite to generate multiple candidates fine-tuned models. Codes are available at: <https://github.com/VITA-Group/instant_soup>. § INTRODUCTION Large-scale transfer learning has recently become show-stealer in modern deep learning, and transformer-based pre-trained models <cit.> are now achieving state-of-the-art performance for a wide array of real-world computer vision <cit.> and natural language processing <cit.> applications. With the astonishing explosion of parameter counts (millions to billions) in the past few years, while chasing performance gains, fine-tuning these large pre-trained models with non-industry standard hardware is becoming seemingly impossible, in addition to expensive inference and steep environmental cost. In the hustle of building gigantic models, a parallel and growing field of model compression has been exploring the prospects to compress these enormous models at the cost of marginal/no sacrifice in performance, effectively reducing their computational and memory footprints. Among many efforts for compressing models and accelerating inference <cit.>, network pruning eliminates unnecessary weights to generate smaller subnetworks in place of dense networks attaining similar performance, stands out as one of the most effective techniques. Lottery Ticket Hypothesis (LTH) <cit.> and its variants, reveal that dense, randomly-initialized networks contain small subnetworks which can match the test accuracy of original networks. Despite their insightful findings, there still exists a large gap in the practicality of these methods because of the fully dense training routine of IMP, which exacerbates with an increase in model capacity. Recently proposed EarlyBird routine <cit.> which attempts to draw the winning tickets early in training, and pruning at initialization techniques have shown some promise in mitigating search through an expensive and tedious iterative process, yet their effectiveness for the large-scale pre-trained network is highly under-explored or they tend to have sub-standard performance at non-trivial sparsities. In this work, we ask: Does there exist a principled and cheaper approach for fastly drawing high-quality lottery tickets in large pre-trained models within a limited computational budget, while preserving its performance and transferability? To this end, we explore the feasibility of obtaining cheap tickets for popular large pre-trained models CLIP<cit.> and BERT<cit.> constrained by no multi-pass repetitive full-training, to meet the permissible computational budget. One straightforward approach is to curtail per round training cost of iterative magnitude pruning (IMP) with early stopping, but we found that such non-careful pruning approach often produces highly variable and substandard tickets, presumably due noisy and unstable state of the network, when pruned. Our close analysis of the pruned mask generated by LTH and cheap pruning methods (one-shot pruning) found that large-scale pre-trained transformers are highly overparameterized and pruning them at trivial sparsities does not necessarily require an expensive LTH paradigm, which encourages us to start pruning nonchalantly at trivial sparsities and become scrupulous at non-trivial sparsities. In addition, recently several works <cit.> investigate the intriguing phenomenon of “model soups", and have shown that weights of multiple dense fine-tuned models can be merged together into better solutions lying in low error basins. In the sparse setting, a very recent attempt <cit.> reused the byproduct of IMP, and showed that tickets generated at each iteration of IMP could be superposed into a stronger subnetwork. However, its observations are limited to small-scale networks, and more importantly, the algorithm does not contribute to the computational efficiency of either finding lottery tickets or (re-)training networks. Motivated by these observations, we are interested to investigate: if we can leverage the soup observations to eliminate noise induced by early pruning in IMP iteration, effectively leading to the stable sparse subnetwork and reduced cost. We propose Instant Soup Pruning (ISP), a model soup-inspired perspective dedicated to generating lottery ticket quality subnetworks, using a fraction of the original IMP cost. More specifically, ISP uses a miniature random subset of training data to generate many weak and noisy subnetworks, and superpose them to average out the noise creating a high-quality denoised subnetwork. Similar to traditional IMP, ISP repeats the denoising routine following well-managed training iterations till the desired sparsity is reached, eliminating the need of IMP to perform a full pass of training before every pruning routine. Our experiments on CLIP (unexplored in pruning literature till date) and BERT across multiple datasets illustrate that ISP can find sparse subnetworks with better quality than LTH, with an affordable cost no more than a single pass of IMP. In addition to dense-to-sparse paradigm, interestingly, our ISP routine can be bluntly incorporated in dense-to-dense paradigm to incorporate the benefits of model soups in dense pre-trained models at almost negligible training cost at the pre-training stage, ultimately leading to better-fine tuning performance. Our contributions can be summarized as: ▪ We propose Instant Soup Pruning (ISP), a novel pruning strategy that seamlessly integrates the “model soup" idea to significantly reduce the computational cost of IMP. ISP replaces the expensive intermediate pruning stages of IMP with computationally-cheap weak mask generation and denoising, while outperforming IMP for large pre-trained models. ▪ ISP naturally provides a “self-denoising" ability to eliminate the necessity of generating high quality/expensive masks (presumably “good solution basin”) at each pruning stage, by instead generating multiple computationally inexpensive weak masks and averaging them out to reduce their solution noise. ▪ In the dense-to-dense paradigm, ISP can be adapted to Instant Model Soup, to inject the benefits of model soups in dense pre-trained models at marginal training cost, thereby improving fine-tuning performance comparable to model soups without the need to generate multiple fine-tuned models. ▪ Our extensive experiments on two popular large-scale pre-trained models (CLIP & BERT) across multiple benchmark vision & language datasets validate the effectiveness of ISP wrt. several SOTA pruning methods. § METHODOLOGY §.§ Revisiting LTH and Pre-trained Vision and Language Model Compression In the past few years, scaling neural networks to improve information absorption has been pivotal for good optimization and generalization performance, but this unbounded parameter growth has made them computationally expensive with excessive memory requirements. The trend undoubtedly continues with the recent forefront of transformers, where more and more layers are stacked with dense attention blocks (eg. T5 has ∼10+ billion parameters) calling for expensive computational resources and prolonged training or fine-tuning time. Recently, some work <cit.> have explored Lottery Ticket Hypothesis (LTH) to understand the parameter redundancy in the current prevailing large scale transformer models, and attempted to compress it to non-trivial sparsities by repetitive initialization-training-pruning operation. Despite their success to find high-quality compressed models on a range of downstream tasks, it is impossible to ignore the cost of finding these subnetworks, since winning tickets can only be identified by pruning unimportant connections after fully training a dense network in a conventional LTH paradigm, which worsens significantly with increasing model size. For example, <cit.> explored pruning BERT to matching subnetworks at 40% to 90% sparsity across multiple tasks, on the other hand, <cit.> explored LTH for compressing large pre-trained VL models while preserving its performance. Recently, <cit.> explored the EarlyBird <cit.> idea and proposed jointly training BERT and some sparsity-inducing coefficients which can be used to draw the subnetworks followed by fine-tuning, but its joint training step again is as expensive as normal BERT training, and experimentally we found that its performance becomes sub-standard compared to LTH in non-trivial sparsity range. In this work, we propose a novel pruning strategy based on strong insights of the inherent benefits of gigantic size and model soups, which can be equivalent to or even better than LTH and its variant LTH-Rewind. We, for the first time, explore model compression for a recent extremely popular open vocabulary network CLIP along with BERT, to illustrate our approach benefits by using merely the computational cost equal to a single pass of conventional LTH. §.§ Instant Soup Pruning: A novel cost-effective pruning perspective In this section, we introduce a novel pruning algorithm, named Instant Soup Pruning (ISP) which primarily aims to reduce the computational overhead of conventional LTH while searching for lottery tickets in large-scale pre-trained transformers, facilitating benefits from both performance and computation perspective. ISP is motivated from the following three observations: * Firstly, large-scale pre-trained transformers are highly over-parameterized, and pruning them at trivial (eg. 10%, 20%, etc. depending on task and model size) sparsities does not require sophisticated pruning methods like LTH or LTH-Rewind to get high-quality sparse subnetworks. We surprisingly observed that at trivial sparsities, the sparse mask generated by LTH and cheap one-shot magnitude pruning is significantly similar which thereby reflects in the test performance of the subnetworks. For example, Figure <ref>(a) illustrates the performance of subnetworks obtained by LTH and one-shot pruning on CLIP () and Figure <ref>(b) illustrates the cosine similarity between the binary prune mask identified by LTH and one-shot pruning. It can be clearly observed that at trivial sparsities such as 10%, 20%, and 30%, both unreasonably cheap one-shot pruning and expensive LTH identify approximately similar masks with 96.27%, 98.75%, and 94.32% cosine similarity score. It conveys a strong message to save the computation budget of full training passes of LTH, which are seemingly unnecessary. * Secondly, Early-Bird <cit.> tickets, although limited to small architectures (ResNets, Vgg16, etc), conveyed a strong yet highly overlooked message that high-quality tickets can emerge at a very early training stage by pruning networks trained at much earlier points (before the accuracies reach their final top values). Recently, <cit.> showed that this observation holds true for BERT, but it cannot recover the full performance of LTH. To this end, complementary to our first observation, our work extends the early-bird findings by proposing to look more carefully while searching tickets at non-trivial sparsity (high sparsity regime) compared to trivial sparsity by progressively increasing training steps over each call to pruning routine in the mask finding stage. * Lastly, to mitigate the issue of pre-mature pruning to generate sub-standard pruning masks, we borrow inspiration from the intriguing phenomenon of “model soups”, which illustrate weights of large-scale independently fine-tuned models can be merged together into a better solution. Our work proposes a novel approach of mask soups by superposing multiple multiple cheap pruning masks, to attenuate the noise within them due to the presumably unstable state of the network while pruning, giving a high-quality denoised pruning mask. Algorithm Overview Consider a dense, pre-trained network f(x;θ), as shown in Figure <ref>, LTH trains f to achieve minimum validation loss f_loss using E epochs with a test accuracy f_acc, when optimized with Adam optimizer on a training dataset D. Once, f_acc is achieved, the fine-tuned network f(x;θ_E) is pruned using magnitude pruning to generate a subnetwork f(x; m ⊙θ_E), with a mask m ∈{0,1}. This process is repeated for k iterations till the desired sparsity S% is achieved, generating a subnetwork f_LTH(x;m ⊙θ_𝒪(k · E)) with accuracy f^LTH_acc. In contrast, our proposed approach ISP aims to generate f_ISP(x;m'⊙θ_𝒪(E)) with accuracy f^ISP_acc, such that f^ISP_acc≥ f^LTH_acc with sparse mask m' ∈{0,1} and sparsity S%. Given the computational budget of E epochs, which translates to T steps with batch size B, ISP is composed of two distinct phases: mask generation phase and fine-tuning phase which uses M and (T-M) steps respectively to produce a high-quality fine-tuned subnetwork with desired sparsity S%, usually outperforming LTH. §.§.§ Mask Generation Stage We will first discuss our novel, computationally efficient, and high-quality sparse mask generation steps for ISP incorporating the aforementioned motivation. Note that similar to IMP, ISP is also an iterative train-prune-retrain procedure, but it uses a highly optimized train/re-train subroutine starting from the parameter state obtained from the previous iteration. Given the computational budget of M steps, the mask generation stage of ISP is carefully designed to start in a relaxed fashion (spending few learning steps) while pruning at trivial sparsities and gradually become meticulous while approaching non-trivial sparsity regime, acknowledging the sensitivity of pruning while operating in high sparsities. More specifically, we introduce a hyperparameter t, which is defined as a small initial seed step count usually equates to the number of steps required to look ∼10% of training data with batch size B, before the first call to pruning routine. At i-th call to the pruning routine, we use t × (i + 1) training steps for calibrating (training) the network before pruning. For example, at the 0-th iteration when the network is dense, ISP uses steps equivalent to 10% of training data while at 5-th iteration, it uses 60% of training data in the re-training. To ensure that ISP does not produce sub-standard quality mask at each pruning iteration, we next provide details of our novel denoised pruning routine which is inspired by recently proposed “model soups" phenomenon, to average out noise induced due to pre-mature pruning. Denoised Pruning: Recently, several works <cit.> have validated the intriguing phenomenon of model soups, which suggest training multiple models with various hyperparameters and average the weights of models fine-tuned independently at no cost to achieve comparatively high performance. Motivated by their argument that fine-tuned models optimized independently from the same pre-trained initialization lies in the same basin of the error landscape and averaging them improves generalization, we are tempted to ask an unexplored question: Can we generate extremely cheap pruning masks using varying hyperparameters and average them out to improve the quality? To this end, we propose a novel Denoised Pruning Procedure, which explores the superimposition of computationally cheap pruning mask obtained by magnitude-based one-shot pruning of the network with marginal look-ahead training. Algorithm 2 illustrates the details of our denoising procedure which perform look-ahead training of the network using varying training protocol for a random subset of training samples, merely using 10-100 training steps, to generate N candidate binary masks. In order to superimpose them, we found that a simple union of these N binary masks is sufficient to improve the quality, facilitating ISP to achieve comparable/even better performance than expensive LTH. We hypothesize (and later experimentally validate) that our denoising procedure significantly helps eliminate any induced noise due to pre-mature pruning during ISP iterations, specifically at non-trivial sparsities. §.§.§ Fine-tuning Stage Our proposed method (ISP), is a single pass pruning algorithm that generates a high-quality fine-tuned subnetwork with desired sparsity S% using the computational budget equivalent to a single pass of LTH (T steps). As mentioned before, ISP is composed of the mask generation phase (M steps) followed by fine-tuning the obtained subnetwork for the remaining (T-M steps). Note that, unlike conventional LTH, ISP does not restart the network training from the initial pre-trained weight; instead, as explained in Algorithm 1, it simply fine-tunes the network state (unchanged optimizer, learning rate, etc.) obtained immediately after pruning S% of parameters for (T-M) steps. §.§ Instant Model Soup: A sparsity-inspired extension to dense training In the conventional setting, to improve the generalization performance of a model, it is highly recommended to train multiple models and use their ensemble on the test set, but it comes at a heavy inference and training cost. Recently, several works <cit.> have illustrated that, unlike conventional ensembles, the weights of multiple fine-tuned large pre-trained models can be merged together by simply averaging their weights (aka. model soup) to beat the performance of ensembles without incurring any additional inference or memory cost. While this approach effectively reduces the additional inference overhead of ensembles, it still requires expensive fine-tuning of multiple large pre-trained models with varying hyperparameters. Inspired by our idea of Instant Soup Pruning, which illustrates that binary masks originated from cheap training with different training protocols can be superimposed/denoised to improve quality, we are enticed to explore: Can we denoise the initial pre-trained weights of dense large transformers using sparse cheap training to inject the model soup benefits early during fine-tuning, eliminating the need to generate multiple fully fine-tuned models for model soups? Our primary objective is to ripe the benefits of model soups without the need to generate multiple fully fine-tuned models, thereby contenting the hurdle of high training overhead of model soups. Algorithm <ref> provide details of the pseudocode of our new Instant Model Soup (IMS) approach that uses cheap sparse training for injecting model soup benefits at marginal cost early before fine-tuning to save the hurdle of multiple finetuning and averaging. More specifically, IMS first creates multiple subnetworks with varying sparsities from the pre-trained dense model and train them independently for a few iterations (∼100 iterations) using different hyperparameter configuration, and data subsets. Next, all the weakly trained subnetwork weights are merged together with the initial pre-trained model weights using linear interpolation following <cit.>. Our proposed usage of sparse subnetworks for denoising significantly reduces the computational cost of the dense training step, thereby making it more efficient. Our experiments on CLIP () and BERT- impart an interesting finding that pre-trained models denoised by IMS can achieve performance comparable to model soups, without expensive full fine-tuning and then averaging. § EXPERIMENTS AND ANALYSIS §.§ Network, Dataset, and Settings In our experiments, we adopted the official CLIP implementation provided by <cit.> as our starting point for our experiments using pre-trained vision transformer <cit.> () models. For fine-tuning CLIP, we use the frozen final classification layer output by CLIP's text tower to eliminate the necessity of introducing any learnable parameters while in the case of BERT, we add a final task-specific classification layer(∼ 3% of total parameter count). For our BERT-related experiments, we use the HuggingFace <cit.> pre-trained weights of BERT_ transformer blocks and hidden state size 768. Additional necessary details of our hyperparameters required for fine-tuning are provided in Table <ref>. Note that during pruning, we prune the key trainable part of the network (key, query, value, dense) ignoring embeddings for simplicity. We consider a diverse set of image classification tasks from <cit.>: and downstream NLP tasks from GLUE <cit.> benchmark: ; to thoroughly investigate the effectiveness of our proposed approaches wrt. state-of-the-art pruning benchmarks. In addition to Lottery tickets, we have compared Instant Soup Pruning (ISP) against several recently proposed pruning methods such as Lottery Pools <cit.>, Early bird <cit.>, two popular pruning at initialization methods (SNIP <cit.>, GraSP <cit.>), Progressive pruning (Iterative pruning and training), and one-shot magnitude pruning. §.§ Performance comparison of Instant Soup Pruning wrt. SOTA pruning methods In this section, we conduct a systematic and extensive study to understand the performance benefits of our proposed Instant Soup Pruning in terms of fine-tuning accuracy vs. pruning ratios by comparing against multiple state-of-the-art pruning methods. We first test our approach on recently proposed CLIP (ViT-B32) <cit.> (unexplored for pruning till date) pre-trained with contrastive supervision from image-text pairs. For effective comparison and simplicity, all baselines and our proposed approach ISP are trained with similar optimizer and training settings provided in Table <ref>. Our EarlyBird <cit.>, SNIP <cit.>, and GraSP <cit.> baselines closely follow the implementation provided by their official GitHub repositories. In addition, to distinguish ISP from conventional progressive pruning, our progressive pruning baseline is implemented as periodic pruning using magnitude-based pruning followed by retraining. Lottery pools <cit.> is an interesting way to merge LTH by-product tickets. To ensure that the sparsity ratio remains comparable to ISP, we further prune the merged tickets obtained by pooling to the required sparsity, for reporting performance. Our results for CLIP are summarized in Table <ref>. We first observe that among all baselines for CLIP compression, LTH consistently performs better, and rewinding helps in further improving its performance. We found that among recent pruning at initialization methods (SNIP and GraSP), SNIP has comparatively better performance than GraSP and they tend to be slightly better than one-shot magnitude pruning. It can be clearly observed that ISP can beat expensive LTH (including rewinding) as well as all other baselines for almost all benchmark datasets and sparsity ratios. Very interestingly, for CIFAR-10 and CIFAR-100, we found that performance improvement of ISP increases with the sparsity ratio. For example, ISP surprisingly outperforms LTH-rewind by ∼3.92 and ∼5.38% on CIFAR-10 and CIFAR-100 respectively, while consuming merely fine-tuning cost equivalent to one single pass of LTH without the necessity of bookkeeping the rewinding weights of LTH-rewind. Our experimental results for BERT-base are summarized in Table <ref>. For BERT-related experiments, we have replicated the setting in <cit.> and reported the performance of ISP across various GLUE benchmarking datasets at the sparsity level where LTH is able to identify winning tickets. Note that ISP is able to comfortably outperform LTH across 8 out of 9 tasks (noticeably for QNLI where ISP beat LTH by >1%). §.§ Analysis of Denoising Iterations in ISP Our proposed approach ISP is augmented by a novel idea of mask denoising, which explores the superimposition of computationally cheap pruning mask obtained by magnitude-based one-shot pruning of the network with marginal look-ahead training. In this section, we try to investigate the implication of our denoising iterations in improving ISP performance. Table <ref> summarizes the performance comparison of ISP with/without the mask denoising while keeping the training settings exactly the same. Across both candidate architectures (CLIP and BERT), it can be clearly observed that ISP performance is significantly boosted by replacing the simple one-shot pruning with our denoise pruning routine. In addition, we also investigated how the number of denoising iterations will impact the ISP performance (see Table <ref>) and found that 4-5 denoising steps are sufficient for the denoised pruning, and increasing them beyond that does not provide a very noticeable performance gain. For consistency, in CLIP-related experiments, we have used 5 denoising iterations while for BERT, our results are reported using 4 denoising iterations. §.§ Understanding the benefits of Instant Model Soups for pre-trained models In this section, we discuss the benefits of our sparsity-inspired extension, Instant Model Soup (IMS), and experimentally validate its surprising ability to improve the quality of pre-trained models at marginal cost. Unlike model soups, IMS provides a unique opportunity to eliminate the requirement to generate multiple fully fine-tuned models to average, thereby restricting the computational complexity equivalent to the cost of fine-tuning a single model. Table <ref> illustrates the performance comparison of IMS with respect to two model soup variants (uniform and greedy) proposed in <cit.>. Note that uniform and greedy soups results are generated using the amalgamation of 8 independent models fine-tuned till the final accuracy with different hyperparameters. Our experiments across CLIP and BERT illustrate that by carefully fine-tuning IMS, it is surprisingly possible to comfortably beat the model soup variants significantly. The denoised pre-trained model generated by IMS has the ability to converge to equivalent (even better) performance than model soups. Adhering to the theme of ISP, IMS also conveys a strong message that it is not necessarily important to wait till model convergence to ripe the benefits of soup, but astonishingly soup benefits are available to ripe early during the fine-tuning at a marginal computational cost. § RELATED WORK Linear interpolation of neural network weights has recently achieved significant attention, but due to numerous nonlinear activations within a neural network, it is still debatable if linearly interpolating between two sets of weights can result in a high accuracy solution. Recently, <cit.> have studied the interpolation of deep networks and validated performance benefits when training starts from a common initialization or some segment of the optimization trajectories are shared. While <cit.> focused on mergability in the case of models trained on a single task, <cit.> found that weight interpolation can not only benefit fine-tuning tasks but also under distribution shift. More specifically, they average zero-shot and fine-tuned models, finding improvements in- and out-of-distribution. Recently, <cit.> used Fisher-weighted averaging of language models before and after fine-tuning on downstream tasks. They merged models with the same pre-trained initialization that are fine-tuned on different text classification tasks. In the late phases of training, <cit.> studied making copies of a subset of the neural network parameters and proposed to independently optimize them, followed by averaging. Moreover, <cit.> proposed to average fine-tuned models across independent runs with hyperparameter diversity, modifying all the weights of the network, and showing significant performance benefits. In addition to model weight averaging, <cit.> explored the idea of model stitching, where given two trained and frozen models A and B, a “stitched model" formed by connecting the bottom-layers of A to the top-layers of B, with a simple trainable layer between them. <cit.> studied deep ensembles which have empirically shown promise for improving the accuracy, uncertainty and out-of-distribution robustness of deep learning models. § CONCLUSION In this work, we introduced Instant Soup Pruning, a model soup-inspired perspective dedicated to generating LTH quality subnetworks, using a fraction of the original IMP cost. ISP is augmented by a denoising pruning module which helps in replacing the expensive intermediate pruning stages of IMP with computationally efficient weak mask generation and aggregation routine. Additionally, we present Instant Model Soup, which provides an opportunity to inject the benefits of model soups in dense pre-trained models at marginal training cost, thereby improving fine-tuning performance comparable to model soups. Our future work will aim for a more theoretical understanding of the role of our denoisers in providing experimental benefits. § ACKNOWLEDGEMENT The research is based upon work supported in part by the Intelligence Advanced Research Projects Activity (IARPA) under Contract No. 2022-21102100004. We also acknowledge support from the National Science Foundation AI Center Institute for Foundations of Machine Learning (IFML) at the University of Texas at Austin. icml2023
http://arxiv.org/abs/2306.11476v1
20230620120022
A Model Fusion Distributed Kalman Filter For Non-Gaussian Observation Noise
[ "Xuemei Mao", "Gang Wang", "Bei Peng", "Jiacheng He", "Kun Zhang", "Song Gao" ]
eess.SP
[ "eess.SP" ]
1]Xuemei Mao 2]Gang Wang 1]Bei Pengcor [email protected] 1]Jiacheng He 1]Kun Zhang 1]Song Gao [cor]Corresponding author [1]organization=The School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, addressline=Sichuan, city=Chendu, postcode=611731, country=China [2]organization=The School of Information and Communication Engineering, University of Electronic Science and Technology of China, addressline=Sichuan, city=Chendu, postcode=611731, country=China The distributed Kalman filter (DKF) has attracted extensive research as an information fusion method for wireless sensor systems(WSNs). And the DKF in non-Gaussian environments is still a pressing problem. In this paper, we approximate the non-Gaussian noise as a Gaussian mixture model and estimate the parameters through the expectation-maximization algorithm. A DKF, called model fusion DKF (MFDKF) is proposed against the non-Gaussain noise. Specifically, the proposed MFDKF is obtained by fusing the sub-models that are built based on the noise approximation with the help of interacting multiple model (IMM). Considering that some WSNs demand high consensus or have restricted communication, consensus MFDKF (C-MFDKF) and simplified MFDKF (S-MFDKF) are proposed based on consensus theory, respectively. The convergence of MFDKF and its derivative algorithms are analyzed. A series of simulations indicate the effectiveness of the MFDKF and its derivative algorithms. Wireless sensor networks Information fusion Distributed Kalman filter Gaussian mixture model Expectation maximization Interacting multiple model § INTRODUCTION Wireless sensor networks (WSNs) are widely used in technical applications, such as environmental monitoring, target tracking, and multiagent system <cit.>. Fusion of multiple sensor data for state estimation is one of the fundamental functions of WSNs that has sparked significant study due to its enhanced performance and practicality <cit.>. Centralized Kalman filter (KF) is a straightforward fusion architecture that collects data from all sensor nodes into a central node for processing <cit.>. Centralized KF has the benefit of being able to use all available information to obtain the best estimation in the minimum mean square error (MMSE) sense. Its burdensome computational and communication requirements make it challenging to expand, which is a drawback. And when a node fails, the estimations of the entire network are adversely affected. To solve the problems, distributed Kalman filter (DKF) is proposed for complex sensor networks <cit.>. The distributed architecture sets an independent local Kalman filter (LKF) at each node and uses point-to-point communication with neighboring nodes only. This enhances the scalability, robustness, and flexibility of the system. Moreover, consensus is an important issue, especially for collaborative sensor networks <cit.>. Therefore, some consensus algorithms are applied to the distributed architecture to further improve the system's robustness by reducing the difference between the estimations of each node <cit.>. The above studies are conducted assuming that the system noise satisfies the Gaussian distribution, and most of them are based on the MMSE criterion. In practical applications, the observation noise often presents non-Gaussian characteristics due to the complex environment, disturbances, and other uncertainties, <cit.>. It leads to poor robustness of the DKF state estimation based on the MMSE criterion <cit.>. To address the issue, a lot of research focuses on state estimation in non-Gaussian environments. Particle filtering is regarded as an effective approach to solve the state estimation of nonlinear and non-Gaussian systems, which can track the state of any noise distribution <cit.>. Some distributed particle filters (DPFs) are proposed based on different fusion methods <cit.>. The common challenge of DPFs is the heavy computational burden generated by the massive number of particles. In <cit.>, the DKF algorithms are proposed by assuming that the process and observation noise obey the Student’s t distribution, which has less computation complexity. Such methods perform well in heavy tail noise, but in complex non-Gaussian environments, their estimation accuracy degrades due to fixed parameters of Student's t distribution. Recently, methods in information-theoretic learning (ITL) are introduced to the state estimation under the non-Gaussian environment <cit.>. The DKF algorithms in <cit.> are proposed based on the maximum correntropy criterion (MCC) <cit.>. When the observation noise is non-Gaussian, they outperform the conventional DKF (CDKF) for it can reflect the higher-order moment information of the error data and adjust the estimation correspondingly. This kind of DKF based on ITL can flexibly adapt to different non-Gaussian noise environments. In ITL, the minimum error entropy (MEE) <cit.> criterion is better than MCC in dealing with complicated non-Gaussian noise. The distributed minimum error entropy Kalman filter (DMEEKF) <cit.> is proposed based on the MEE criterion to promote the algorithm's robustness in a non-Gaussian environment. Nevertheless, their performance may degrade when exposed to strongly impulsive noise. In background-impulse Kalman filter (BIKF) <cit.>, another solution for non-Gaussian noise is proposed. It approximates the complex non-Gaussian noise as a combination of a small Gaussian noise and a large one, which offers a more accurate noise estimation for the KF. The excellent performance of BIKF under the non-Gaussian environment proves that the noise obeys non-Gaussain distribution can be well approximated as a combination of Gaussian components. It inspires us to model the observation noise as several Gaussian components with different variances. To enhance the estimation accuracy of DKF applied in some WSNs, the contribution of this work is shown as follows. * In this work, a model fusion DKF (MFDKF) is proposed and its computational complexity is analyzed. We approximate the observation noise as a Gaussian mixture model (GMM) <cit.> and estimate its parameters utilizing the expectation maximization (EM) algorithm <cit.>. Thus, the observation model of LKF at each node can be built as composed of sub-models with different fusion noises and probabilities. With the help of interacting multiple model (IMM) <cit.>, the sub-models are fused, which improved the estimation of DKF against the non-Gaussian noise. * To be applicable to some special WSNs, two derived algorithms of MF-DKF are proposed by consensus theory. Some WSNs, such as multi-agent systems have high requirements for node consensus. Some WSNs have limitations in communication due to the specific working environment or other unavoidable reasons, such as underwater WSNs. Consensus MFDKF (C-MFDKF) and simplified MFDKF are proposed for consensus enhancement and communication reduction, respectively. * Some analyses are given to illuminate the performance of the proposed MFDKF and its derived algorithms, including mean error analysis and mean-square error analysis. The rest of this paper is shown as follows. Section <ref> gives some preparatory knowledge of MFDKF and explains the design motivation. Section <ref> covers the implementation of the MFDKF algorithm and the analysis of computational complexity. Two derived algorithms of MFDKF called C-MFDKF and S-MFDKF are presented in Section <ref>. The performance analysis of the algorithms is presented in Section <ref>. Simulations and conclusions are revealed in Section <ref> and Section <ref>, respectively. § PRELIMINARIES §.§ Analysis of DKF A linear system can be described as x( k ) = A( k - 1)x( k - 1) + w( k - 1), z_n( k ) = H_n( k )x( k ) + v_n( k ). Here, x( k ) ∈ℝ^p × 1 is the unknown state vector at time k, z_n( k ) ∈ℝ^q × 1 is the observation vector of node n, which is available in the moment k. The known A( k ) ∈ℝ^p × p and H_n( k ) ∈ℝ^q × p are severally stand for the state transition matrix and the observation matrix of node n. Supposing w( k ) ∈ℝ^p × 1 and v_n( k ) ∈ℝ^q × 1 are uncorrelated white Gaussian noises, their corresponding covariance matrices are Q( k ) and R_n( k ), respectively. The point of DKF is to take the information from neighboring nodes into consideration. Assume there is a node 1 in the WSN, whose neighbors are node 2,...,n. Define the following matrix [ z_c( k ) = col( z_1( k ),z_2( k ),...,z_n( k )),; H_c( k ) = col( H_1( k ),H_2( k ),...,H_n( k )),; v_c( k ) = col( v_1( k ),v_2( k ),...,v_n( k )), ] where col(·) is an operator that put all the elements into a column vector to formulate a new matrix. Then, a fusion LKF observation model of node 1 can be described as z_c( k ) = H_c( k )x( k ) + v_c( k ), and the covariance matrix of fusion observation noise v_c( k ) is obtained by R_c( k ) = diag( R_1( k ),R_2( k ),......,R_n( k )), where diag(·) is an operator to create a diagonal matrix. By applying the conventional KF to model (<ref>) and (<ref>), the iterations of LKF is summarized as x̅( k ) = A( k - 1)x̂( k - 1), P( k ) = A( k - 1)M( k - 1)( A( k - 1))^T + Q( k - 1), S_c( k ) = H_c( k )P( k )( H_c( k ))^T + R_c( k ), K_c( k ) = P( k )( H_c( k ))^T( S_c( k ))^ - 1, v̅_c( k ) = z_c( k ) - H_c( k )x̅( k ), x̂( k ) = x̅( k ) + K_c( k )v̅_c( k ), M( k ) = ( I - K_c( k )H_c( k ))P( k ). Where, M( k ) is the error covariance matrix of state estimation x̂( k ) and P( k ) is the error covariance matrix of one step prediction x̅( k ). ( ·)^T represents the transpose operation. Utilizing equation (<ref>), (<ref>) and the matrix inversion lemma, equation (<ref>) is rewritten as x̂( k ) = αx̅( k ) + βz_c( k ), α = M( k )( P( k ))^ - 1, β = M( k )( H_c( k ))^T( R_c( k ))^ - 1. From equation (<ref>) we know that the estimation of state can be simplified as a combination of the one-step prediction x̅( k ) and the fusion observation z_c( k ). Equation (<ref>) and equation (<ref>) show that the combination coefficient α and β are dependent on the error covariance matrix of one step prediction P( k ) and the covariance matrix R_c( k ). The above analysis reveals the principle of DKF. A large covariance matrix of observation noise suggests an inaccurate observation and should be weighted less in the state estimation. Similarly, a small covariance matrix of observation noise means a precise observation and should be weighted more in the state estimation. The accuracy of state estimation suffers in CDKF because the covariance matrix of observation noise is unchanging and its value deviates substantially from the true value when impulsive noise exists. §.§ Non-Gaussian Observation Noises While the observation noises are non-Gaussian, the performance of CDKF decreases due to the inaccurate R_n( k ). In other words, the statistic R_n( k ) used to calculate the weight matrix β differs greatly from the true value, which results in the poor state estimation of CDKF. Take the mixed Gaussian noise shown in the Fig. <ref> as an example, it obeys the distribution v ∼ 0.9 N( 0 ,0.01^2) + 0.1 N( 0 ,100^2 ). The statistic valve of the covariance matrix is R_n( k ) ≈10^3, which is larger than 0.01^2 and smaller than 100^2. When there is no impulsive noise occurs, the observation is reliable, and the weight in the matrix β should be large, but a relatively smaller weight is used due to the relatively large covariance matrix of observation noise R_n( k ). When the impulsive noise occurs, the observation becomes inaccurate, and the weight in the matrix β should become smaller, but remains the relatively larger one due to the unchanged covariance matrix of observation noise R_n( k ). In a word, as the inaccurate R_n( k ) is used, the performance of the CDKF is poor. In the DMCKF <cit.> and DMEEKF <cit.>, the error between observation and predicted observation v̅_n( k ) (the calculation method refers to equation <ref>) are used to detect the occurrence of impulsive noise and adjust the weights accordingly. The issue is that even when no impulsive noise occurs and no weight adjustment is required, the weights are inaccurate. The reason is that the statistical estimation of the covariance matrix R_n( k ) (which is used to calculate the weights) is inaccurate due to the presence of low-probability impulsive noise. Therefore, a more accurate covariance matrix of observation noise is expected in the non-Gaussian environment. It motivates us to approximate the non-Gaussian noise as a combination of several Gaussian components, which can help us to estimate the covariance matrix of observation noise in the non-Gaussain environment. §.§ Approximation of non-Gaussian noise The GMM is a linear combination of Gaussian noise, it allows the overall distribution of the observation noise to be divided into several Gaussian components. The probability density function is p( v( k )) = ∑_i = 1^κγ ^if( v( k )|μ^i( k ),R^i( k )) ,∑_i = 1^κγ ^i = 1, f( v( k )) = exp( 1/2( v( k ) - μ^i( k ))^T( R^i( k ))^ - 1( v( k ) - μ^i( k )))/( 2π)^D/2| R^i( k )|^1/2, where v( k ) is a D dimensional noise vector, μ^i and R^i( k ) severally denote the mean and covariance matrix of the i-th component. γ ^i is the probability of the component i and | ·| is the operator that calculates the determinant of a matrix. Suppose there are ρ samples, the GMM parameters can be obtained from the following EM algorithm. * Initialize γ ^i( t ), μ^i, and R^i( k ), at t=0. * Expectation step w ^i( k ) = γ ^i( t )f( v( k )|μ^i( t ),R^i( t ))/∑_j = 1^κγ ^j( t )f( v( k )|μ^j( t ),R^j( t )), * Maximization step γ ^i( t + 1) = 1/ρ∑_k = 1^ρw^i( k ) , μ^i( t + 1) = ∑_k = 1^ργ ^i( k )v( k )/∑_k = 1^ργ ^i( k ), R^i( t + 1) = ∑_k = 1^ρw^i( k )( v( k ) - μ^i( t ))( v( k ) - μ^i( t ))^T/∑_k = 1^ρw^i( k ), * Go back to step 2 until convergence. In this paper, v( k ) is the samlpe of observation noise, k=1,2,...,ρ. κ is the number of the Gaussian components, i.e. after the EM algorithm, the observation noise of each node can be divided into κ different Gaussian components. The covariance matrices of the Gaussian components at node n are denoted by [ {R_n^1,R_n^2,...,R_n^κ},; R_n^1 < R_n^2 < ... < R_n^κ . ] And their probabilities are denoted as {γ _n^1,γ _n^2,...,γ _n^κ},∑_i = 1^κγ _n^i = 1. § PROPOSED MODEL FUSION DISTRIBUTED KALMAN FILTER In this part, a model fusion DKF is designed. We first approximate the observation noise of each node as a GMM, and the specific parameters of the model are obtained by the EM algorithm described in Section <ref>. Then, the DKF observation model of node n can be approximated as a fusion of multiple sub-models. With the help of the IMM, a MFDKF is obtained by fusing sub-estimations given by sub-models. Besides, we modify the algorithm to address the anomalies in the application of the IMM method. Finally, the computational complexity of MFDKF is discussed. §.§ Proposed MFDKF Since the noise of each node is divided into κ Gaussian components, the observation model of each node can be divided into κ observation sub-models. Next, the observation model of DKF needs to be considered, which is obtained by fusing the observation models of each node and its neighbors. Akin to research in <cit.>, a WSN with N nodes can be modeled as an undirected graph G. Neighboring nodes refer to nodes that can communicate with each other. The adjacency matrix A = [ a_mn] ∈ℝ^N × N is used to describe the neighbouring relations between nodes. If node m and node n can communicate with each other, then a_mn = 1, otherwise a_mn = 0. Self-edge is permitted, i.e., a_nn = 1. Define the degree of node n as d_n = ∑_n = 1^N a_nm representing the number of edges associated with node n. The LKF observation model of node n is represented as Y_n( k ) = C_n( k )x( k ) + U_n( k ), where, [ Y_n( k ) = [ y_1( k ),y_2( k ),...,y_d_n( k )]^T,; C_n( k ) = [ c_1( k ),c_2( k ),...,c_d_n( k )]^T,; U_n( k ) = [ u_1( k ),u_2( k ),...,u_d_n( k )]^T, ] the covariance matrix of fusion observation noise U_n( k ) is denoted as B_n( k ) = diag( β_1( k ),β_2( k ),...,β_d_n( k )). The element in the above set is obtained from the neighbors of node n by the following operations. * Initialize: m=1,t=1. * Find the neighbor of node n: if a_mn=1, then [ y_t( k ) = z_m( k ),; c_t( k ) = H_m( k ),; u_t( k ) = v_m( k ),; β_t( k ) = R_m( k ),; m = m + 1,t = t + 1, ] else m = m + 1. * If m ≤ N , return to step( <ref>), else finish the operations. Through the EM algorithm, each noise u_t( k ) in the fusion observation model is divided into κ Gaussian components, which build the LKF observation sub-model as [ Y_n^j( k ) = C_n( k )x( k ) + U_n^j( k ),; U_n^j( k ) = [ u_1^j1( k ),u_2^j2( k ),...,u_d_n^jd_n( k )]^T, ] and the covariance of the j-th fusion noise U_n^j( k ) is B_n^j( k ) = diag( β_1^j1( k ),β_2^j2( k ),...,β_d_n^jd_n( k )). The symbols j1,j2,...,jd_n are used to denote the Gaussian component of the observation noise, and its value is an integer ranging from 1 to κ. For instance, u_n^jt( k ) represents the jt-th Gaussian components of the noise u_n( k ), whose covariance matrix and probability are β_n^jt( k ) and γ _n^jt( k ), respectively. Y_n^j( k ) is the j-th LKF observation sub-model of node n. It's necessary to note that the Gaussian noise u_n^jt( k ) is not a real noise but a component obtained by approximating the real noise as GMM. Its covariance matrix and probability have been calculated by the EM algorithm. Accordingly, the LKF observation sub-model is not the true DKF observation model but a component acquired by fusing the observation sub-models of the node and its neighbors. The IMM is a method that can deal with a system that has multiple models. Here, the observation model of LKF is approximated as multiple sub-models. In order to deal with these observation sub-models separately, the IMM is applied to the model (<ref>) and (<ref>). The number of LKF observation sub-models of node n is L_n = κ^d_n. The probabilities of the observation sub-models are calculated by [ α _n^j = γ _1^j1γ _2^j2...γ _d_n^jd_n,; j = 1,2,...,L_n. ] The Markov transition probability matrix of node n is, P̃_n = [ [ α _n^1 α _n^2 ... α _n^L_n; α _n^1 α _n^2 ... α _n^L_n; ... ... ... ...; α _n^1 α _n^2 ... α _n^L_n ]]. The element of P̃_n denoted as P̃_n^ij is the transition probability of node n from sub-model i to sub-model j. Then, the steps of MFDKF run at each node are summarized as * Initialization: [ χ _n^i( k - 1) = α _n^i,; x̂_n^i( k - 1) = x( 0 ),; M_n^i( k - 1) = M( 0 ), ] where χ _n^i( k - 1) is the probability of the sub-model i at the instant k-1. * Calculate the fusion probability: [ χ _n^ij( k - 1) = P̃_n^ijχ _n^i( k - 1)/c̅_n^j,; c̅_n^j = ∑_i = 1^L_nP̃_n^ijχ _n^i( k - 1) . ] * Prepare the j-th fusion state based on sub-model j: x̂_n^pre_j( k - 1) = ∑_i = 1^L_nx̂_n^i( k - 1)χ _n^ij( k - 1) . * Prepare the error covariance matrix of the j-th fusion state: [ M_n^pre_j( k - 1) = ∑_i = 1^L_nχ _n^ij( k - 1){M_n^i( k - 1). +; . [ x̂_n^i( k - 1) - x̂_n^pre_j( k - 1)][ x̂_n^i( k - 1) - x̂_n^pre_j( k - 1)]^T}. ] * Perform the conventional KF for each sub-model: x̂_n^j( k ) = x̅_n^j( k ) + K_n^j( k )U̅_n^j( k ), where [ x̅_n^j( k ) = A( k - 1)x̂_n^pre_j( k - 1),; U̅_n^j( k ) = Y_n^j( k ) - C_n( k )x̅_n^j( k ),; K_n^j( k ) = P_n^j( k )( C_n( k ))^T( S_n^j( k ))^ - 1,; P_n^j( k ) = A( k - 1)M_n^pre_j( k - 1)( A( k - 1))^T + Q_n( k ),; S_n^j( k ) = C_n( k )P_n^j( k )( C_n( k ))^T + B_n^j( k ). ] * Obtain the error covariance of x̂_n^j( k ): M_n^j( k ) = [ I - K_n^j( k )C_n( k )]P_n^j( k ). * Calculate the likelihood function matched to sub-moel j: Λ _n^j( k ) = c̅_n^j/√(2π| S_n^j( k )|)exp( - 1/2( U̅_n^j( k ))^T( S_n^j( k ))^ - 1U̅_n^j( k )). * Update the model probability: χ _n^j( k ) = Λ _n^j( k )/∑_i = 1^L_nΛ _n^i( k ). * Obtain the fusion state estimation of node n: x̂_n( k ) = ∑_j = 1^L_nχ _n^j( k )x̂_n^j( k ) . §.§ Modification of the MFDKF In the steps above, the likelihood function calculated from equation (<ref>) may go to zero, when most of the neighboring nodes are subjected to strongly impulsive noise. As a result, the denominator in equation (<ref>) tends to be zero, which leads to the collapse of iterations. A simple solution is to use χ _n^j( k ) = c̅_n^j, when the strongly impulsive noise occurs. But, this modification will cause the algorithm to be unusually unstable, and the performance of its root mean square errors (RMSEs) are shown in Fig. <ref>. Then, we suppose that all the neighboring nodes are suffered from the most impulsive noise when this abnormality occurs. At this point, the predicted fusion observation noise U̅_n^j( k ) is utilized to obtain a more precise fusion covariance matrix B_n^j( k ), and the equation (<ref>) is replaced by [ f∑_j = 1^L_nΛ _n^j( k )∼ 0:; {[ χ _n^j = 3pt⌢ j( k ) = 1,χ _n^j = 3pt⌢ j( k ) = 0,; B_n^j = 3pt⌢ j( k ) = U̅_n^j = 3pt⌢ j( k )( U̅_n^j = 3pt⌢ j( k ))^T,; 3pt⌢ j = j_max (B_n^j( k )). ].; else:χ _n^j( k ) = Λ _n^j( k )/ . -∑_j = 1^L_nΛ _n^j( k ). ] Fig. <ref> shows the RMSEs of the improved algorithm, which indicate that the corrective actions worked in eliminating the abnormal estimations. If there is only one node exists in the communication net, i.e., the sensor node has no neighbor, the distributed algorithm is the same as the conventional KF. And the proposed MFDKF is reduced to BIKF when the parameter κ=2 and no communication occurs in the WSN. §.§ Computational Complexity In this part, the computational complexity of MFDKF is analyzed from the perspective of floating-point operations. Table <ref> list the computational complexities of equations, where d denotes the degree of node n, and L denotes the number of observation sub-models. The computational complexity of the CDKF depends on equations (<ref>)-(<ref>) and that of the MFDKF depends on equation (<ref>), (<ref>)-(<ref>). Correspondingly, the computational complexity of CDKF and MFDKF are respectively, S_CDKF = 6p^3 + 6dp^2q + 4d^2pq^2 + dpq - p + O( d^3q^3), S_MFDKF = ( [ 6p^3 + 6dp^2q + 4d^2pq^2 + 2d^2q^2; + ( 4L - 1)p^2 + dpg + 3Lp + dq; + q + 4L + d + 3 + O( d^3q^3) ])L - p. O(.) is the same term infinitesimal of the variable. From <cit.>, We can obtain the computational complexity of DMCKF [ S_DMCKF = ( 2τ + 8)p^3 + ( 4τ + 6)p^2dq + ( 2τ - 1)p^2; + ( 4τ + 2)pd^2q^2 + ( 3τ - 1)pdq + ( 4τ - 1)p; + 2τd^3q^3 + 2τdq + τO( p^3) + 2τO( d^3q^3), ] where τ represents the average fixed-point iteration number of node n. According to <cit.>, the computational complexity of DMEEKF is [ S_DMEEKF = ( 8τ + 8)p^3 + 8τq^3 + ( 22τ + 6)p^2q; - ( 2τ + 1)p^2 + ( 18τ + 2)pq^2 + τq + ( 5τ - 1)p; + ( 10τ - 1)pq + τO( p^3) + τO( q^3) + τO( pq). ] L is calculated by equation (<ref>), therefore the computational complexity of DMEEKF depends on κ and d. It's easy to see that the computation complexity of MFDKF is larger than CDKF. It is difficult to compare the computational complexity of MFDKF with that of DMCKF and DMEEKF directly. When the values of τ and L are of the same order of magnitude, it can be said that their computational complexity is of the same order of magnitude. We run these algorithms using MATLAB R2020a on an i5-12490F and 3.0 GHz CPU. The simulation results can be found in Fig.<ref> and Table <ref>. From the average simulation time listed in Table <ref>, we can conclude that the computational complexity of the proposed MFDKF is slightly higher than that of DMEEKF in this example. § C-MFDKF AND S-MFDKF BASED ON CONSENSUS FUSION In the MFDKF, the information exchanged between non-neighboring nodes is forbidden. There is no centralized information center to coordinate, and each node implements the LKF to obtain its state estimation. WSNs with high consensus requirements face challenges when applying MFDKF because nodes' estimations may not be consistent (close to each other). For improvement, a C-MFDKF is proposed to reinforce the consensus between nodes. The consensus fusion rules are [ φ_n( k ) = A( k - 1)x̂_n( k - 1),; x̂_n^c( k ) = x̂_n( k ) + η _n∑_m ∈ Nei_n( φ_m( k ) - φ_n( k )) , ] where η _n is the fusion gain of the node, x̂_n^c( k ) is the final estimation of node n after the consensus fusion. The form of the fusion gain is η _n( k ) = ξ/d_n, where, ξ is a constant and 0 ≤ξ < 1. Here, the estimations of the neighboring nodes at the previous moment are used for local coordination at this moment, with the aim of converging the estimations of all the nodes in the WSN. The C-MFDKF is simply summarized as * LKF at each node: implement MFDKF as proposed in section <ref>. * Consensus fusion: update the estimations through the consensus fusion rules in equation (<ref>), where the parameter η _n is calculated by equation (<ref>), * Go back to step (<ref>) continue the MFDKF steps. When ξ=0, the consensus term in C-MFDKF does not work and degenerates to MFDKF. Actually, MFDKF and C-MFDKF are not suitable for WSNs with some limits in the information exchange between neighboring nodes, such as the WSN in the underwater environment. To reduce the information interaction in the algorithm, a simplified version of MFDKF is proposed based on the aforesaid consensus fusion rules. In the S-MFDKF, each node implements the LKF with its own observation only. This means that no communication occurs during the LKF process. The consensus fusion rule described above is then applied to each node's estimation. This method intends to improve the accuracy of each LKF estimation by fusing the previous moment estimations of its neighbors. The S-MFDKF step is summarized as * LKF at each node: implement the proposed MDFKF in section <ref> in a communication-free way, i.e. [ Y_n( k ) = z_n( k ),; C_n( k ) = H_n( k ),; B_n( k ) = R_n( k ). ] * Consensus fusion: Update the estimations through the consensus fusion rules in equation (<ref>), where the parameter η _n is calculated by equation (<ref>), * Go back to step (<ref>) continue the MFDKF steps. The communication contents of the proposed MFDKF and its derived algorithms are listed in Table <ref>, from which we can conclude that the S-MFDKF has the least amount of information interaction with its neighbors. Consequently, C-MFDKF is a good choice when the WSN communication capability is good. If a high level of consensus is not necessary, the MFDKF is the best option. And when the WSN communication capability is limited S-MFDKF should be applied. The MFDKF, C-MFDKF, and S-MFDKF are equivalent when communication between nodes is prohibited. § PERFORMANCE ANALYSIS In this part, several analyses are conducted to illuminate the performance of the proposed MFDKF and its derived algorithms, including mean error and mean-square error. The derivation is on the basis of the following assumptions: * The transition matrix and the observation matrix of the system are time-invariant, thus A( k ) = A,C_n( k ) = C_n. * State noise w( k ) and observation noise v( k ) are independent of each other, and their expectations are equal to zero. §.§ Mean Error Analysis The performance of mean error is reflected in the expectation of the estimation error E{ε_n( k )}. Here, the discussion is divided into two parts, one is the essential MFDKF algorithm, and another is the consensus fusion rules that are applied in C-MFDKF and S-MFDKF. Firstly, the MFDKF algorithm is considered. The estimation error of node n with the observation model j is calculated by ε_n^j( k ) = x( k ) - x̂_n^j( k ). According to equation (<ref>), the LKF estimation error of node n at time k is calculated by [ ε_n( k ) = x( k ) - x̂_n( k ); = x( k ) - ∑_j = 1^L_nχ _n^j( k )x̂_n^j( k ); = ∑_j = 1^L_nχ _n^j( k )ε_n^j( k ) . ] Substitute equation (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) into equation (<ref>) obtain [ ε_n^j( k ) = f_n^j( k )A∑_i = 1^L_nχ _n^ij( k - 1)ε_n^i( k - 1); + f_n^j( k )w( k - 1) - K_n^j( k )ṽ_n^j( k ), ] where [ f_n^j( k ) = I - K_n^j( k )C_n; = I - P_n^j( k )( C_n)^T( C_nP_n^j( k )( C_n)^T + B_n^j( k ))^ - 1C_n; = [ ( P_n^j( k ))^ - 1 + ( C_n)^T( B_n^j( k ))^ - 1C_n]^ - 1( P_n^j( k ))^ - 1. ] Then, the expectation of ε_n^j( k ) is E{ε_n^j( k )} = f_n^j( k )A∑_i = 1^L_nχ _n^ij( k - 1)E{ε_n^i( k - 1)} . It is known that [ ∑_i = 1^L_nχ _n^ij( k - 1)E{ε_n^i( k - 1)}≤; max[ E{ε_n^1( k - 1)},...,E{ε_n^L_n( k - 1)}], ] and denote the right of it as E{ε_n^m( k - 1)} = max[ E{ε_n^1( k - 1)},...,E{ε_n^L_n( k - 1)}]. Apply the inequation into equation (<ref>) we have E{ε_n^j( k )}≤f_n^j( k )AE{ε_n^m( k - 1)}. Cause the matrices P_n^j( k ) and ( C_n)^T( B_n^j( k ))^ - 1C_n are semi-definite, thus f_n^j( k ) is stable. The transition matrix A is time-invariant, so the estimation of model j, ε_n^j( k ) is stable. From equation (<ref>), the total estimation error of node n is E{ε_n( k )} = ∑_j = 1^L_nχ _n^j( k )E{ε_n^j( k )} , which indicates that the estimation error of node n, ε_n( k ) is stable. Hence, the proposed MFDKF can work stably and in an unbiased manner. Then, the performance of consensus fusion is considered. After the consensus fusion, the expectation of estimation error is E{ε_n^c( k )} = E{x( k ) - x̂_n^c( k )}. Substitute equation (<ref>) (<ref>) into equation (<ref>) we have E{ε_n^c( k )} = E{[ ε_n( k ) -; C_nA∑_a_mn = 1( ε_m( k - 1) - ε_n( k - 1)) ]}. Since C_n is a constant, A is time-invariant, and ε_n( k ) has previously been shown to be stable, ε_n^c( k ) is stable. §.§ Mean-Square Error Analysis Firstly, the basic MFDKF is analyzed. The performance of mean-square error is determined by the estimation error covariance matrix, which is given by E{ε_n( k )( ε_n( k ))^T} = ∑_i = 1^L_n∑_j = 1^L_nχ _n^i( k )/χ _n^j( k )E{ε_n^i( k )( ε_n^j( k )^T)} , Supposed that, E{ε_n^i( k )( ε_n^j( k )^T)} = 0,i j, and denote that, [ Ω_n^i( k ) = E{ε_n^i( k )( ε_n^i( k )^T)},; Ω_n( k ) = E{ε_n( k )( ε_n( k )^T)}. ] Equation (<ref>) could be expressed as Ω_n( k ) = ∑_j = 1^L_nΩ_n^j( k ) . From the representation in equation (<ref>), Ω_n^j( k ) is calculated by Ω_n^j( k ) = Γ_n^j( k )∑_i = 1^L_nΩ_n^i( k - 1)( Γ_n^j( k ))^T + Θ_n^j( k ), where, [ Θ_n^j( k ) = f_n^j( k )Q( k )( f_n^j( k ))^T + K_n^j( k )B_n^j( k )( K_n^j( k ))^T,; Γ_n^j( k ) = f_n^j( k )A. ] Assume that Q( k ), and B_n^j( k ) are time-invariant, f_n^j( k ), Θ_n^j( k ), and Γ_n^j( k ) are time-invariant. As a result, Ω_n^j( k ) are convergent. The time-invariant variables can be summarized as [ lim_k →∞Ω_n^j( k ) = Ω_n^j,; lim_k →∞Γ_n^j( k ) = Γ_n^j,; lim_k →∞Θ_n^j( k ) = Θ_n^j. ] By taking the limit of the equation (<ref>), Ω_n^j = L_nΓ_n^jΩ_n^j( Γ_n^j)^T + Θ_n^j. According to the rules of matrix vectorization as follows [ vec( ς_1ς_2ς_3) = ( ς_3^T ⊗ς_1)vec( ς_2),; vec( ς_1 + ς_2) = vec( ς_1) + vec( ς_2), ] equation (<ref>) can be rewritten as vec( Ω_n^j) = L_nΓ_n^j ⊗Γ_n^jvec( Ω_n^j) + vec( Θ_n^j), where ⊗ is the Kronecker product. One can obtain the closed-form solution for equation (<ref>) vec( Ω_n^j) = ( I - L_nΓ_n^j ⊗Γ_n^j)^ - 1vec( Θ_n^j). From equation (<ref>) we know that the estimation error covariance matrix Ω_n( k ) is the sum of the sub-models estimation error covariance matrix Ω_n^j( k ). Thus, there exists a closed solution for the estimation error covariance matrix Ω_n( k ). Then the mean-square error of consensus fusion is considered. According to equation (<ref>) the estimation error covariance matrix is written as [ E{ε_n^c( k )( ε_n^c( k ))^T} = Ω_n( k ) +; A∑_a_mn = 1[ Ω_m( k - 1) + Ω_n( k - 1)]A^T. ] Its closed-form solution depends on Ω_n( k ), whose closed-form solution exists. § SIMULATIONS In this part, simulations are executed to validate the performance of the proposed MFDKF with its derived algorithms. Simulations in Section <ref> are executed considering a linear system. To validate the MFDKF algorithm, the parameter is first set as its minimum value κ=2, while compared with CDKF <cit.>, DMCKF <cit.>, DMEEKF <cit.>, and BIKF <cit.>. Then, the value of parameter κ is in discussion. In Section <ref>, a linear target tracking model is considered to further verify the performance of MFDKF. Specific scenarios are considered in Section <ref>, where the performance of derived C-MFDKF and S-MFDKF are shown. Without special instructions, each simulation in this paper is implemented with 500 independent Monte Carlo runs, and 1000 samples are utilized to calculate the RMSE of the estimations. The RMSE to measure the accuracy is calculated by RMSE = √(∑_k = 1^N_1( x̂( k ) - x( k )^2)/N_1) , where x̂( k ) and x( k ) severally denotes the estimated state and true state in k-th run. N_1 is the number of Monte Carlo runs. Simulations are implemented under Gaussian noise and some non-Gaussian noises (mixed Gaussian, α-stable) as below. * The noise obeying Gaussian distribution is described as v∼ N( μ ,σ), where N( μ ,σ) represents Gaussian distribution with mean μ and variance σ. * The noise obeying mixed Gaussian distribution is described as v∼λ N( μ ,σ _1) + ( 1 - λ) N( μ ,σ _2), where λ is the mixed coefficient of two Gaussian noises. The noise obeys this kind of mixed Gaussian distribution is denoted as v∼ M(λ ,μ ,σ _1,σ _2). * The α-stable distribution is described as following characteristic function ϕ( t ) = exp{jϖ t - ζ| t |^a[ 1 + jbsign( t )ω( t,a)]}, with ω( t,a) = {[ tan( aπ/2),a 1,; 2/πlog| t |,a = 1. ]. a demotes the characteristic exponent, b is the symmetry parameter, ζ and ϖ severally represent the dispersion parameter and the location parameter. | ·| in this function denotes the absolute value operation. The noise that follows this kind of distribution is expressed as v∼ S( a,b,ζ,ϖ). The following simulations are played on the WSN whose communication typology is shown in Fig. <ref>. §.§ A linear system In this section, the following linear system is considered <cit.> x( k ) = [ [ cos( θ) - sin( θ); sin( θ) cos( θ) ]]x( k - 1) + w( k - 1), z_n(k) = [ [ 1 1 ]]x( k ) + v_n( k ), where, θ = π/ . -18. w( k ) and v_n( k ) severally denotes the transition noise and observation noise. The initial values of x( 0 ), x̂( 0 ) and M( 0 ) are set to [ x( 0 ) = I^p × 1,; x̂( 0 ) = x( 0 ),; M( 0 ) = I^p × p. ] The process noise obeys Gaussian noise as bellow w( k ) ∼ N( 0,0.1). The node 4 whose neighbors are node 3,5,6 was considered to compare the performance of MFDKF with CDKF, DMCKF, DMEEKF, and BIKF. First, the MFDKF with minimum parameter value κ= 2 is considered. We set the observation noise of each node as following α-stable noise v( k ) ∼ S( 1.2,0,2,0). The kernel bandwidth of DMCKF and DMEEKF is set to σ _DMCKF = 1 and σ _DEEKF = 0.5 respectively, which works relatively better in the situation. The RMSE performance of the algorithms is shown in Fig. <ref>, from which we can conclude that the MFDKF accomplishes the estimation with the minimum RMSEs and the best stability. The second best is the DMEEKF, which has a large jitter in the RMSEs. As for the RMSE performance of the rest algorithms, they are BIKF, DMCKF, and CDKF in descending order. The specific value of the simulations is listed in Table <ref>. Subsequently, the observation noise is set as following mixed Gaussian distribution v( k ) ∼ M( 0.9,0,100^2,1). The kernel bandwidth of DMCKF and DMEEKF are set in a proper value σ _DMCKF = 1 and σ _DEEKF = 0.5, respectively. Fig. <ref> shows the MFDKF works stably with the best accuracy while some abnormal estimations occur in the DMEEKF. And the detailed values are presented in Table <ref>. The simulations mentioned above verified the B-MFDKF outperforms the CDKF, MDCKF, DMEEKF, and BIKF while the observation noise are obey some non-Gaussian distribution. Then, the following Gaussian noise is considered v( k ) ∼ N( 0,1). Fig. <ref> shows the RMSEs of the five algorithms, while σ _DMCKF = σ _DEEKF = 100. It's easy to see that the performance of MFDKF is as good as CDKF, DMCKF, and DMEEKF, while BIKF performs less outstandingly. More details are shown in Table <ref>. To prove the improvement of the proposed MFDKF, we consider different non-Gaussian noises (including the α-stable noise and the mixed Gaussian noise) as the observation noise in this linear model, the results are shown in Table <ref>. We can see that MFDKF outperforms other algorithms under some non-Gaussian observation environments. The performance of different nodes in the WSN is considered when the observation noise obeys the distribution as (<ref>). Table <ref> shows the RMSE of different nodes in the WSN as shown in Fig. <ref>. By analyzing Table <ref>, it can be concluded that the more neighbors a node have, the more accurate its estimation is. Note that the more neighbors a node have, the larger its degree value and the more observation sub-models the MFDKF needs to fuse. That is, the more neighbors there are, the larger L_n is, and the greater the computational complexity of the MFDKF. At the end of this section, we compare the performance of MFDKF with different values of parameter κ. The observation noises are set as an alpha-stable noise denoted by (<ref>). And the results of EM algorithms with different κ for noise approximation are shown in Table <ref>. As shown in Table <ref>, increasing κ improves the performance of MFDKF. It is due to the increasing κ improved the noise estimation from the EM algorithm. In other words, the increasing κ enables the fusion observation model to be approximated more precisely by observation sub-models. However, the performance of the proposed MFDKF cannot be improved indefinitely by increasing the parameter κ. As κ increases, the performance of the MFDKF improves less and less, and the algorithm fails to converge when κ=6. Because, with the increase of κ, the estimation of noise by the EM algorithm approaches its limit, i.e., the approximation of the fusion observation model tends to its limit. As shown in Table <ref>, the value of L_n increases exponentially with the increase of κ, which means the computation complexity increases exponentially. If a particularly high estimation accuracy is not required, κ=2 is the best choice. Therefore, in the following simulations, we only consider κ=2 for the MFDKF and its derived algorithms. §.§ Target tracking system To further validate the MFDKF algorithm, the following target tracking system <cit.> is considered [ x( k ) = Ax( k - 1) + Gw( k - 1),; z_n( k ) = Hx( k ) + v_n( k ), ] where, A = [ [ 1 1 0 0; 0 1 0 0; 0 0 1 1; 0 0 0 1 ]],G = [ [ 1/2 1 0 0; 0 0 1/2 1 ]],H = [ [ 1 0 0 0; 0 0 1 0 ]] The process noise is set as a Gaussian noise w( k )∼ N( 0,Q),Q = [ [ 0.1^2 0; 0 0.1^2 ]]. Here, x( k ) = [ [ x_1 x_2 x_3 x_4 ]]^T is the state vector of the target, x_1 and x_3 represent the position in the x and y directions, x_2 and x_4 represent the velocity in the x and y directions. And the initial values of x( 0 ), x̂( 0 ), and M( 0 ) are set to [ x( 0 ) = [ 500,10,500, - 10]^T,; M( 0 ) = I^p × p,; x̂( 0 ) = x( 0 ). ] Similar to Section <ref>, we compare the performance of MFDKF (κ=2) with CDKF, DMCKF, DMEEKF, and BIKF by considering node 4 in the WSN as shown in Fig. <ref>. Firstly, the α-stable noise denoted by (<ref>) is considered. Fig. <ref> shows trajectories of the position estimation with the five algorithms when the kernel bandwidths of DMCKF and DMEEKF are σ _DMCKF = 1 and σ _DEEKF = 1.5 respectively. The trajectories show the proposed MFDKF is able to achieve a more accurate target location estimation. Fig. <ref> shows RMSEs in the x and y directions. One can manifest that the proposed MFDKF works with the smallest steady-state error and perform stably, while the DMEEKF and BIKF come second. The RMSE of DMCKF is slightly larger than DMEEKF and that of CDKF is largest. The details can be found in Table <ref>. Secondly, the mixed Gaussian noises denoted by (<ref>) are considered as the observation noises. And the kernel bandwidth of DMCKF and DMEEKF remains unchanged. The RMSE results are shown in Table <ref>, which shows that the proposed MFDKF works with the smallest state error of all the five algorithms. Subsequently, we consider the observation noises are Gaussian distribution as (<ref>). The target tracking error of the five algorithms in x and y directions are shown in Tabel <ref>. One can conclude that the proposed MFDKF can almost work as well as CDKF, while the BIKF and DMEEKF are working with larger errors. And the DMCKF even cannot work without adjusting the kernel bandwidth. To prove the improvement of the proposed MFDKF in target tracking (CV model), different non-Gaussian noises (including the α-stable noise and the mixed Gaussian noise) are considered as the observation noise. The results are shown in Table <ref>, when the kernel bandwidth of DMCKF and DMEEKF remain as σ _DMCKF = 1 and σ _DEEKF = 1.5. Within expectation, the CDKF cannot handle some non-Gaussian noise as shown in Table <ref>. Comparatively, the DMCKF and DMEEKF perform better but are unstable because the parameters σ _DMCKF and σ _DEEKF need to be adjusted accordingly. The proposed MFDKF performs best with all cases in the table, which indicate its capability and stability. §.§ Simulation of C-MFDKF and S-MFDKF The simulations above indicated the improvement of the proposed MF-DKF. Here, we discuss the performance of its derived algorithms under specific situations (take κ=2 as an example). The disagreement between nodes in WSN is represented by [ δ = √(∑_n = 1^N_2x̂_n( k ) - μ( k )^2) ,; μ( k ) = 1/N_2∑_n = 1^N_2x̂_n( k ) , ] where N_2 indicates the total number of nodes. It reflects the consensus of all nodes in the WSN, and a smaller value indicates a more consistent result between nodes. In the first situation, consider a WSN that has a high demand for consensus between each sensor node. Assume that there are no restrictions on communication between neighboring nodes so that all information interactions in C-MFDKF can be satisfied. Take the linear system in Section <ref> as an example, and the observation noises are set as the α-stable noise denoted by (<ref>). The RMSEs and disagreements of the basic C-MFDKF (κ=2) with different values of ξ are shown in Fig. <ref> and Fig. <ref>. From the performances, we can conclude that C-MFDKF can effectively reduce the disagreement between nodes while ensuring the accuracy of estimation. The larger the value of ξ, the better the consistency of C-MFDKF estimation while improving accuracy. The details of the simulation are shown in Table <ref>. Considered a situation that the WSN has limitations in terms of information interaction between neighboring nodes. Assume that the communication capability between neighboring nodes is only sufficient to exchange LKF results with each other. In these circumstances, the MFDKF cannot work since it needs to use the observations of its neighboring nodes. Take the target tracking model in Section <ref> as an example, and set the observation noise as the mixed Gaussian distribution (<ref>). The performances of S-MFDKF with different parameters ξ are compared. Table <ref> shows that both the accuracy and the consensus are improved compared to BIKF without communication. § CONCLUSION In this work, a MFDKF has been proposed for the WSNs in which the observation noise exhibits non-Gaussian characteristics. The proposed MFDKF approximates the observation noise as GMM utilizing the EM algorithm. Based on this, the observation model of DKF can be modeled as a combination of sub-models with different Gaussian components. The final estimation is achieved by fusing all sub-models with the IMM. Considering WSNs with high consensus requirements and low communication capabilities, consensus rule-based C-MFDKF and S-MFDKF were proposed, respectively. The mean error analysis and mean-square error analysis are given to verify the convergence of the proposed MFDKF and its derived algorithms. Simulations on the simple linear system, and target tracking system (cv model), validated the desirable performance of the proposed MFDKF and its derived algorithms. The limitation of our proposed MFDKF is that the computational complexity increases exponentially as the number of its neighbors increases. It means that the proposed MFDKF and its derived algorithms are unsuitable for large WSNs. Additionally, adequate and accurate samples of observation noise are essential for the proposed algorithms, which is the basis for noise approximation before the state estimation. In the future, the proposed MFDKF can be considered for derivation to nonlinear systems. § ACKNOWLEDGES This study was founded by the National Natural Science Foundation of China under Grant No.51975107, the Sichuan Science and Technology Major Project No.2022ZDZX0039, No.2019ZDZX0020, and the Sichuan Science and Technology Program No.2022YFG0343. elsarticle-num
http://arxiv.org/abs/2306.08964v1
20230615085658
When Hyperspectral Image Classification Meets Diffusion Models: An Unsupervised Feature Learning Framework
[ "Jingyi Zhou", "Jiamu Sheng", "Jiayuan Fan", "Peng Ye", "Tong He", "Bin Wang", "Tao Chen" ]
cs.CV
[ "cs.CV" ]
When Hyperspectral Image Classification Meets Diffusion Models: An Unsupervised Feature Learning Framework Jingyi Zhou^*, Jiamu Sheng^*, Jiayuan Fan, Member, IEEE, Peng Ye, Tong He, Bin Wang, Senior Member, IEEE, and Tao Chen, Senior Member, IEEE^*Jingyi Zhou and Jiamu Sheng contributed equally to this work. July 31, 2023 ================================================================================================================================================================================================================= When Hyperspectral Image Classification Meets Diffusion Models: An Unsupervised Feature Learning Framework Jingyi Zhou^*, Jiamu Sheng^*, Jiayuan Fan, Member, IEEE, Peng Ye, Tong He, Bin Wang, Senior Member, IEEE, and Tao Chen, Senior Member, IEEE^*Jingyi Zhou and Jiamu Sheng contributed equally to this work. July 31, 2023 ================================================================================================================================================================================================================= Learning effective spectral-spatial features is important for the hyperspectral image (HSI) classification task, but the majority of existing HSI classification methods still suffer from modeling complex spectral-spatial relations and characterizing low-level details and high-level semantics comprehensively. As a new class of record-breaking generative models, diffusion models are capable of modeling complex relations for understanding inputs well as learning both high-level and low-level visual features. Meanwhile, diffusion models can capture more abundant features by taking advantage of the extra and unique dimension of timestep t. In view of these, we propose an unsupervised spectral-spatial feature learning framework based on the diffusion model for HSI classification for the first time, named Diff-HSI. Specifically, we first pretrain the diffusion model with unlabeled HSI patches for unsupervised feature learning, and then exploit intermediate hierarchical features from different timesteps for classification. For better using the abundant timestep-wise features, we design a timestep-wise feature bank and a dynamic feature fusion module to construct timestep-wise features, adaptively learning informative multi-timestep representations. Finally, an ensemble of linear classifiers is applied to perform HSI classification. Extensive experiments are conducted on three public HSI datasets, and our results demonstrate that Diff-HSI outperforms state-of-the-art supervised and unsupervised methods for HSI classification. Hyperspectral image classification, denoising diffusion probabilistic model, unsupervised feature learning. § INTRODUCTION HYPERSPECTRAL image (HSI) classification plays a crucial role in remote sensing <cit.>, as it aims to distinguish each pixel's category in hyperspectral data by using dense and detailed electromagnetic spectral information <cit.>. The main objective is to effectively discriminate objects of interest by capturing subtle variations in the spectral signatures associated with their pixels <cit.>. To this end, a large number of methods are developed, relying on hand-crafted features <cit.>. However, these works are designed to extract features manually, leading to limited discriminative information extraction from the high spectral variability <cit.>. The rapid advancement of deep learning has led to the emergence of supervised approaches, a typical category of HSI classification methods, specifically utilizing deep convolutional neural networks (CNNs) <cit.> and transformers <cit.>. Although these previous supervised methods have already achieved promising results in HSI classification, they still have limitations in spectral-spatial feature extraction. Firstly, previous supervised methods train the CNN or transformer network to learn explicit mappings which leads to poor performance to model complex spectral-spatial relations. Secondly, supervised methods tend to prioritize features from labeled areas while HSIs have large unlabeled regions that contain label-agnostic information. They result in a preference for learning labeling priors and limited general representation generation that characterize spectral-spatial relations. Additionally, the limited availability of labeled samples in HSI datasets poses constraints on previous supervised methods. As another category of HSI classification methods, <cit.> is designed to learn spectral-spatial features from HSIs in an unsupervised manner. Typically, these methods leverage an encoder-decoder network trained in an unsupervised manner for HSI reconstruction and extract the spectral-spatial feature from the intermediate feature maps, depicted in Fig. <ref>. Although these methods mine implicit spectral attributes and spatial relations through unsupervised feature learning, the reconstruction networks they designed mainly capture low-level reconstruction features such as texture and edge, lacking high-level features which are necessary for discriminative representations to model the spectral-spatial relations comprehensively <cit.>. Learning rich representations and modeling the complex spectral-spatial relations from high dimensions are significant and should be considered for HSI classification. Most recently, diffusion models <cit.> have emerged as new generative models with superior performance in generation and reconstruction tasks, and have started to be explored in other computer vision tasks like semantic segmentation <cit.>. Diffusion models have the ability in the reconstruction task, so they are well-suited as a framework for unsupervised feature learning. Furthermore, there exist two appealing properties of using diffusion models <cit.> for unsupervised feature learning of hyperspectral data: 1) diffusion models learn implicit mappings, which have superior representation ability on complex problems via simple expressions and extract both high-level and low-level features from images; 2) the stepwise reverse denoising process of diffusion models are modeled as an iterative optimization process optimized by Langevin dynamics. This process brings a large number of degrees of freedom and tries to predict further information conditioned on the given noise-destructed data at every timestep, leading to better generalization to complex problems. When applying diffusion models to unsupervised feature learning of HSI data, it not only learns both high-level and low-level features but also extracts spectral-spatial features in the dimension of timestep t, as shown in Fig. <ref>. Naturally, it is crucial to explore how to exploit abundant timestep-wise features more effectively and efficiently. On the one hand, the timestep-wise features are diverse and informative and may focus on different information. The shallow timestep-wise features are more informative for fine details, while deeper ones are more concerned about high-level semantics and global information <cit.>. On the other hand, HSIs from various datasets acquired by different sensors exhibit distinct spectral characteristics, leading to variations in the spectral representation patterns of timestep-wise features extracted from different HSI datasets. Meanwhile, the abundant timestep-wise features pose a challenge in terms of computation overheads. In addition, manually selecting timestep-wise features to characterize spectral-spatial features effectively is struggle. Therefore, a dynamic feature fusion module that adaptively fuses features of different timesteps for different HSI datasets is desired. In view of these, we propose the unsupervised spectral-spatial feature learning framework based on the diffusion model for HSI classification, named Diff-HSI. More specifically, the proposed framework first pretrain a diffusion model with unlabeled HSI patches for unsupervised feature learning, which is able to automatically learn label-agnostic knowledge and mine the connotation of unlabeled data that reveals the complex spectral-spatial dependencies compared to conventional supervised methods. Then, we extract features from different hierarchies from the pretrained denoising U-Net decoder to construct the timestep-wise feature bank, which contains various informative features from different timestep t. To effectively harness the features from the timestep-wise feature bank and softly learn the proper timestep-wise feature combination for different HSI datasets, we propose a dynamic feature fusion module. This module is designed to adaptively fuse hierarchical features from different timesteps, generating multi-timestep representations enriched with comprehensive spectral-spatial information. Ultimately, an ensemble of linear classifiers is employed to leverage these dynamic representations for accurate HSI classification. To summarize, our contributions are listed as follows. 1) As a pioneer work, we introduce diffusion models to HSI classification and propose a diffusion-based unsupervised feature learning framework called Diff-HSI, which can implicitly learn both high-level and low-level features to model complex spectral-spatial relations. 2) To leverage abundant timestep-wise features effectively and efficiently, we construct a timestep-wise feature bank with hierarchical features from different timesteps, and design a dynamic feature fusion module to fuse these timestep-wise features adaptively. The above process learns the soft timestep-wise feature combination for each labeled patch of different HSI datasets and constructs comprehensive multi-timestep representations. 3) Compared with several state-of-the-art supervised and unsupervised methods, experimental results demonstrate that our proposed method achieves superior classification accuracy on three public HSI datasets. The remainder of this paper is organized as follows. Section II describes related work. In Section III, our proposed Diff-HSI is introduced in detail. Section IV conducts extensive experiments on three HSI datasets to demonstrate the effectiveness of the proposed method. Finally, some conclusions are drawn in Section V. § RELATED WORK §.§ Supervised-based HSI Classification Methods Due to the remarkable breakthroughs achieved by deep learning in various computer vision tasks, many progressive deep learning-based networks have been widely utilized for supervised HSI classification methods, e.g., autoencoders (AEs) <cit.>, deep belief networks (DBNs) <cit.> and recurrent neural networks (RNNs) <cit.>. Among these, CNNs draw significant attention with their excellent feature extraction capability and become mainstream in HSI classification <cit.>. CNNs can effectively extract spectral-spatial features through their ability to extract spatially structural information and locally contextual information. As the vision transformer rises in the field of computer vision, researchers begin to explore the value of the transformer in HSI classification and find it outperforms CNN-based methods in solving the problem of long-term spectral information dependencies <cit.>. Although these supervised methods have achieved promising performance, they still require large amounts of labeled HSI samples, and are limited to modeling complex relations and capturing label-agnostic information. §.§ Unsupervised-based HSI Classification Methods Unsupervised feature learning is a feature extraction paradigm to automatically learn feature representations from the input data without any annotated information. Leveraging unsupervised feature learning on unlabeled HSI patches is a solution to the limited labeled samples of HSI datasets. Typically, the commonly used unsupervised feature learning methods in HSI classification are based on the encoder–decoder paradigm, where an autoencoder-like network encodes the input HSI patches into a refined feature and then reconstructs the feature to initial HSI data by a decoder. Mou et al. <cit.> first design a fully 2D Conv–Deconv network in an end-to-end manner for unsupervised feature learning of HSI classification. Similarly, Mei et al. <cit.> design a 3D convolutional autoencoder (3D-CAE) for unsupervised feature learning of HSI classification. To alleviate the insufficiency of geometric representation and exploit the multi-scale features, Zhang et al. <cit.> design a multi-scale CNN-based unsupervised feature learning framework, with two branches of decoder and clustering optimized by the error feedback of image reconstruction and pseudo-label classification. In the aforementioned approaches, the feature representations learned from the unlabeled HSI patches contain more spectral-spatial information than supervised methods; however, they mainly focus on low-level visual concepts and ignore high-level feature extraction for HSI classification, which is solved in our methods. §.§ Diffusion Models Diffusion models are a class of probabilistic generative models that progressively inject a standard Gaussian noise, then learn a model to reverse this process for sample generation <cit.>. Current research on diffusion models is mostly based on three formulations: denoising diffusion probabilistic models (DDPMs) <cit.>, score-based generative models <cit.>, and stochastic differential equations <cit.>. DDPMs are the mainstream diffusion models, and a large number of recent works based on DDPMs have made DDPMs increasingly powerful in terms of generative quality and diversity over other generative models <cit.>. Meanwhile, DDPMs have been widely used in several applications, including super-resolution <cit.>, inpainting <cit.>, and point cloud generation <cit.>. However, to the best of our knowledge, few works successfully adopt them to HSI classification. When applying diffusion models to HSI classification, it can not only implicitly learn both high-level and low-level features but also extract spectral-spatial features in the dimension of timestep t, leading to improvement in modeling complex spectral-spatial relations. § METHOD Our proposed Diff-HSI aims to learn effective spectral-spatial representations in an unsupervised manner via denoising diffusion probabilistic model (DDPM) for HSI classification. The framework is shown in Fig. <ref>. Our method consists of two key steps. The first step is to pretrain the DDPM with unlabeled patches to learn complex spectral-spatial distributions and dependencies of HSI data. After pretraining, we freeze the parameters of the pretrained DDPM. The second step of our Diff-HSI is a supervised learning process aiming to fully extract and dynamically fuse the timestep-wise features from the pretrained denoising U-Net for classification. In order to capture the characteristics of different objects in HSI data, we first extract abundant spectral-spatial features from different timestep t to construct the timestep-wise feature bank. Then, we design a dynamic feature fusion module that fuses the features in the feature bank dynamically and generates multi-timestep representations with discriminative information for classification. Finally, the representations are fed into an ensemble of lightweight classifiers to predict the label of the center pixel. In this section, we introduce our proposed Diff-HSI in detail, including the unsupervised diffusion-based pretraining step as well as the supervised finetuning step with multi-timestep representation generation. §.§ Diffusion-based Unsupervised Spectral-Spatial Feature Learning In order to learn complex spectral-spatial relations and label-agnostic information of HSI data, we pretrain a diffusion model in an unsupervised manner in the first step of our Diff-HSI. We first give a brief review of DDPM and introduce the basic principles to make our method easier to understand. Then, we explain our unsupervised feature learning procedure in detail through diffusion-based pretraining with HSI data. §.§.§ A Brief Review of DDPM Denoising diffusion probabilistic models are a class of likelihood-based models that reconstruct the distribution of training data via an encoder-decoder denoising model. The denoising model is trained to remove noise from the training data destructed by Gaussian noises step-by-step. These models consist of a forward noising process and a reverse denoising process. In the forward process, Gaussian noise is added to the original training data x_0 ∼ q(x_0) step by step over T time steps, which follows the Markovian process: q(x_t|x_t-1)=𝒩 (√(1-β_t)x_t-1, β_tI) where 𝒩(.) is a Gaussian distribution, and the Gaussian variances {β_t}_t=0^T that determines the noise schedule are either be learned or scheduled. The above formulation leads that an arbitrary noisy sample x_t for each timestep t is obtained directly from x_0: x_t = √(α_t)x_0 + √((1 - α_t))ϵ, ϵ∼𝒩(0,I) where α_t=1 - β_t, and α_t=∏_s=1^t α_s. Then in the reverse process, DDPM also follows a Markovian process to denoise the noisy sample x_T to x_0 step by step. Under large T and small β_t, the reverse transitions probability is approximated as a Gaussian distribution and is predicted by a learned neural network as follows: p_θ(x_t-1|x_t)=𝒩(x_t-1;μ_θ(x_t,t),σ_θ(x_t,t)) where the reverse process is re-parameterized by estimating μ_θ(x_t,t) and σ_θ(x_t,t). σ_θ(x_t,t) is set to σ_t^2 𝐈, where σ_t^2 is not learned. In practice, rather than predicting μ_θ(x_t,t) directly, predicting the noise ϵ in Eq. <ref> via a U-Net works best, and the parameterization of μ_θ(x_t,t) is derived as follows: μ_θ(x_t,t)=1/√(α_t)(x_t-1-α_t/√(1-α_t)ϵ_θ(x_t,t)) The U-Net denoising model ϵ_θ(x_t,t) is optimized by minimizing the following loss function: ℒ(θ)=E_t,x_0,ϵ[(ϵ-ϵ_θ(√(α_t)x_0)+√(1-α_tϵ),t)^2] In our work, improved DDPM <cit.> is adopted and has been proven to bring some improvements to the above DDPM. In detail, learned variances σ_θ(x_t,t) and an improved cosine noise schedule proposed in <cit.> lead to enhanced distribution learning ability. §.§.§ Unsupervised Hyperspectral Diffusion Pretraining Using the training skills and optimization objectives mentioned above, our DDPM is trained by unlabeled hyperspectral data. Suppose that S is the set of all samples in a dataset, D is a denoising diffusion probabilistic model, and E is an ensemble of lightweight classifiers. The target of the unsupervised pretraining step is to learn label-agnostic spectral-spatial features on unlabelled data sampled from S so that the features can be leveraged in the finetuning step to train ensemble lightweight classifiers using a small amount of labeled data sampled from S_labeled∈ S for classification. In order to learn effective spectral-spatial features in an unsupervised manner, we first pretrain a denoising diffusion probabilistic model with unlabeled patches randomly cropped from the HSI dataset. Before training, the data is pre-processed by principal components analysis (PCA) and patch cropping operation. Then, patches are randomly sampled from compressed HSI for DDPM pretraining. In detail, given an unlabeled patch x_0 ∈ℛ^H × H, we gradually add Gaussian noise to the HSI patch according to the variance schedule {β_t}_t=0^T in the diffusion process where T is the total number of the timestep. Then, in the reverse process, a denoising U-Net is trained to predict the noise added on x_t-1 taking noisy patch x_t and timestep t as inputs. And x_t-1 is calculated by subtracting the predicted noise from x_t. In other words, the model tries to capture some useful information of x_t to reconstruct x_t-1 that is closer to x_0. In each step of the training process, the timestep t is randomly sampled from 0 to T. In the pretraining procedure with unlabeled hyperspectral data, the DDPM is able to automatically learn the label-agnostic prior of HSI data that reveals the complex dependencies and relations in the spectral and spatial dimensions. At different timesteps, different input noisy patches and their optimization goals allow the DDPM to attend to varied information of the data, which is demonstrated to be critical to HSI classification. §.§ Multi-Timestep Representation for Supervised Finetuning After unsupervised pretraining, in order to explore the rich information learned by the pretrained DDPM, the second step of our method is to extract effective features from the pretrained DDPM and construct multi-timestep representations for HSI classification. We first extract abundant features and construct a timestep-wise feature bank. Then, a dynamic feature fusion module is designed to automatically fuse the features into multi-timestep representations. Finally, ensemble lightweight classifiers are trained for classification. §.§.§ Timestep-Wise Feature Bank In the finetuning step, the parameters of the pretrained DDPM are frozen. The denoising U-Net is able to capture meaningful information from the input data in the reverse process. Leveraging the rich and diverse information captured in the reverse steps is crucial for accurate classification. Therefore, timestep-wise spectral-spatial features are extracted from the intermediate hierarchies of DDPM at different timesteps to construct effective representations that contain diverse information of input HSI. Given an input patch x_0 ∈ℛ^H × H randomly cropped from HSI and a timestep t, Gaussian noise is gradually added to x_0 according to the variance schedule through diffusion process q(x_t|x_0). The noisy patch x_t can be directly obtained by equation <ref>. Then, the noisy patch x_t is fed into the pretrained denoising U-Net to obtain hierarchal features from the U-Net decoder, which implies rich spectral information as well as multi-scale spatial information, as shown in Fig. <ref>. The features from different layers of the decoder are jointly upsampled to H × H and then concatenated to get the feature f_t ∈ℛ^H × H × d at timestep t. The above process is defined as a function ℱ and formulated as follows: f_t = ℱ(D,x_t,t) where D is the pretrained denoising U-Net. For each feature f_t_i∈ℛ^H × H × d, we reserve only the vector corresponding to the center pixel indexed as (H/2,H/2) to obtain the pixel-wise feature h_t_i∈ℛ^1 × 1 × d, which largely reduce computational cost with fewer parameters. Following the above process, we draw m sets of features to construct the timestep-wise feature bank: ℬ = { h_t_i | i ∈{1,...,m}, h_t_i = center(f_t_i) } where t_1, ..., t_m are sampled from [0, T] at equal intervals. Features extracted at different timestep t tend to capture different information of HSI data since the reverse process is a step-by-step denoising process that tries to predict further information conditioned on the given destructed data at every different timestep. To further demonstrate this, we visualize the features in 2-dimensional space using t-distributed stochastic neighbor embedding (TSNE) to show the diversity of features at different timesteps t. The visualization of the original spectral feature space is shown in Fig. <ref>(a), and the features in the timestep-wise feature bank at timestep 100, 300, 500, 700, and 900 are visualized in Fig. <ref>(b)-(f) respectively. As observed, the features in the timestep-wise feature bank show great diversity. Furthermore, with the increase of timestep t, the clusters of the features tend to be more loosely structured, and features of the same class tend to be formed into a few large clusters rather than dispersed into multiple smaller clusters. Features at smaller t tend to capture stochastic variation, and features at larger t tend to focus on high-level information, both of which are key to HSI classification with different categories of land covers. Thus the abundant features in the feature bank contain diverse and multi-level information of the input HSI data, exhibiting diverse properties and concerns. §.§.§ Dynamic Feature Fusion Module For each input HSI patch x_0, a timestep-wise feature bank is obtained with abundant features containing varied characteristics of the patch. However, the features contain a wealth of information, but not all of it is essential. Using all the features for classification will bring high computational costs and redundant information, and manually selecting timestep-wise features from the feature bank is difficult to characterize spectral-spatial features effectively and robustly. Meanwhile, HSI data have different spectral characteristics for different HSI datasets, resulting in different spectral representation laws of timestep-wise features. Also, data on the same dataset is very different due to its position. Therefore, the dynamic feature fusion module is crucial to be proposed, and is capable of adaptively fusing features of different timesteps from the DDPM to learn effective multi-timestep representations with key information. To obtain dynamic multi-timestep representations, we design the dynamic feature fusion module to predict the soft dynamic weight of multi-timestep representations. The architecture of the dynamic feature fusion module is illustrated in Fig. <ref>. Specifically, Let L = [h_t_1 ...h_t_m] denotes the concatenated features from the timestep-wise feature bank as the input of the dynamic weight network. The dynamic weight network consists of two linear layers and outputs the n sets of dynamic weights w with a SoftMax function: w = ℳ(L,W) = SoftMax(a(L,W)) = SoftMax(W_2 δ(W_1 L)) where M is the dynamic weight network, W is the linear weight of M, δ is ReLU function, w ∈ℛ^m × 1 × 1 × n and L ∈ℛ^m × 1 × 1 × n. Inspired by the design of the multi-head attention mechanism in <cit.>, we generate multiple sets of dynamic weights, which allows the representations to focus on different information from different positions and spectral bands. Each dynamic weight determines the proportion of each feature in the corresponding representation. Then, the dynamic weights are used to weight the pixel-wise features by matrix multiplication to obtain n multi-timestep representations: r_i = w^T_iL where r_i is the i-th representation, i∈1,...,n. The multi-step representations will be concatenated into the final representation to be classified. Thus, the most appropriate and relevant representation is automatically learned from the feature bank without human intervention, saving time cost and reducing the impact of redundant information. n will be ablated to find the most suitable value balancing the performance and computational cost. §.§.§ Lightweight Ensemble Classifiers After getting the dynamic representation, a lightweight network is needed to predict the classification label. Inspired by <cit.>, we train an ensemble of lightweight linear classifiers that takes the dynamic pixel representations as inputs and predicts the classification label of each pixel. Specifically, each classifier is trained independently, consisting of two hidden layers with ReLU activation and batch normalization. When testing a sample, the final predicted label is obtained by majority voting of the ensemble of pixel classifiers, as illustrated in Fig. <ref>. This method brings more stability of prediction with a very small cost since the parameters of each classifier are very limited. § EXPERIMENTS AND RESULTS In this section, we first describe the three used well-known HSI datasets, including the Indian Pines dataset, Pavia University dataset and Houston2018 dataset. The experimental setting is then introduced including evaluation metrics, a brief introduction of compared state-of-art methods, and implementation details. Then, we conduct quantitative experiments and ablation analysis to evaluate our proposed method. §.§ Datasets Description §.§.§ Indian Pines The Indian Pines dataset was acquired in 1992 over an area of Indian pines in North-Western Indiana by Airborne Visible Infrared Imaging Spectrometer (AVIRIS) sensor. It consists of 145 × 145 pixels with a spatial resolution of 20 m and 220 spectral bands in the wavelength range of 400 to 2500 nm. There are 200 bands retained for classification (1-103, 109-149, 164-219) after removing the bands affected by noise. The dataset contains 10249 labeled pixels with 16 categories. We use 10% of the labeled samples for training and the rest for testing. The class name and the number of training and testing samples are listed in Table <ref>. §.§.§ Pavia University The University of Pavia dataset was collected in 2003 by the reflective optics system imaging spectrometer (ROSIS-3) sensor over a part of the city of Pavia, Italy. The dataset consists of 610 × 340 pixels with a spatial resolution of 1.3 m and 115 spectral bands in the wavelength range of 430 to 860 nm. 103 out of 115 bands are used for classification after removing 12 noisy bands. The image contains a large number of background pixels and only 42776 labeled pixels divided into 9 classes, including asphalt, meadows, gravel, and so on. We use 5% of the labeled samples for training and the rest for testing. The class name and the number of training and testing samples are listed in Table <ref>. §.§.§ Houston2018 The Houston2018 dataset, identified as 2018 IEEE GRSS DFC dataset, was gathered in 2018 by the National Center for Airborne Laser Mapping (NCALM) over the University of Houston campus and its neighboring urban area, including HSI, multispectral LiDAR, and very high resolution RGB images. The HSI dataset consists of 601 × 2384 pixels with a spatial resolution of 1 m and 48 spectral bands in the wavelength range of 380 to 1050 nm. It contains 504856 labeled pixels and 20 classes of interest. We use 5% of the labeled samples for training and the rest for testing. The class name and the number of training and testing samples are listed in Table <ref>. §.§ Experimental Setting §.§.§ Evaluation Metrics We evaluate the performance of all methods by three widely used indexes: overall accuracy (OA), average accuracy (AA), and Kappa coefficient (κ). §.§.§ Comparison with State-of-the-art Methods To demonstrate the effectiveness of our proposed method, we compare our classification performance with several state-of-the-art approaches, including CNN-based and transformer-based supervised methods, and unsupervised methods. We use the most effective setting for each of these methods. * The 2-D CNN architecture contains three 2-D convolution blocks and a softmax layer. Each convolution block consists of a 2-D convolution layer, a BN layer, an avg-pooling layer, and a ReLU activation function. * The 3-D CNN contains three 3-D convolution blocks and a softmax layer. Each 3-D convolution block consists of a 3-D convolution layer, a BN layer, a ReLU activation function, and a 3-D convolution layer with step size 2. * The SSRN<cit.> is a spectral-spatial residual network based on 3-D CNN and residual connection. Spatial residual blocks and spatial residual blocks are designed to extract discriminative features from HSI data. * For SF<cit.>, there are four encoder blocks in the ViT-based network. In the transformer framework of SF, group-wise spectral embedding and cross-layer adaptive fusion modules are adopted to capture local spectral representations from neighboring bands. * The SSFTT<cit.> systematically combines CNN network and transformer structure to exploit spectral-spatial information in the HSI, with a Gaussian weighted feature tokenizer module making the samples more separable. * The GAHT<cit.> is a end-to-end group-aware transformer method with three-stage hierarchical framework. * The 3DCAE<cit.> is an unsupervised method using the encoder-decoder backbone with 3D convolution operation to learn spectral-spatial features. * The UMSDFL<cit.> is an unsupervised method using encoder and decoder with convolutional layers to learn spectral-spatial features. A clustering branch and a multi-layer fusion module is designed to enhance the features. §.§.§ Implementation Details The proposed Diff-HSI was implemented using the Pytorch framework. The patch size is set to 48×48, and the dimension of PCA is set to 10. In the diffusion-pretraining procedure, we use Kullback-Leibler Divergence Loss as the loss function. And the Adam optimizer is adopted with a batch size of 128 and a learning rate of 1e-4. In the second stage, the cross-entropy loss is used in lightweight classifiers. m, n, and k are set to be 20, 3, and 5, respectively. We adopt the Adam Optimizer and the Cosine Annealing as our training schedule. The original learning rate and minimum learning rate are set to be 1e-4 and 5e-6, respectively. The number of epochs is set to 100 for all datasets. Fairly, we calculate the results by averaging the results of ten repeated experiments with different training sample selections. §.§ Quantitative Results and Analysis Quantitative classification results in terms of class-specific accuracy, OA, AA, and κ of the compared methods on the Indian Pines, PaviaU, and Houston 2018 datasets are listed in Table <ref>, <ref>, and <ref>, respectively. And the classification maps of all methods are shown in Fig. <ref>, <ref> and <ref>. Compared with other methods, our proposed Diff-HSI achieves the highest OA, AA, and κ on the three datasets. According to the results, the CNN-based supervised methods obtain good performance owing to the ability of capturing local spatial information. Besides, since transformers are capable of capturing sequential information, transformer-based supervised methods also achieve competitive performance. The GAHT method achieves the best results in CNN-based and transformer-based supervised methods because it balances the local information and the global dependencies. However, the methods that learn explicit mappings have bottlenecks in learning label-agnostic information and complex relations, which limits performance. Some unsupervised methods are proposed to tackle the problem by learning representative features through an encoder-decoder network. Limited by the model architecture, the features learned are not discriminative and rich enough for HSI classification lacking high-level information. Therefore, the performance of these methods is even lower than that of some explicit learning methods. Our proposed Diff-HSI introduces the diffusion model to HSI classification, which has the powerful ability to model complex relations. Furthermore, the features learned by our method are of great diversity, enriching the representation capability. Thus, our Diff-HSI outperforms all the previous methods on the three datasets. Notably, the classification performance of the Houston2018 Dataset is largely improved compared with the previous SOTA method in terms of OA (98.07% versus 96.69%), AA (95.37% versus 92.98%), and κ (0.9748 versus 0.9570), which especially demonstrate our effectiveness. §.§ Ablation Studies In this section, we analyze the effect of the components in our method. §.§.§ Ablation for Set Number of Dynamic Weights Our dynamic feature fusion module contains n sets of dynamic weights which impact the classification performance. The change of results by changing n is shown in Tabel <ref>. We reveal that the best performance is achieved when n is taken as 3. With smaller n, only part of the useful information is obtained from the feature bank, which is insufficient for classification. On the other hand, larger n will bring more parameters and excess information. §.§.§ Ablation for Pretraining Steps We validate the effectiveness of pretraining on the Indian Pines dataset in terms of OA, AA, and κ. As shown in Table <ref>, only 10k steps pretraining brings dramatic improvement (more than 30% OA) to the final classification performance. Furthermore, as the pretraining steps increase, the performance continues to rise to the best at around 40k steps. §.§.§ Sensitivity Analysis of Timestep T To analyze the features extracted from different timestep t, we record the change of the classification performance when changing t. For easy understanding, we choose 4 of the 16 classes to show their changes, as illustrated in Fig. <ref>. The performance for each class behaves differently as the t increases. Features of larger t are more sensitive to class "corn-notill" and features of smaller t are more informative to class "woods". For class "Grass-pasture-mowed" and class "corn", features extracted at the intermediate t is the most discriminative. Thus, an appropriate set of t is vital for accurate performance. Table <ref> shows the results with several sets of manually assigned t. Specifically, given a timestep set X=[t_1,t_2,t_3], we extract the features at timestep t_1, t_2, and t_3 to construct the pixel-wise representation for classification. As Table <ref> presents, the performance is sensitive to the choice of t. X with a large t, a small t and a medial t performs better than X with only smaller t or larger t. Because the former contains a wider range of t leading to more diverse features which make the representation more general for samples of all classes. However, representation obtained by the manually designed method is not always the most appropriate for each sample, since it sacrifices some specificity to improve generalization. So we propose the dynamic feature fusion module to customize T for each sample which makes it more accurate for classification. The results in Table <ref> demonstrate that our dynamic feature fusion module learns more effective representation compared with the manually selected method. §.§ Parameter Analysis In this section, we analyze the effect of various parameters that influence classification performance by training our proposed Diff-HSI in the same experimental setting as Section IV-B with different parameters. §.§.§ Effect of Different Patch Size First, we discuss the effect of different patch sizes on classification performance by two indexes, OA and AA. As shown in Fig. <ref>, the patch size varies from 24×24 to 60×60. as the patch size increases, the performance first increases and then decreases. The best performance is obtained when the patch size is 48×48 with OA of 99.39% and AA of 99.30%. Too small patches contain insufficient spatial information and too large patches reduce the attention to detailed structures. Thus, we choose 48×48 to be the patch size for the proposed Diff-HSI. §.§.§ Effect of Different PCA Components This section analyzes the influence of the number of PCA components, which determines how much spectral information is retained in the compressed data. As the number of PCA components increases, more spectral information is retained while more computational cost and more redundant information are brought. Since each dataset has a different number of channels, the range of PCA components is different for the three datasets. Assuming that N is the channel number of a dataset, the number of PCA components varies from N/20 to N/5. According to the results shown in Fig. <ref>, the best performance is achieved at PCA components of N/8. § CONCLUSION HSI contains rich spectral-spatial information and complex relations, which are critical for classification tasks. We propose the Diff-HSI method to learn discriminative spectral-spatial features in an unsupervised manner using the diffusion model. Most of the existing works perform HSI classification in an explicit way based on CNN or Transformer. These methods often heavily rely on the number of labeled data and are insufficient to model complex relations, limiting the improvement in performance. Some works explore unsupervised learning for HSI classification through an encoder-decoder feature learning method. However, limited by the feature-extracting capability of the model, they mainly tend to capture low-level features, lacking high-level features which are significant for discriminative spectral-spatial feature representation. Our proposed method leverages the powerful implicit learning ability of the diffusion model to learn effective spectral-spatial features with high degrees of freedom, constructing the step-wise feature bank. Furthermore, we design a dynamic feature fusion model to obtain multi-timestep representation from the feature bank automatically. Quantitative experiments on three HSI datasets demonstrate that our proposed Diff-HSI outperforms state-of-art supervised and unsupervised methods. IEEEtran
http://arxiv.org/abs/2306.02778v1
20230605110338
EffCRN: An Efficient Convolutional Recurrent Network for High-Performance Speech Enhancement
[ "Marvin Sach", "Jan Franzen", "Bruno Defraene", "Kristoff Fluyt", "Maximilian Strake", "Wouter Tirry", "Tim Fingscheidt" ]
eess.AS
[ "eess.AS" ]
Stochastic p-Bits Based on Spin-Orbit Torque Magnetic Tunnel Junctions X. F. Han July 31, 2023 ====================================================================== Fully convolutional recurrent neural networks (FCRNs) have shown state-of-the-art performance in single-channel speech enhancement. However, the number of parameters and the FLOPs/second of the original FCRN are restrictively high. A further important class of efficient networks is the CRUSE topology, serving as reference in our work. By applying a number of topological changes at once, we propose both an efficient FCRN (), and a new family of efficient convolutional recurrent neural networks (, ). We show that our (875K parameters) and (396K) outperform the already efficient (85M) and (7.2M) networks, respectively, w.r.t. PESQ, DNSMOS and ΔSNR, while requiring about 94% less parameters and about 20% less #FLOPs/frame. Thereby, according to these metrics, the FCRN/EffCRN class of networks provides new best-in-class network topologies for speech enhancement. Index Terms: noise suppression, efficient networks, convolutional recurrent neural networks, speech enhancement § INTRODUCTION In single channel noise suppression the aim is to estimate a clean speech signal from a noisy mixture of clean speech and interfering background noise. For real world applications, efficient methods enabling low-latency real-time processing and adhering to memory limitations are of utmost importance <cit.>. Recently, neural networks have seen increasing use for this task with many of the prominent approaches estimating a complex spectral mask for the noisy speech in the short-time Fourier transform (STFT) domain <cit.>. Especially convolutional neural networks (CNNs) have been widely used for the task of speech enhancement <cit.>. The sliding kernels allow for precise modelling of local dependencies in the speech spectra <cit.>. Recurrent processing has been employed to model temporal dependencies in addition to spectral characteristics <cit.>. Namely, long short-term memory (LSTM) <cit.> and gated recurrent unit (GRU) <cit.> layers have been incorporated into CNNs <cit.>. A fully convolutional recurrent network (FCRN) <cit.> has been introduced to combine the strengths of convolutional modelling even throughout the recurrent layers by using convolutional LSTMs (CLSTMs) for feature processing, achieving state-of-the-art performance <cit.>. Considering the reduction of computational complexity, multiple approaches such as pruning or quantization have been proposed <cit.>. Earlier models achieved efficiency using hybrid processing methods and coarse features <cit.> while recent methods employed deep filtering in a multi-stage setup <cit.>. While other fields have seen neural architecture search employed to find efficient base topologies for subsequent scaling <cit.>, in speech enhancement an important recent advancement in terms of providing an efficient high-performance topology was the Convolutional Recurrent U-net for Speech Enhancement (CRUSE) class of networks <cit.>. However, the reduction of computational complexity comes with a tradeoff in terms of model performance. Additionally, the huge parameter counts, with the smaller CRUSE versions being even in the range of the original FCRN (5.2 M) or higher, remain a problem for memory-constrained applications. Our work builds upon the idea of the FCRN and improves the efficiency by reducing the number of filters and the kernel size. This allows to increase the network's depth, but requires to introduce learnable skip connections and an only linear increase (decrease) of filter numbers in the encoder (decoder), thereby creating a smaller network that retains most of its performance. Besides the efficient , our core contribution in this paper is the efficient convolutional recurrent neural network topology which takes the design principles "deeper and wider with smaller kernels" a step further. Additionally, we reduce zero-padding of layer inputs and regain good quality by allowing for non-convolutional layers in the now sparse bottleneck. We compare the FCRN and EffCRN variants to the strongest CRUSE networks. The paper is structured as follows. Section <ref> introduces the evaluation framework and the network topologies used in our experiments. Experimental setup, datasets, and training parameters are detailed in Section <ref>, followed by a discussion of our experimental results. We conclude in Section <ref>. § MODEL TOPOLOGIES §.§ Evaluation Framework Our models operate in the STFT domain. The spectral coefficients of clean speech S_ℓ(k) and noise D_ℓ(k) yield noisy speech Y_ℓ(k) = S_ℓ(k) + D_ℓ(k). While ℓ denotes the frame index, k ∈𝒦 = {0,1,… K - 1} is the frequency bin index of a K-point DFT. The models predict a spectral mask G_ℓ(k) ∈ℂ which is the complex-valued representation of the real-valued network output tensor 𝐆_ℓ(k) ∈ℝ^2. The clean speech estimate is then Ŝ_ℓ(k) = G_ℓ(k) · Y_ℓ(k). §.§ FCRN Variants The original FCRN <cit.> consists of 10 convolutional layers, 1 CLSTM and multiple up-/downsampling layers as well as 2 skip connections that additively combine features from the encoder and from the decoder. A more efficient version of this network is the depicted in Fig. 1. It performs up- and downsampling with strided convolutions (as shown to be beneficial in <cit.>) and increases the network's depth to 15 layers. Additionally, it enhances the additive skip connections with 1× 1 depthwise convolutions. Less and smaller filter-kernels enable its two CLSTMs to be more efficient than the FCRN's single CLSTM. We depict the using an encoder-decoder block EDBlock() which combines two convolutions from both encoder and decoder and the learnable skip connections as shown in Fig. 2. It requires two input signals 𝐱^enc/dec_in and produces two output signals 𝐱^enc/dec_out. The number of filter kernels is i· F, where i is the index of the EDBlock, N and V=1 are the size of the 3D kernel along the frequency and the time axis, respectively, and S=2 denotes the stride. All employed non-depthwise convolutions possess kernels with a third axis covering all available feature maps. The size of the compressed frequency axis is M', while C_in^enc/dec describes the number of channels of each input. Convolutional layers are denoted by Conv(i· F, N × V)_/S, with S being an optional stride. DeConv(i· F, N × V)_S refers to a transposed convolution with the same parameter set. Layer outputs have a size feature axis × time axis × feature maps. In comparison to the well known CRUSE topology we only downsample every other convolutional layer and we increase the number of filters linearly instead of exponentially with depth. All models take an input M × 1 × C_in with M=K/2+1+P and P being the number of zeros padded to the non-redundant bins of the spectrum. We use the LeakyReLU activation <cit.> for all but the last convolutional layer, which uses linear activation. Instead of G_ℓ(k), the bounded network output G'_ℓ(k) = tanh(|G_ℓ(k)|) ·G_ℓ(k)/|G_ℓ(k)| with constrained magnitude | G'_ℓ(k)| ∈ [0,1] is then used for masking by Ŝ_ℓ(k) = G'_ℓ(k) · Y_ℓ(k) <cit.>. CLSTM activations are tanh and sigmoid as published in <cit.>. §.§ The New EffCRN Topologies The new topology can also be depicted using the EDBlocks as shown in <ref> with an increased total depth of now 23 layers (change D). Compared to the , the self-evident approach for the is to initially use fewer and overall significantly smaller filter kernels (change F). However, contrary to the CRUSE approach, we do not decrease network depth to obtain the smaller network. Instead we increase network depth, thereby allowing the network to compensate for the drastic impact of reduced filter count and size on it's capacity. The increased depth allows the network to find its own powerful feature representation while maintaining overall computing costs low. This is in line with literature which states that an optimal ratio of depth to width exists for a given network <cit.>. Our network performs further downsampling along the frequency axis and achieves a significantly smaller feature representation at the bottleneck between encoder and decoder, where recurrent modelling takes place. Notably we use the first CLSTM to sharply reduce the number of filters as well for efficient processing. However, to keep output signal quality high, the resulting smaller representation not only allows but requires non-convolutional recurrent bottleneck processing, replacing the second CLSTM layer (change C) with an efficient non-convolutional GRU layer (change G) without an excessive increase of model parameters. Additionally, we reduce computations by optimizing zero-padding and apply it only when necessary (change -0.5ptP). Single entries are padded to even input sizes in the encoder directly before EDBlocks and the extra entries are removed at the respective location in the decoder as shown in Fig. 3. The is an even smaller version of the same topology featuring fewer filters. Activations and output bounding (<ref>) are identical to the FCRN variants. Please note that the changes from to can be concisely stated by the introduced shorthand as: = DPFCG. §.§ CRUSE Network Baselines We compare our work against the CRUSE networks <cit.>, which have been designed to be efficient. We choose the two most powerful architectures from <cit.>, namely CRUSE5_256-_2xLSTM1 () and CRUSE4_­128_1xGRU4_convskip () for re-imple­men­ta­tion and reference. They feature a symmetrical encoder-decoder structure consisting entirely of convolutional blocks using a (3× 2) kernel, whereas our models process single frames (V=1). In the bottleneck they feature either 2 sequential LSTMs or 4 parallel GRU layers. The GRUs were fed by flattening, then splitting the input data to those layers. Since we focus on a network comparison, we employ the CRUSE topologies detailed in <cit.>, keeping layers and activations unchanged. Note that applying a linear output layer and a bounding (<ref>) as in the FCRN/EffCRN variants, performance changes negligibly. We use the same input features as for our networks and pad them to fulfill divisibility constraints imposed by repeated downsampling. § EXPERIMENTS AND DISCUSSION §.§ Datasets and Parameter Settings Experiments are performed on WSJ0 speech data <cit.> mixed with noise from the DEMAND <cit.> and QUT <cit.> datasets for training and development, whereas unseen noise data is taken from the ETSI dataset <cit.> for the test set. SNR conditions are 0, 5 and 10 dB at an active speech level of -26 dBov before mixing <cit.>. We employ 105 hours of training data, 10 hours of validation data, and 1 hour of test data, with disjoint speakers between any of the three datasets. All audio data is sampled at 16 kHz. Framing is performed with a square-root Hann window / frame length of 512 samples (equals DFT size K) and a 50% frameshift, allowing #FLOPS/frame to be compared across all networks. Input and output of all networks are real and imaginary parts of spectrum and gain, respectively (C_in = C_out= 2 channels). While the baseline FCRN uses F=88, N=24 and M=260, for the , we choose F=32 and N=12. The input size M= 264 enables threefold downsampling by a factor of 2. For the topology we set the number of filters to F=27 and the kernel size N=4. Due to in-network padding we can choose M=260. The uses F=17 for an even smaller network and is otherwise identical to the . §.§ Network Training The models are trained using backpropagation-through-time with a sequence (utterance) length of |ℒ_u|=100 frames, corresponding to 1.6 seconds, and a minibatch size of |ℬ|= 16. We use the Adam optimizer with standard parameter settings as given in <cit.>. The learning rate is set to 10^-4, which is dynamically reduced by a factor of 0.6 upon 4 consecutive epochs without improved validation loss. Training is stopped after the learning rate falls below 10^-6, or after 10 consecutive epochs without improved validation loss, or upon completing 70 epochs. We train using a fixed seed that we do not optimize for. Training is performed on a single . As optimization target we use the state of the art loss function by Braun and Tashev <cit.> J = 1/|ℬ|∑_u ∈ℬ1/|ℒ_u|∑_ℓ∈ℒ_u( 1-α/|𝒦|∑_k∈𝒦||Ŝ_ℓ(k)|^c - |S_ℓ(k)|^c|^2 + α/|𝒦|∑_k ∈𝒦||Ŝ_ℓ(k)|^c e^j φ_ŝ,ℓ(k) - |S_ℓ(k)|^c e^jφ_s,ℓ(k)|^2 ), with a compression factor c=0.3 and α = 0.3 being a weighting between complex and magnitude contribution. The set of utterance indices per minibatch is denoted as ℬ, while ℒ_u denotes the set of frame indices in utterance u. The phase of the enhanced and clean signal are denoted as φ_ŝ,ℓ(k) and φ_s,ℓ(k), respectively. §.§ Metrics, Results, and Discussion For the instrumental evaluation we use perceptual evaluation of speech quality (PESQ) <cit.>, ΔSNR = SNR_out - SNR_in and DNSMOS <cit.>. Signal levels for SNR calculation are determined following ITU-P.56 <cit.>. Parameter count and #FLOPs/frame are reported based on our implementation in <cit.>. The evaluation results are averaged over all unseen noise types and all SNRs of the test set and presented in <ref>. The baseline FCRN <cit.> shows the best performance in all metrics although at a high computational complexity of 1.5 GFLOPs/frame. The efficient CRUSE networks computationally operate at a significantly lower #FLOPs/frame, but in our setup require even more parameters than the FCRN. The scores close to the baseline in all reported measures, being just 0.06 PESQ points and 0.19 dB in ΔSNR below, but with only 16.8% of parameters and 8.2% of computational complexity. The has a slightly higher number of parameters but still stays below 1 M and reduces computational complexity even further: With 41 MFLOPs/frame the requires only 3% of computational complexity of the original FCRN at the price of 0.11 PESQ points and 0.38 dB ΔSNR. Downscaling the architecture to the reduces computational complexity to 1.1% with still moderate losses in PESQ (0.15) and ΔSNR (0.31 dB). DNSMOS results are close, with the being the best among the efficient models. Figs. 4 and 5 show an overview of all models' performance in terms of ΔSNR (∘) and PESQ (∗) over #FLOPS/frame and parameter count while the respective DNSMOS values can be seen in <ref>. In the efficient network regime, the excels over the in all reported metrics PESQ, DNSMOS, ΔSNR, with lower parameter count (-98%) and #FLOPS/frame (-32%). In the highly efficient network regime, the excels over the in all reported metrics (-94% parameters and -20% #FLOPS/frame), showing that our FCRN/EffCRN family of networks provides best-in-class network topologies for speech enhancement. Furthermore, Table 1 (lower half) and Table 2 show the impact of our intermediate steps (C,G,D,P,F) on computational complexity and model performance. Experimenting with the recurrent layers of the shows a slight loss of performance in case of both removal of the second CLSTM () and its replacement with a GRU (). With 7.2 M parameters, however, the latter is simply too large. As visible in <ref>, just increasing network depth alone (D) does not reduce, but increase both #parameters and #FLOPs/frame, as expected. Also shifting padding from input data towards internal data representations of the network (DP), #FLOPs/frame can be reduced by 9 M, yet the network remains unacceptably large (2.8 M). Beginning our modifications with just decreasing filter numbers and kernel sizes, leads to a very small and efficient network (F), however, coming with a dramatic performance loss across all measures compared to the , see Table <ref> again. This motivates in a first step to combine the deep network with smaller and fewer filter kernels (), which yields a still small (665 K) and efficient (41 MFLOPS/frame) network. To regain performance in a second step, we revisit the initial idea of modifying recurrent processing by replacing the 's second CLSTM with a GRU (FDPCG) which is in total then identical to the . Due to the small bottleneck feature representation, this network manages to regain 0.08 PESQ-points and 0.09 DNSMOS-points while maintaining very low complexity and staying below 1 M parameters. § CONCLUSIONS In this paper, we have introduced the topology and newly proposed the class of networks for speech enhancement. Significant reductions in model parameter count as well as #FLOPs/frame compared to the FCRN baseline can be achieved by smaller kernels and an only linear increase (decrease) of the number of filters in the encoder (decoder). This allows then to increase the network depth, but requires learnable skip connections and re-introduction of a non-convolutional (GRU) layer in the sparse bottleneck. The efficient models retain high performance with the requiring 7.6% of parameters and 1.1% of FLOPs/frame compared to the baseline topology. We furthermore show that the and outperform the and networks, respectively, w.r.t. PESQ and DNSMOS and ΔSNR, while requiring about 94% less parameters and about 20% less #FLOPs/frame. IEEEtran
http://arxiv.org/abs/2306.17726v1
20230630151606
Large-Scale Atom Interferometry for Fundamental Physics
[ "Oliver Buchmueller", "John Ellis", "Ulrich Schneider" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-ex", "hep-ph", "physics.atom-ph" ]
Atom interferometers measure quantum interference patterns in the wave functions of cold atoms that follow superpositions of different space-time trajectories. These can be sensitive to phase shifts induced by fundamental physics processes such as interactions with ultralight dark matter or the passage of gravitational waves. The capabilities of large-scale atom interferometers are illustrated by their estimated sensitivities to the possible interactions of ultralight dark matter with electrons and photons, and to gravitational waves in the frequency range around 1 Hz, intermediate between the peak sensitivities of the LIGO and LISA experiments. Atom interferometers can probe ultralight scalar couplings with much greater sensitivity than is currently available from probes of the Equivalence Principle. Their sensitivity to mid-frequency gravitational waves may open a window on mergers of masses intermediate between those discovered by the LIGO and Virgo experiments and the supermassive black holes present in the cores of galaxies, as well as fundamental physics processes in the early Universe such as first-order phase transitions and the evolution of networks of cosmic strings.                   AION-REPORT/2023-04, KCL-PH-TH/2023-33, CERN-TH-2023-105                      1,2]Oliver Buchmueller, 3]John Ellis, 4]Ulrich Schneider [1]Imperial College, Physics Department, Blackett Laboratory, Prince Consort Road, London, SW7 2AZ, UK [2]University of Oxford, Department of Physics, Clarendon Laboratory, South Parks Road, Oxford OX1 3PU, UK [3]Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, Strand, London, WC2R 2LS, UK [4]Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge, CB3 0HE, UK Large-Scale Atom Interferometry for Fundamental Physics [ July 31, 2023 ======================================================= § INTRODUCTION Quantum technologies offer new prospects for sensitive and precise measurements in fundamental physics. Prominent among quantum sensors are atom interferometers, and in this article we review their potential applications to some of the most challenging topics in fundamental physics, namely the nature of dark matter and measurements of gravitational waves. For some 80 years now, astronomers and cosmologists have been accumulating evidence that the visible matter in the Universe floats in invisible clouds of dark matter <cit.> that exert gravitational attraction but can have at most very weak electromagnetic and other interactions with the visible matter. There are strong constraints on the fraction of the dark matter that could be provided by black holes <cit.> or other compact astrophysical objects, and the general expectation is that dark matter is composed of one or more species of so far undetected neutral particles. There are two general schools of thought about the properties of these dark matter particles, which might be either weakly-interactive massive particles (WIMPs) or ultralight bosonic fields that form coherent waves moving non-relativistically through the Universe. In the absence of evidence for the production of any WIMPs at particle colliders (see, e.g., <cit.>) or for their collisions with heavy nuclear targets deep underground <cit.>, attention is shifting towards ultralight dark matter (ULDM) candidates that have very weak interactions with the particles of the Standard Model (SM) that make up the visible matter in the Universe. Atom interferometers enable very precise measurements of atomic properties, so are exquisitely sensitive to the possible interactions of ULDM fields with SM particles <cit.>. Atom interferometers can also probe the small distortions of space-time caused by the passage of gravitational waves (GWs) <cit.>. The discovery of GWs by the LIGO and Virgo laser interferometers <cit.> opened a new window on the Universe through which novel phenomena could be observed, such as the mergers of black holes (BHs) and neutron stars (NSs) <cit.>. It is known that supermassive BHs (SMBHs) far more massive than those revealed by LIGO and Virgo are present in the cores of galaxies <cit.>, and their binary systems and mergers are the prime target of the planned LISA space-borne laser interferometer <cit.>. The existence of intermediate-mass BHs (IMBHs) is strongly suspected <cit.>, and measurements of the GWs emitted during their mergers could cast light on the assembly of SMBHs. As we discuss in more detail below, atom interferometers can observe GWs in the mid-frequency range intermediate between LISA and LIGO/Virgo, and are hence ideal for addressing this issue. Atom interferometers can also search for possible fundamental-physics sources of GWs, such as first-order phase transitions in the early Universe and networks of cosmic strings <cit.>. In addition to these primary objectives, atom interferometers can also provide unique tests of general relativity, e.g., by probing Einstein's Equivalence Principle <cit.>, as well as testing the limits of quantum mechanics <cit.> and probing quantum effects in gravitational fields <cit.>. Beyond making significant contributions to fundamental physics, atom interferometry can also be used to study the properties of ultracold atoms, such as Bose-Einstein condensates and degenerate Fermi gases. Moreover, the precision of measurements using atom interferometry offers unprecedented sensitivity and accuracy for a variety of physical quantities, such as acceleration, local gravity, and rotation. For example, atom interferometry can be used to create highly sensitive inertial sensors for navigation and geophysical exploration. Precision measurements of the local gravitational acceleration are of relevance to earth observation and to detect underground structures <cit.>. In view of its many potential applications to precision measurements and quantum technology, atom interferometry is a rapidly-developing field of research. Many of the practical challenges involved in the design and operation of atom interferometry are being actively addressed, and much research is underway to explore fully its possibilities. The outline of this article is the following. We review the basic principles of atom interferometry in Section <ref>, and in Section <ref> we review current projects to scale up atom interferometers to baselines ranging from O(10) m to O(1) km on Earth, and potentially thousands of km in space. We then discuss in more detail in Section <ref> the potential applications to fundamental physics that motivate these projects, principally searches for ULDM and GWs. § ATOM INTERFEROMETRY IN A NUTSHELL Atom interferometry is a relatively new and exciting field of research that explores and exploits the wave-like properties of atoms in setups analogous to optical interferometers. The study of matter waves and their interference dates back to the early days of quantum mechanics, but the development of laser cooling and trapping techniques in the 1980s and 1990s greatly expanded the possibilities for this area of research. The concept of wave-particle duality was first proposed by Louis de Broglie in the 1920s <cit.>. It stipulates that all particles, including atoms, have wave-like properties and that these can give rise to observable interference patterns, for instance when a beam of particles passes through two closely-spaced slits, in the same spirit as Thomas Young's famous double-slit experiment in optics <cit.>. This proposal was later experimentally confirmed for electrons in independent experiments by Davisson and Germer <cit.> and by Thomson <cit.> in 1927. Esterman and Stern in 1930 extended these observations to atoms by diffracting a beam of sodium atoms off a surface of NaCl <cit.>. These early observations suggested that the principle of interferometry could also be extended to matter waves. The development of lasers beginning in the 1950s enabled not only ever more precise spectroscopy but also more precise control over the internal states and momenta of atoms. In particular, lasers could be used to cool and trap atoms, creating a new field of research using cold atoms. In 1995, the groups of Eric Cornell and Carl Wieman at the University of Colorado and of Wolfgang Ketterle at the Massachusetts Institute of Technology created for the first time a Bose-Einstein condensate (BEC), a new state of matter in which a group of atoms behaves as a coherent entity <cit.>. The creation of BECs opened new possibilities for the study of matter waves. The first complete atom interferometers were reported in 1991: Carnal and Mlynek <cit.> and the Pritchard group <cit.> independently demonstrated atom interferometers using microfabricated diffraction gratings. At the same time, the first atom interferometers based on laser pulses were demonstrated independently in the group of Bordé at PTB <cit.> and by Mark Kasevich and Steven Chu in Stanford <cit.>. For a general introduction and review of atom interferometry and some applications see <cit.>. §.§ Principle Atom interferometers are most easily understood by analogy to optical interferometers: by splitting and recombining beams of light, the latter are capable of detecting extremely small differences in the phase of the light waves accrued over the different paths, allowing for the measurement of tiny changes in distance, angular velocity, and other physical quantities. Laser interferometers have revolutionized our ability to make precise measurements in a wide range of fields, from astronomy to nanotechnology, and atom interferometers will expand this range even further. Any interferometer will rely on the same basic ingredients, most easily visualized in the Mach-Zehnder configuration <cit.> illustrated in Fig. <ref>. Starting from a coherent source, such as a laser in optical interferometry, a first beamsplitter splits the beam into two parts that travel along different paths, where they can pick up different phases. A mirror is then used to bring the beams back together so that they can interfere at the final (closing) beamsplitter. Crucially, the intensities at the two output ports will depend on the differential phase Δφ=ϕ_1-ϕ_2 picked up along the two arms of the interferometer. While atom interferometers can be built using microfabricated diffraction gratings, the interferometers discussed in this review rely on atom-light interactions. They have been made possible by the development of laser technology and quantum optics to levels where the central elements of any interferometer, namely the beamsplitters and mirrors, can be replaced by atom-light interactions, as illustrated in the right panel of Fig. <ref>. The laser pulses consist of coherent single-frequency light that is resonant, or close-to-resonant, with the transition between the atomic ground state and a specific excited state (|g⟩↔|e⟩). As illustrated in Fig. <ref> for an atom with initial momentum p_0, the absorption (emission) of a photon of wavelength λ not only exchanges the atom's internal state but in the process also changes the atomic momentum by the photon's momentum ħ k (-ħ k), where k=2π/λ. This recoil kick thereby deflects the atom, as seen in the right panel of Fig. <ref>. By precisely controlling the amplitude and duration of the light pulse, it is possible to implement a π/2-pulse that can act as a beamsplitter by bringing the atom into an equal superposition of ground and excited states whose momenta differ by ħ k. After this splitting, the two components propagate along their respective paths and may accrue different phase shifts due to, for instance, different magnetic, electric, or gravitational fields. After a time T, a π-pulse swaps ground and excited states and imparts a further momentum kick of ±ħ k that acts as a mirror, so that the paths are eventually recombined. A final π/2-pulse then acts as the final beamsplitter before the numbers of atoms in the ground and excited states are read out by, e.g., fluorescence imaging. These final atom numbers provide the output ports of this interferometer. This π/2-π-π/2 or Ramsey pulse sequence is directly analogous to the operation of an atomic clock <cit.>. The main difference is that in an interferometer the two parts of the wavefunction become spatially separated and can therefore pick up different spatially-dependent phases. The resulting interference pattern provides information about the physical properties of the system, such as acceleration, rotation, or gravitational field strength. This was demonstrated successfully in 1991 by the group of Bordé that observed the Sagnac effect in a rotating apparatus <cit.>. In the following years, several types of atom interferometers were developed, such as Mach-Zehnder, Ramsey-Bordé, and Sagnac interferometers, each with its own strengths and weaknesses <cit.>. In another incarnation of atom interferometry, standing waves of light are used as a diffraction grating for the atoms <cit.> in a similar fashion as crystals can be used to diffract electrons. The standing wave creates a periodic potential that interacts with the atoms, causing them to diffract into two or more paths. This remains the method of choice when implementing interferometers using ever-larger molecules to test the possible breakdown of quantum mechanics in large-mass systems <cit.>. §.§ Sensitivity The achievable sensitivity for an atom interferometer can be decomposed into two parts. The readout sensitivity, that is the precision with which the final phase of the interferometer can be read out, can be assumed to be limited by the shot-noise limit, i.e., to scale as ∝ 1/√(N), where N is the number of atoms. Current interferometers operate in the regime of 10^6 atoms per second <cit.>, and one area of development for the coming years will be to boost this by several orders of magnitude. An additional boost to the readout sensitivity will come from squeezing, as discussed below. The second part of the total sensitivity is the intrinsic sensitivity, i.e., how big a phase shift results from a given strength of the targeted physical effect. It is hence also important to design the interferometer to maximise the phase shifts created by the physics in question while minimising the sensitivity to unwanted systematics. Some important considerations are reviewed briefly below. Gradiometer A fundamental limitation of atom interferometry stems from the phase noise of the interferometer (interrogation) laser used to implement the beamsplitters. As in an atomic clock, the final populations in the output ports depend on the phase difference between the interrogation laser during the final π/2-pulse and the relative phase between the components of the atomic wave function. Single interferometers are therefore intrinsically susceptible to laser noise. This sensitivity to laser noise forms the basis for atomic clocks, where the resulting error signal is used to stabilize the laser to the atomic transition, giving rise to excellent precision for the time-averaged frequency <cit.>. However, it represents a significant limitation for interferometers that aim to study time-dependent effects such as gravitational waves or oscillating dark matter fields. An elegant solution for mitigating this limitation is provided by gradiometer configurations, where two or more interferometers are interrogated by a common laser <cit.>, as illustrated in the right panel of Fig. <ref>. In such a configuration, the laser noise becomes a common mode for all interferometers and hence does not contribute to differential signals, i.e., the differences between the phases measured by the interferometers. Such configurations are by construction not sensitive to common fields, which would induce identical phases in the interferometers, but only to the gradients of, for instance, electric or magnetic fields, and hence are known as gradiometers. Their sensitivity scales linearly with the separation Δ r between the interferometers. For terrestrial interferometers this separation is in practice limited to at most the kilometre scale, since the laser needs to travel in vacuum between the interferometers in order to avoid unwanted phase fluctuations from air turbulence or fibre noise. Large Momentum Transfer (LMT) Large momentum transfers in atom interferometry are analogous to the use of cavities in optical interferometers, where n round trips result in an n-times larger overall phase shift. In an LMT pulse sequence, the atoms interact n times with counter-propagating interferometer lasers, as illustrated in the left panel of Fig. <ref>, and thereby acquire a large momentum kick of n2ħ k. In the process, the atoms get exposed n times to the laser phase, which ultimately increases the sensitivity of the interferometer n-fold and thereby enables high-precision measurements. In the last few years LMTs up to 400ħ k have been demonstrated experimentally <cit.>. Future developments will aim at significantly increasing the amount of LMT and will need to overcome the challenge that the interferometer contrast typically decays with increasing LMT due to imperfections in the pulses used and effects of the finite initial temperature of the atoms, which gives rise to inhomogeneous Doppler shifts. Mitigation strategies will likely include more complex pulse sequences and developments of higher power laser sources. Spontaneous emission limit Spontaneous emission occurs when an excited atom spontaneously decays back to its ground state by emitting a photon and is a fundamental limitation in atom interferometry. The spontaneous emission process is stochastic, with the timing and direction of the photon emission being unpredictable. As a result, spontaneous emission introduces a random phase shift in the interferometer, which limits the precision of the measurement. For a given atomic transition, the amount of achievable LMT is ultimately limited by the rate of spontaneous emission, which is proportional to the inverse of the excited-state lifetime. As a consequence, many current proposals plan to use clock transitions, i.e., doubly-forbidden ultra-narrow transitions in atoms such as strontium (see Fig. <ref>) or ytterbium that are very weak and hence provide mHz linewidths and long lifetimes of the excited states with lifetimes >100s. These transitions also form the basis of optical atomic clocks <cit.>. While the use of ultra-narrow clock lines avoids spontaneous emissions, it requires high laser powers to excite these weak lines at a suitable rate to achieve large LMT. Squeezing As detailed above, when measuring the output of an interferometer, one characteristic limitation on the achievable precision arises from the quantum noise associated with the counting of uncorrelated probe particles. This shot noise or projection noise gives rise to the standard quantum limit (SQL) of phase resolution. It can in principle be reduced to the fundamental Heisenberg limit by entangling the probe particles, a process known as squeezing. 20dB of squeezing have recently been demonstrated in cold atoms with the help of an optical cavity, which would lead to a 100-fold reduction in phase noise <cit.>. §.§ Signals of ultralight dark matter and gravitational waves in atom interferometers Here we describe briefly the possible signals of ultralight dark matter and gravitational waves in atom interferometers in gradiometer configurations. Ultralight dark matter As discussed in more detail in Sec. <ref>, scalar ultralight dark matter (ULDM) can be modelled as a temporally and spatially oscillating classical field. If ULDM couples in a weak but finite non-gravitational linear way to Standard Model particles <cit.>, the presence of the dark matter field would manifest itself as a periodic modulation of the excitation energy and hence the transition frequency ω→ω + Δω between the atomic ground and excited states, |g ⟩ and |e ⟩, generating a phase shift in atom interferometers: 1/√(2) |g ⟩ + 1/√(2) |e ⟩ e^-iω T → 1/√(2) |g ⟩ + 1/√(2) |e ⟩ e^-i(ω +Δω)T . Spatial gradients of the dark matter field are expected to be too small to be detectable, but ULDM will nonetheless make its presence felt in an atom gradiometer, as the interferometer pulses interact with the different atomic clouds at different times separated by Δ t=c Δ r, giving rise to differential phase shifts between the interferometers. Gravitational waves While ULDM is ultimately detected by its effect on the atomic transition frequencies, a passing gravitational wave (GW) is detected via the strain it creates in the space between the free-falling atoms. This strain changes the light propagation time, and therefore gives rise to an additional phase shift for the light pulses seen by the spatially-separated interferometers in the gradiometer: 1/√(2) |g ⟩ + 1/√(2) |e ⟩ e^-iω T → 1/√(2) |g ⟩ + 1/√(2) |e ⟩ e^-iω (T +Δ T) . In both cases, the resulting phase is directly proportional to the separation Δ r between the two interferometers and the number n of LMT pulses applied. Hence, large-scale interferometers are required to reach the desired sensitivities. § LANDSCAPE OF LARGE-SCALE ATOM INTERFEROMETER EXPERIMENTS The experimental landscape of atom interferometry projects has expanded significantly in recent years, with several initiatives underway to construct projects exploiting different cold atom technologies beyond the conventional laboratory environment. These include the construction of prototype projects at scales O(10 - 100) m, namely a 10-m fountain at Stanford and MAGIS-100 at FNAL in the US <cit.>, AION-10 at Oxford with possible 100-m sites at Boulby in the UK and at CERN under investigation <cit.>, MIGA in France <cit.>, VLBAI at Hannover in Germany <cit.> and a 10-m fountain and the ZAIGA project in China <cit.>. These projects will demonstrate the feasibility of atom interferometry at macroscopic scales, paving the way for km-scale terrestrial experiments as the next steps. It is foreseen that by about 2035 one or more km-scale detectors will have entered operation. These km-scale experiments would not only be able to explore systematically for the first time the mid-frequency band of gravitational waves O(10^-2 - 10^2) Hz, but would also serve as technology readiness demonstrators for a space-based mission such as AEDGE <cit.> that would reach the ultimate sensitivity to explore the fundamental physics goals outlined in this article. In the following Sections <ref> and <ref> we outline further details of the ongoing and proposed terrestrial projects and the space-based mission concept, respectively, with a view to highlighting the enormous science potential of large-scale atom interferometry projects. §.§ Terrestrial The expanded landscape of terrestrial atom interferometer experiments includes projects ranging from ultra-sensitive laboratory setups to portable devices and commercially available gravimeters. These developments have been made possible by advancements in technology, including improved laser cooling and trapping techniques, better control over atom interferometer systems, and more precise readout and feedback mechanisms. We now review the larger terrestrial atom interferometer experiments ranging in size from O(10) m to O(1) km that are in operation, in preparation, planned or proposed in different geographical locations around the world. In the US, a pioneering facility has been operating for some years at Stanford University <cit.>. It uses a 10-m vertical vacuum chamber into which clouds of rubidium are launched. One of the experiments performed in this facility has been to test the Einstein Equivalence Principle (EEP) by comparing the motion in free fall of clouds of ^85Rb and ^87Rb. The EEP was verified at the 10^-12 level <cit.>, the most precise check in a terrestrial experiment. Another experiment used pairs of ^87Rb clouds launched simultaneously to different heights to verify a gravitational analogue of the Aharonov-Bohm effect <cit.>, namely a phase shift Δϕ = m ∫ [V(x_1,t) - V(x_2,t)] dt induced by the gravitational field of a tungsten source mass placed close to one of the clouds, as depicted in Fig. <ref>. Another 10-m fountain using strontium is currently under construction in another Stanford laboratory, and a next-stage US experiment, MAGIS-100, is currently being prepared for installation in a 100-m vertical shaft at Fermilab <cit.>, see Fig. <ref>. It plans to use clouds of strontium to probe the possible couplings of ultralight dark matter (ULDM) to Standard Model particles, and to act as a pathfinder for the search for GWs. A follow-on strontium experiment could be located in a 2-km vertical shaft at the Sanford Underground Research Facility (SURF) in South Dakota <cit.>. In addition to extending the search for ULDM interactions, it would pioneer the search for GWs in the range of frequencies intermediate between LIGO/Virgo and LISA. In Germany, the Very Long Baseline Atom Interferometry (VLBAI) test stand experiment is under construction in Hannover <cit.>, see the upper left panel of Fig. <ref>. It will consist of a 10-m vertical atom interferometer operated as a gravimeter using mixtures of ultracold ytterbium and rubidium atoms. In addition to high-precision matter-wave gravimetry and gradiometry, it will make a quantum test of the EEP, investigate decoherence and dephasing in atom interferometers and perform research on quantum clocks. In the UK, the Atom Interferometry Observatory and Network (AION) project <cit.> envisages a staged series of atom interferometers using fountains of strontium clouds in vertical vacuum chambers, as illustrated schematically in the left panel of Fig. <ref>. The first stage is to build a 10-m device [See <cit.> for a summary of the current technical status of the AION project.] in the basement of the Oxford University Physics Department, also illustrated in Fig. <ref>, which would make initial probes of possible ULDM couplings to Standard Model particles. This would be followed by a 100-m detector that could be located either in the UKRI Boulby Underground Laboratory or at CERN - see the upper right panel of Fig. <ref>. AION-100 would probe further such ULDM couplings and make initial searches for GWs. These AION detectors would be operated in partnership with the MAGIS-100 detector, with which they shares many common technologies. The final terrestrial step for AION would be a km-scale detector, which could be installed near the 100-m detector in the UKRI Boulby Underground Laboratory. The fourth step in the AION programme would be the AEDGE space-borne detector discussed in Section <ref>. In France, the Matter-wave laser Interferometric Gravitation Antenna (MIGA) is to be installed in the Laboratoire souterrain à bas bruit (LSBB) in Rustrel at a depth of 300 m <cit.>, see the lower left panel of Fig. <ref>. It will have two orthogonal horizontal 150-m arms, each containing a pair of parallel laser beams that drive three rubidium interferometers, to measure the components of the gravitational field. The differential measurements of the gravitational field will enable the local gravity gradient to be extracted. MIGA is intended to pave the way towards ELGAR <cit.>, a proposed European research infrastructure with two horizontal 16-km arms that could study gravity with unprecedented precision. In China, the Zhaoshan long-baseline Atom Interferometer Gravitation Antenna (ZAIGA) is a large-scale interferometer facility that is currently under construction at an average depth of 200 m under a mountain near Wuhan <cit.>. ZAIGA will combine long-baseline atom interferometers, high-precision atom clocks, and large-scale gyros, as seen in the lower right panel of Fig. <ref>. It will combine a horizontal equilateral triangle of tunnels with two rubidium interferometers separated by 1 km in each arm with a 300-m vertical shaft and another horizontal 1-km tunnel housing optical clocks linked by locked lasers. The ZAIGA facility will be used for gravitational wave detection, a test of the EEP, and measurements of the gravitational red-shift, rotation and the gravito-magnetic effect. The goal of the international community interested in long-baseline terrestrial atom interferometer experiments is to have at least one km-scale detector in operation by 2035, which would maximise the possible terrestrial exploration of the mid-frequency band of gravitational waves as well as probe possible ultralight dark matter. It would also demonstrate the readiness of key technologies ahead of a space-based atom interferometry mission such as AEDGE (see Section <ref>). With this goal in mind, a workshop was organized at CERN to develop a roadmap for the design and technology choices for one or several km-scale detectors to be ready for operation in the mid-2030s <cit.>. This workshop brought together the cold atom, astrophysics, cosmology, and fundamental physics communities and built upon the previous Community Workshop on Cold Atoms in Space held in September 2021, which established a corresponding roadmap for cold atoms in space <cit.>. The workshop laid the basis for a roadmap for terrestrial experiments outlining technological milestones and refining the interim and long-term scientific goals that would guide the development of km-scale atom interferometers. Participants in the workshop agreed that the global community interested in such experiments would work together towards the establishment of an informal proto-collaboration that could develop the science case for such facilities and provide a forum for exchanging ides how to develop the necessary technological advances, as well as coordinate implementation of the roadmap for their realisation. §.§ Space Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE) is a concept for a space mission deploying two strontium atom interferometers in a pair of satellites in medium earth orbit with a nominal separation of 4 × 10^4 km <cit.>. It was proposed to the European Space Agency (ESA) in response to its Voyage 2050 call for scientific mission concepts to follow the launch of the LISA GW experiment. AEDGE would measure GWs in the mid-frequency band intermediate between LISA and terrestrial laser interferometers such as LIGO, Virgo, KAGRA, ET and CE. This range of frequencies could be expanded if it is found to be feasible to use atom clouds outside the spacecraft, a configuration called AEDGE+ <cit.>. The AEDGE concept was received favourably by an ESA senior advisory committee, but the need for a preparatory programme of technology development and space qualification was emphasised. The Community Workshop on Cold Atoms in Space in 2021 was organised in response to provide an opportunity for experts from various communities to discuss together the development of a cold atom quantum technology programme coordinated at the European level. The workshop outlined a roadmap for future developments in this field with the aims of enhancing the strength and coherence of a community embracing cold atom technology experts as well as prospective users. This was presented in a community report on the prospects for cold atoms in space <cit.>, see Fig. <ref>, which highlighted the synergies and technological commonalities between space projects for fundamental science and next-generation missions for earth observation, geodesy and time-keeping. Current space-borne R&D work towards large-scale mission concepts includes pathfinder missions deploying cold atom experiments, such as CACES <cit.>, MAIUS <cit.> and CAL <cit.>. Prototypes for the key underlying optical technologies such as FOKUS <cit.>, KALEXUS <cit.> and JOKARUS <cit.> have already demonstrated reliable operation, and much more space experience will be gained in the coming years. In parallel, several other cold atom projects are aiming to demonstrate the general space-readiness of cold-atom technology including the scaling of the basic parameters that are required for large-scale space-borne mission concepts, e.g., the medium-scale STE-QUEST mission concept <cit.>. These and other technology demonstrators and pathfinder missions will serve as milestones that help to test and validate cold-atom technology, as well as identify any potential issues that need to be addressed before full-scale missions such as AEDGE can be launched. The roadmap document <cit.> provides detailed requirements for technology development and space qualification, and reviews pathfinder missions in detail. This information provided serves to guide researchers and developers who are working on cold atom technologies and will help to ensure that they meet the necessary standards for use in space. The proposed roadmap is in synergy with European Union (EU) programmes and recommendations from the ESA Voyage 2050 Senior Science Committee. This alignment will help to ensure that efforts are coordinated across different organizations and that resources are used efficiently. § FUNDAMENTAL PHYSICS WITH LARGE-SCALE ATOM INTERFEROMETERS §.§ Dark Matter Despite the overwhelming astrophysical and cosmological evidence for the gravitational effects of cold (non-relativistic) dark matter, its composition is mysterious. Searches for WIMPs continue at the LHC and via direct and indirect astrophysical signals, but without success so far, see, e.g., <cit.>. This has led to increasing interest in ultra-light dark matter (ULDM) in the form of coherent waves of scalar bosons such as dilatons, moduli or the relaxion, pseudoscalar axion-like particles, vector or tensor bosons. Atom interferometers have unique capabilities to detect ULDM candidates that couple to SM particles, and proposals have been made for probing scalar, pseudoscalar and vector fields of dark matter <cit.>. A coupling between scalar ultra-light dark matter (ULDM) and electrons, photons, or quarks can produce a time-varying modification of the effective mass of electrons, fine-structure constant of electromagnetism, or mass of quarks, respectively <cit.>. The time variations in these parameters would lead to small modulations in the energy levels of atoms, including the “clock" transition in strontium, see Fig. <ref>, which is utilised in experiments such as AION, MAGIS, and AEDGE. The simplest possibility is that the scalar ULDM field ϕ(𝐱,t) couples to SM fields <cit.> through linear interactions of the form ℒ^lin_int⊃ - ϕ(𝐱,t) ·√(4 π G_N)·[ d_m_e m_e e̅e - 1/4 d_e F_μν F^μν + d_m_q m_q q̅q] + b ϕ(𝐱,t) |H|^2 , where G_N is Newton's constant, m_e,q are the electron and quark masses, d_m_e,q, d_e and b are the couplings of the scalar ULDM to normal matter, and we have used units where ħ=c=1. The large ULDM occupation number implies that the scalar ULDM field behaves as a non-relativistic oscillating field approximated by ϕ(𝐱,t)=√(2 ρ_DM)/m_ϕcos[m_ϕ(t - 𝐯_ϕ·𝐱)+⋯] , where m_ϕ is the scalar DM mass, ρ_DM is the local cold DM density, whose average value we take to be ∼ 0.3 GeV/cm^3, and 𝐯_ϕ is the DM velocity whose magnitude |𝐯_ϕ| and dispersion v_vir have values that are characteristically ∼ 10^-3c <cit.>. The ellipses in the argument of the cosine include an unknown random phase θ that encodes the coherence properties of the ULDM field. Its coherence length is <cit.> λ_c = ħ/m_ϕ v_vir≈ 2.0 × 10^3 ( 10^-10  eV/m_ϕ) km , and its coherence time is τ_c = ħ/m_ϕ v_vir^2≈ 6.6 ( 10^-10  eV/m_ϕ) s . The magnitude of the oscillation in transition energy induced by ULDM depends on several factors, including the ULDM mass, the energy density of ULDM at the detector's location, and the strength of the linear couplings between ULDM and SM fields. The frequency of the transition-energy oscillation is determined by the ULDM mass, and the highest signal from ULDM-induced oscillation is observed when this frequency falls within the mid-frequency range commonly used by atom interferometer experiments. The panels in Fig. <ref> display sensitivity projections for scalar ULDM that is linearly coupled to electrons (photons) with SNR = 1. These projections were computed assuming the experimental parameters given in Ref.<cit.>. The AION and AEDGE sensitivity curves are overlaid on existing constraints, which are indicated by orange shading. The atom interferometer sensitivity typically demonstrates oscillations with respect to the ULDM mass. For clarity, only the envelope of the oscillations is plotted in the AION-100, AION-km, and AEDGE projections. Additionally, the AEDGE sensitivity curve is shown only until the point at which it approaches the AION-km line, although its sensitivity also extends to higher frequencies, where it would overlap with the AION-km line. Figure <ref> demonstrates the potential of atom interferometers for exploring uncharted regions of parameter space in the ULDM couplings to SM fields for scalar ULDM masses ranging from 10^-18 eV to 10^-12 eV. AION-10 aims to achieve, or even exceed, existing constraints, while AION-100 and AION-km are expected to expand significantly the range of detectable couplings to lower values. It should be noted that these projections assume atom shot-noise as the limiting factor for phase-noise. However, it is expected that below approximately 0.1 Hz Gravity Gradient Noise (GGN) will become the dominant noise source. The magnitude of the GGN is site-dependent and may be mitigated to some extent by the experimental design: see the discussion in <cit.>. In contrast, space-borne experiments are not subject to GGN, and the AEDGE projections are not subject to such uncertainties at lower frequencies, corresponding to lower ULDM masses. §.§ Gravitational Waves (GWs) The discovery of GWs by the LIGO and Virgo experiments not only confirmed a century-old prediction by Einstein, but also opened a new window on the Universe through which hitherto invisible objects could be detected. A priori, the GW spectrum could be as rich as the electromagnetic spectrum, and its exploration will require a combination of experimental techniques that are optimised for different frequency ranges. The terrestrial LIGO, Virgo and KAGRA laser interferometers are optimised for GWs with frequencies that are O(100) Hz, the planned space-borne LISA laser interferometer is optimised for frequencies that are O(10^-2 - 10^-4) Hz, and Pulsar Timing Arrays (PTAs) and SKA <cit.> are in principle sensitive to GWs with frequencies that are O(10^-9) Hz: see <cit.> and references therein. There are gaps in the spectrum between the ranges where these detectors are sensitive, and atom interferometers target primarily the O(10^-2 - 100) Hz frequency band that is intermediate between those targeted by LIGO/Virgo/KAGRA and LISA. LIGO and Virgo have discovered GWs emitted by the mergers of black holes (BHs) weighing up to a few dozen solar masses, as well as mergers of such BHs with neutron stars (NSs), and have also observed a couple of NS-NS mergers <cit.>. More recently, the NANOGrav Collaboration has reported evidence for GWs in the nHz range <cit.> that may come from binary systems containing supermassive BHs (SMBHs) <cit.> or from cosmic string networks <cit.>. So far, these observations are consistent with predictions derived from General Relativity, and the measurements of NS-NS mergers begin to constrain models of the NS equation of state. On the other hand, the observations of BH-BH mergers pose a number of astrophysical issues. What are the origins of the detected BHs? Could they be primordial? Are there BHs or other compact objects with masses between 2 and 5 solar masses? Is there a BH mass gap around 100 solar masses, as predicted by models of stellar evolution? Are there intermediate-mass BHs (IMBHs) <cit.> with masses intermediate between those detected by LIGO/Virgo/KAGRA and the SMBHs in the cores of galaxies? How were the SMBHs assembled? Answering the latter questions will require measurements in the O(10^-2 - 100) Hz frequency band targeted by large-scale atom interferometers. The left panel of Fig. <ref> shows as solid lines the possible GW sensitivities of proposed terrestrial atom interferometers (AION-10, -100 and -km) <cit.> and the proposed space-borne atom interferometer (AEDGE) <cit.>, as well as the sensitivities of LIGO, LISA and the Einstein Telescope (ET), a next-generation terrestrial laser interferometer <cit.>. Also shown are the calculated GW signals from the mergers of BH pairs with total masses 60, 10^4 and 10^7 solar masses at redshifts from 0.1 to 10. As can be seen in the right panel of Fig. <ref>, the atom interferometers would have unique potential for measuring the mergers of IMBH pairs weighing beteen 10^2 and 10^5 solar masses if they occur with redshifts ≲ 10. The left panel shows that they could also observe the early infall stages of BH mergers whose final stages could be detected by LIGO or ET, and their observations could be used to predict the times and directions of these subsequent mergers, providing alerts that could trigger subsequent multi-messenger observations. Conversely, inspiral measurements by LISA could be used to make predictions for subsequent AION or AEDGE mergers. The steep dashed lines in the left panel of Fig. <ref> and in the central part of the right panel are estimates of the possible GGN background, which is important at frequencies below O(1) Hz in terrestrial atom interferometers and will require mitigation if their full GW potential is to be realised. There is no GGN background for space-borne experiments, but there are backgrounds from unresolved galactic and extragalactic binaries at frequencies ≲ O(10^-2) Hz that impact the prospective sensitivities of LISA and AEDGE. The dashed lines at low frequencies ≲ O(10^-3) Hz in the left panel of Fig. <ref> and large masses and redshifts in the right panel indicate the possible gain in sensitivity for AEDGE that could be obtained by operating with atom clouds outside the spacecraft at distances O(10^2) m. It is all very well to be sensitive in principle to IMBH mergers, but how many such events could be expected? This question was addressed in a simple calculation using the extended Press-Schechter model to calculate the galactic halo mass function and merger rate, combined with a simple relation between the halo masses and those of their central black holes and a universal merger efficiency factor p_ BH <cit.>. The left panel of Fig. <ref> shows the mean GW energy density predicted in this model as a function of frequency, assuming a universal value of p_ BH = 0.17^+0.18_-0.08 suggested by International Pulsar Timing Array data <cit.> (recent NANOGrav data <cit.> suggest a larger value of p_ BH <cit.>). Below a frequency f ∼ 10 nHz most of the GWs are due to unresolved sources, but these become distinguishable at higher frequencies. The right panel of Fig. <ref> compares the expected numbers of detectable GW events generated during the last day before the merger in a year of observation by LISA and AEDGE as functions of the chirp mass ℳ and redshift z, assuming a universal merger efficiency factor of p_ BH=0.17. AEDGE could see an interesting number of IMBH mergers, and many of these would have been detectable (and hence predictable) by LISA during the previous year, illustrating the synergy between these two detectors. Atom interferometers are also sensitive to possible GWs from fundamental physics sources in the early Universe. The left panel of Fig. <ref> compares the sensitivities of different detectors to GWs from generic first-order phase transitions with the indicated transition temperature T_*, strength α and transition rate β/H = 10. Other possible early Universe sources of GWs are networks of cosmic strings. The right panel of Fig. <ref> compares the sensitivities of different detectors to a 10% modification of the strength of the GW signal generated by cosmic strings with tension G μ due to a change in the cosmic expansion rate at a temperature T_Δ. We see that AION and AEDGE complement well the sensitivities of LISA and ET <cit.>: see also an analysis of cosmic strings <cit.> in light of recent NANOGrav data <cit.>. §.§ Other Probes of Fundamental Physics In addition to the headline topics of the search for ULDM and measurements of mid-frequency GWs, atom interferometers can probe fundamental physics in a number of other ways, of which we now mention a few. One may consider simple modifications of the dispersion relation for propagating modes of GWs of the general form E^2 = p^2 + A p^α , where E is the energy and p the momentum of the mode. This causes frequency-dependent phase shifts of the GWs from BH binary sources, which are constrained by experimental data. Fig. <ref> compares constraints from LIGO data and gravitational Cherenkov radiation with the potential sensitivities of measurements of a `LIGO-like' BH merger at a distance D_L = 420 Mpc by AION 1km and AEDGE to the possible value of the parameter A in (<ref>) for different values of α. We see that AION 1km and AEDGE provide the strongest constraints for α≤ 1, and note that the case α = 0 corresponds to a graviton mass m = √(A). Data from LIGO have established <cit.> the upper limit m < 1.27 × 10^-23 eV . The longer observing time with AION 1km of an event similar to the LIGO/Virgo discovery event would improve the sensitivity to m < 1.1 × 10^-24  eV at the 90% CL, observations with AEDGE for 60 days before the binary merger would be sensitive to m <1.3 × 10^-25 eV at the 90% CL, and AEDGE observations of the infall stages of a BH binary with total mass 10^4 solar masses at redshift z = 1 would be sensitive to m < 8.1 × 10^-27 eV at the 90% CL, an improvement on (<ref>) by over three orders of magnitude. As concerns Lorentz violation, AEDGE measurements could improve on the LIGO sensitivity by O(1000) for α = 1/2 and by O(10) for α = 1 <cit.>. The Einstein Equivalence Principle (EEP) may be probed by measuring the relative accelerations of two different species in an atom interferometer. A pioneering experiment of this type at Stanford used wave packets of ^85Rb and ^87Rb falling freely in the Earth’s gravitational field for 2 seconds and comparing their interference fringes. This experiment measured the Eötvös parameter η = [1.6 ± 1.8 ( stat.) ± 3.4 ( sys.)] × 10^-12, consistent with zero <cit.>. This is one of the most sensitive tests of the EEP in a terrestrial experiment, and future large atom interferometers may have interesting capabilities to test the EEP. For comparison, the space-borne MICROSCOPE experiment comparing titanium and platinum alloys in free fall for five months used differential electrostatic accelerometers to measure η ( Ti, Pt) = [-1.5 ± 2.3 ( stat.) ± 1.5 ( syst.)] × 10^-15 <cit.> and the STE-QUEST proposal for a cold atom experiment in space aims to measure η ( Rb, K) with a precision ∼ 10^-17 <cit.>. Another possibility for future large atom interferometers is to study the gravitational Aharonov-Bohm effect, following the pioneering Stanford experiment mentioned earlier that made a first measurement of this effect by splitting an atom cloud into two wave packets, one of which passed close to a large source mass whereas the other was distant <cit.>. In addition to the classical effect of the gravitational field on the trajectory of the first wave packet, the atom interferometer experiment was able to measure a quantum-mechanical phase shift due to the gravitational potential difference between the two trajectories. Considering this experiment in a quantum superposition of reference frames, it has been suggested <cit.> that this result could be interpreted as providing evidence for the quantum superposition of different gravitational field configurations. These few examples illustrate the new possibilities for probing fundamental physics that are opened up by atom interferometers. § SUMMARY AND CONCLUSION Large-scale atom interferometry is an emerging field that has the potential to contribute greatly to our understanding of the universe. In this article we have reviewed the basic principles of such devices, which exploit the wave-like nature of atoms by splitting a beam of atoms into clouds following two distinct trajectories before then recombining them to create an interference pattern that can be used to extract even tiny phase shifts. Laboratory atom interferometry experiments have already provided one of the most sensitive terrestrial tests of Einstein's Equivalence Principle and demonstrated a gravitational analogue of the Aharanov-Bohm effect. This review has highlighted how current atom interferometer projects can be scaled up from the baselines of O(10) m used in current experiments to baselines that are O(1) km on Earth, and potentially thousands of km in space. These large-scale atom interferometers have the potential to improve significantly our ability to detect and study ultralight dark matter and gravitational waves, which are some of the most elusive phenomena in modern physics. For example, they can be used to search for ultralight bosonic dark matter that interacts with Standard Model particles by measuring its effects on the energy levels of the atoms. Also, they are sensitive to the passage of gravitational waves through the phase shifts induced by the distortions of space-time that they generate. Besides these headline fundamental physics objectives, large-scale atom interferometers may also provide additional tests of the Equivalence Principle and more sensitive measurements of gravitational phase shifts. They could also be used to look for new fundamental interactions and probe the validity of quantum mechanics. A recent workshop hosted by CERN brought together members of the cold atom, particle physics, astrophysics and cosmology communities to discuss the fundamental scientific objectives of terrestrial long-baseline atom interferometers, review ongoing projects, and consider the possibilities for future projects at the km scale <cit.>. Based on these discussions, work is underway to develop a roadmap for international efforts to construct one or more km-scale atom interferometers in the 2030s. In parallel to these exciting prospects in fundamental physics research, atom interferometry has many potential practical applications. For example, it can be used in highly sensitive inertial sensors suitable for precision measurements of use in navigation and geophysical exploration. It also enables high-precision measurements of the local gravitational acceleration that are useful for detecting underground structures and potentially Earth Observation from space that can contribute to understanding the impact of climate change. Prospects for such measurements and their synergies with atom interferometer experiments in fundamental physics have been reviewed in a previous workshop that proposed a roadmap for cold atoms in space, with milestones paving the way for a possible space-borne atom interferometer experiment to undertake searches for ultralight dark matter and intermediate-frequency gravitational waves in the 2040s <cit.>. In conclusion, large-scale atom interferometry is an exciting field with many potential applications in fundamental physics, with the potential for important discoveries. As cold-atom technology continues to advance and new projects are developed, we can expect it to enable breakthroughs in our understanding of fundamental physics and the Universe as a whole. Moreover, atom interferometry has the potential to revolutionise many scientific fields beyond fundamental physics research. § ACKNOWLEDGEMENTS This work was supported by UKRI through its Quantum Technology for Fundamental Physics programme, via the following grants from EPSRC and STFC in the framework of the AION Consortium: STFC Grants ST/T006579/1 and ST/W006200/1 to the University of Cambridge, STFC Grant ST/T006994/1 to Imperial College London, and STFC Grant ST/T00679X/1 to King's College London. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising from this submission. JHEP
http://arxiv.org/abs/2306.03274v1
20230605214543
All-Orders Evolution of Parton Distributions: Principle, Practice, and Predictions
[ "Pei-Lin Yin", "Yin-Zhen. XuID", "Zhu-Fang Cui", "Craig D. Roberts", "José Rodríguez-Quintero" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-lat", "nucl-ex", "nucl-th" ]
UTF8song College of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China Dpto. Ciencias Integradas, Centro de Estudios Avanzados en Fis., Mat. y Comp., Fac. Ciencias Experimentales, Universidad de Huelva, Huelva 21071, Spain Dpto. Sistemas Físicos, Químicos y Naturales, Univ. Pablo de Olavide, E-41013 Sevilla, Spain School of Physics, Nanjing University, Nanjing, Jiangsu 210093, China Institute for Nonperturbative Physics, Nanjing University, Nanjing, Jiangsu 210093, China School of Physics, Nanjing University, Nanjing, Jiangsu 210093, China Institute for Nonperturbative Physics, Nanjing University, Nanjing, Jiangsu 210093, China Dpto. Ciencias Integradas, Centro de Estudios Avanzados en Fis., Mat. y Comp., Fac. Ciencias Experimentales, Universidad de Huelva, Huelva 21071, Spain mailto:[email protected]@njupt.edu.cn (PLY); mailto:[email protected]@dci.uhu.es (YZX); mailto:[email protected]@nju.edu.cn (ZFC); mailto:[email protected]@nju.edu.cn (CDR); mailto:[email protected]@dfaie.uhu.es (JRQ) Parton distribution functions (DFs) are defining expressions of hadron structure. Exploiting the role of effective charges in quantum chromodynamics, an algebraic scheme is described which, given any hadron's valence parton DFs at the hadron scale, delivers predictions for all its DFs – unpolarised and polarised – at any higher scale. The scheme delivers results that are largely independent of both the value of the hadron scale and the pointwise form of the charge; and, inter alia, enables derivation of a model-independent identity that relates the strength of the proton's gluon helicity DF, Δ G_p^ζ, to that of the analogous singlet polarised quark DF and valence quark momentum fraction. Using available data fits and theory predictions, the identity yields Δ G_p(ζ_ C=√ 3 GeV)=1.48(10). It furthermore entails that the measurable quark helicity contribution to the proton spin is ã_0p^ζ_ C=0.32(3), thereby reconciling contemporary experiment and theory. Preprint no. NJU-INP 075/23 All-Orders Evolution of Parton Distributions: Principle, Practice, and Predictions José Rodríguez-Quintero ^https://orcid.org/0000-0002-1651-5717 ID, 2023 June 05 ================================================================================================================ 1.Introduction. The Standard Model of particle physics has three components, one of which – quantum chromodynamics (QCD) – is supposed to describe strong interactions, i.e., the interactions between gluons and quarks that lead to the formation of mesons, baryons and, ultimately, nuclei. QCD originated with development of the quark model almost sixty years ago <cit.> and crystalised following discovery of pointlike constituents of the proton (quarks) in electron + proton deep inelastic scattering (DIS) experiments <cit.> – see Fig. <ref>. Since the electron probe is well understood, such DIS experiments are seen as a measurement of the target proton's structure. The formalism is now textbook material, e.g., Ref. <cit.>-[Ch. 4]. In the Bjorken limit – see Fig. <ref>, the proton's two structure functions become equivalent and the measurement can be interpreted in terms of quark parton distribution functions (DFs) <cit.>: writing x≈ x_B, then q(x) dx is the probability that a quark within the proton carries a light-front fraction of the proton's momentum which lies in the range [x,x+dx], x∈[0,1]. Owing to the character of quantum field theory, as illustrated by Fig. <ref>, DFs associated with glue and sea-quark degrees of freedom (d^ sof) also play a role in the QCD description of scattering processes. DFs are amongst the most fundamental properties of the target. They have been a principal focus of experiment and theory for fifty years and the potential for new, deeper insights is growing with the operation of upgraded and construction of next-generation facilities <cit.>. Crucially, parton DFs are independent of the measurement process. In fact, they are related to the modulus-squared of the target hadron light-front wave function <cit.>. Notably, an analysis of the DIS process in the infinite momentum frame, where the hadron is considered to have momentum P≈ (P,0,0, i P), P≫ m_p, is equivalent to a light-front formulation of the underlying theory <cit.>. This justifies x≈ x_B. The process-independence of DFs means that they can also be extracted from measurements of other high-energy processes. For instance, pion parton DFs can be measured in pion + proton/nucleus (π + p/A) Drell-Yan experiments – see Fig. <ref> and, e.g., Ref. <cit.>-[Ch. 9]. Regarding such high-energy processes, QCD perturbation theory is used to express each associated cross-section as a convolution of (i) elementary scattering processes involving the gluon and quark fields used to define the QCD Lagrangian density with (ii) the in-hadron parton DFs. The experimental energy scale at which the data is represented in terms of this convolution, ζ_E, is also called the factorisation scale. Asymptotic freedom in QCD <cit.> and the need to use perturbation theory to analyse the reaction means that ζ_E > m_p is required. Supposing one has a valid treatment of (i), then the parton DFs can be inferred via comparison between a measured cross-section and predictions of the convolution formula. Of course, different measurements are often made at distinct energy scales and the inferred DFs depend on that scale; but within the context of perturbation theory, a DF obtained at scale ζ_E^1>m_p can be mapped into the DF measured at ζ_E^2>ζ_E^1 using perturbative scale evolution (DGLAP) equations <cit.>. This fact somewhat obscures a critical issue. The hard scattering reactions in the measurement processes can (and must) be treated using perturbation theory. However, since DFs express properties of hadron wave functions, they are essentially nonperturbative in character. So, whilst DFs may be inferred from data, they can only be predicted using a nonperturbative treatment of QCD. For roughly forty years, models of hadron structure have been employed to estimate DFs <cit.>. Such models are assumed to represent the target at some so-called hadron scale, ζ_ H< m_p, and, from the beginning, the question has been <cit.>: At which value of ζ_ H should the result be viewed as valid? This uncertainty means that, in almost all cases, ζ_ H is treated as a data fitting parameter; an optimal value being chosen so that, after DGLAP evolution, descriptive agreement with some selected data is achieved. Consequently, both predictive power and the ability to use data to validate QCD are diminished. An alternative to modelling can today be found in analyses of parton DF matrix elements using Euclidean space lattice-regularised QCD (lQCD) <cit.>, which are beginning to provide information about global and local properties of DFs. In the lQCD scheme, the DF scale is known a priori, being determined by the lattice setup. However, challenges are posed by (i) the needs to extrapolate to zero lattice spacing, infinite volume, and realistic quark masses, and (ii) to map Euclidean-space lQCD results into the infinite momentum frame. There is steady progress in these areas. Herein, we describe an alternative to both modelling and lQCD. The scheme exploits progress made with continuum Schwinger function methods (CSMs) in treating QCD <cit.>. With CSMs, the challenge lies in a need to truncate the coupled set of quantum equations of motion when formulating the calculation of DF matrix elements. It is being overcome by using symmetries, e.g., Poincaré invariance and current conservation relations, to build sound approximations that deliver robust results <cit.>. 2.Effective Charge and Hadron Scale. There are two principal elements in the formulation of all-orders evolution. The first is the notion of an effective charge <cit.>, viz. a QCD running coupling, α_ eff(k^2), is defined by any chosen observable via the formula which expresses that observable to first-order in the perturbative coupling. Thus defined, α_ eff(k^2) implicitly incorporates terms of arbitrarily high order in the perturbative coupling. Effective charges are typically process dependent, like α_g_1(k^2), defined via the Bjorken sum rule <cit.>. On the other hand, any such effective charge has many valuable qualities, e.g., it is: consistent with the QCD renormalisation group; renormalisation scheme independent; everywhere analytic and finite; and supplies an infrared completion of any standard running coupling. Regarding DFs, other choices are possible and the following is proving efficacious: the effective charge α_1ℓ(k^2) is that function which, when used to integrate the leading-order perturbative DGLAP equations, defines an evolution scheme for all parton DFs – both unpolarised and polarised, and for any hadron – that is all-orders exact. This definition is broader than usual because it refers to an entire class of observables. It is worth stressing that, although the pointwise form of α_1ℓ(k^2) is largely irrelevant, the process-independent strong running coupling defined and computed in Refs. <cit.> has all the required properties – see, e.g., Refs. <cit.>. The second key to all-orders evolution is a specification of the hadron scale, ζ_ H<m_p, i.e., the point from which all-orders evolution should begin. In order to eliminate all ambiguity, the natural choice is to associate ζ_ H with that scale at which all properties of a given hadron are carried by its valence quasiparticle d^ sof. This means, e.g., that all of a hadron's light-front momentum is carried by valence d^ sof at ζ_ H; and, furthermore, that DFs associated with gluons and sea quarks are identically zero at ζ_ H. As an illustration, then, interpreted within the context of CSM treatments of QCD, this approach identifies ζ_ H as the scale at which strong interaction bound-state problems are most efficiently solved in terms of dressed-parton d^ sof. Today, the characteristics of these dressed-gluons and -quarks are well understood <cit.>. It is here worth recalling the approach to parton DFs introduced in Refs. <cit.>-[GRV]. GRV supposed that all DFs are nonzero at some low scale, μ_0 ≈ζ_ H, being a model parameter; characterised those DFs by an array of parameters; and used a perturbative QCD coupling to evolve the DFs to larger scales for comparison with results inferred from experiment. In the GRV scheme, all parameters are varied so as to obtain a “best fit” to some body of data. The point of contact between the GRV approach and all-orders evolution is the idea of generating all higher-scale DFs from some set of low (hadron) scale DFs; but that is where the similarity ends. In the all-orders approach, ζ_ H is not a parameter; glue and sea DFs are identically zero at this scale; the valence dof DFs are not parametrised – instead, they are obtained from one or another nonperturbative calculation; and specifying the pointwise form of α_1ℓ(k^2) is typically unnecessary. 3.Algebraic Evolution Equations: Unpolarised. Consider the unpolarised DF associated with a given dof, p, in hadron H: p_H(x;ζ_ H). Such DFs express an average over the polarisations/helicities of any given constituent dof. The Mellin moments of this DF are defined as follows: ⟨ x^n ⟩_ p_H^ζ_ H = ∫_0^1 dx x^n p_H(x;ζ_ H) , n∈ℤ^≥ 0 . Given that any DF of physical interest is completely determined once all such Mellin moments are known, then it is sufficient to discuss all-orders evolution in terms of the behaviour of the moments in Eq. (<ref>). This enables one to arrive at an entirely algebraic formulation. Further, there are two key constraints. Conservation of baryon number in the Standard Model entails ⟨ x^0 ⟩_ p_H^ζ>ζ_ H = ⟨ x^0 ⟩_ p_H^ζ_ H = n_ p∈ H , where n_ p∈ H is the number of valence p d^ sof in H, e.g., n_ u∈ p=2 because there are two valence u quarks in the proton; and, by definition, for any hadrons, H, H^', ∑_ valence d^ s of ∈ H⟨ x ⟩_ p_H^ζ_ H = 1= ∑_ valence d^ s of ∈ H^'⟨ x ⟩_ p_H^'^ζ_ H i.e., in every hadron at ζ_ H, all the light-front momentum is invested in valence d^ sof. Written in terms of quark and antiquark DFs, the nonsinglet/valence and singlet DFs for a given quark flavour (q = u, d, s, c, …) are, respectively, V_H^ q(x;ζ) = q_H(x;ζ) - q̅_H(x;ζ) , Σ_H^ q(x;ζ) = q_H(x;ζ) + q̅_H(x;ζ) . The associated sea distribution is S_H^ q(x;ζ) = Σ_H^ q(x;ζ)- V_H^ q(x;ζ) and we denote the glue distribution by g_H(x;ζ). As reviewed elsewhere <cit.>-[Ch. 4], DGLAP evolution is defined by splitting functions. Ignoring quark current-mass effects, there are four such functions: P_qq(z), P_qg(z), P_gq(z), P_gg(z), each of which expresses the probability for a given parton, p, to split into two other partons with light-front momentum fractions that are factors of (z, 1-z) smaller than that of the parent. In this context, for instance, P_qg(z) is the probability that a gluon splits into a quark + antiquark pair, thereby transferring momentum from the glue DF into the singlet distribution of the given quark flavour. The process becomes more effective as the number of active quark flavours increases. Against this background and after some straightforward algebra, the all-orders evolution equations are: ζ^2 d/dζ^2⟨ x^n ⟩_ V_H^ q^ζ = - α_1ℓ(ζ^2)/4πγ_qq^n ⟨ x^n ⟩_ V_H^ q^ζ , ζ^2 d/dζ^2⟨ x^n ⟩_Σ_H^ q^ζ = - α_1ℓ(ζ^2)/4π{γ_qq^n ⟨ x^n ⟩_Σ_H^ q^ζ. . + 2 T_ qg^ζ[γ_qg^n + B_ qg^nζ] ⟨ x^n ⟩_ g_H^ζ0ex3ex} , ζ^2 d/dζ^2⟨ x^n ⟩_ g_H^ζ = - α_1ℓ(ζ^2)/4π{γ_gq^n ∑_ q⟨ x^n ⟩_Σ_H^ q^ζ. . + γ_gg^n ⟨ x^n ⟩_ g_H^ζ0ex4ex} , where q runs over all active quark flavours, n_f=4 ⇒ u, d, s, c, and γ_qq^n, γ_qg^n, γ_gq^n, γ_gg^n, β_0=11-2 n_f/3 are anomalous dimensions: γ_qq^n =γ_0^n = -4/3[ 3 + 2/(n+1)(n+2) - 4 ∑_k=1^n+11/k] , γ_qg^n = -4+n(3+n)/(n+1)(n+2)(n+3) , γ_gq^n = - 8/34+n(3+n)/n (n+1) (n+2) , γ_gg^n = -12 [ 2(3+n(3+n))/n(n+1)(n+2)(n+3) . . - ∑_k=1^n+11/k] - β_0 . These are textbook results for massless splitting functions, included here for ease of access. Notably, α_1ℓ(ζ^2) is positive definite and γ_qq^n increases monotonically from zero with n; so, evolution works on V_H^ q(x;ζ) to shift support from large- to small-x and into glue and sea DFs. Equation (<ref>) contains two additional factors. The first, T_ qg^ζ = θ(ζ - ϵ_ q), ϵ_ q=ζ_ H + δ_ q, is a quark flavour threshold function, which introduces the feature that a given quark flavour only participates in evolution once the resolving scale exceeds a value related to its effective mass. Such a factor was exploited elsewhere <cit.>, with δ_ u, d=0, δ_ s=0.1GeV, δ_ c = 0.9GeV, to explain the small, yet nonzero c+c̅ contribution to the proton light-front momentum fraction decomposition. The second flavour-dependent factor in Eq. (<ref>) derives from the following modification of the gluon splitting function: P_qg(z) → P_qg(z;ζ) = P_qg(z) + √32 C_1^(1)(1-2z) P_ q(ζ) , where C_1^(1) is a Gegenbauer polynomial and P_ q(ζ)= b_ q/[1+(ζ/ζ_H-1)^2], with b_ q a number. The correction term enables one to represent Pauli blocking in gluon splitting, viz. following ideas in Ref. <cit.>, gluon splitting into quark + antiquark pairs should be sensitive to the hadron's valence quark content so, e.g., g→ u+u̅ in the proton should be disfavoured relative to g→ d+d̅. The factor vanishes with increasing ζ, expressing the diminishing influence of valence quarks as the proton's glue and sea content increases. Using Eq. (<ref>), one finds B_ qg^nζ = √ 3 n /2+3 n + n^2 P_ q(ζ) . B_ qg^0ζ =0; hence, leaves ⟨ x^0 ⟩ unchanged, i.e., Pauli blocking has no impact on baryon number. Further, momentum conservation entails ∑_ q T_ qg^ζ b_ q = 0. Equation (<ref>) was used elsewhere <cit.>, with b_ d=0.34=- b_ u, b_ s = 0 = b_ c, to explain the asymmetry of antimatter in the proton <cit.>-[SeaQuest]. In this case, the effect of Eq. (<ref>) was to shift momentum into d+d̅ from u+u̅, leaving the sum unchanged. Equation (<ref>) is not explicitly coupled to Eqs. (<ref>), (<ref>); so, can be solved in isolation: ∀ζ > ζ_ H, ⟨ x^n ⟩_ V_H^ q^ζ = ⟨ x^n ⟩_ V_H^ q^ζ_ H [ E_ V(ζ,ζ_ H)]^γ_qq^n/γ_0^1 , E_ V(ζ,ζ_ H) = exp[γ_0^1/4π∫^lnζ_ H^2_lnζ^2 dt α_1ℓ( e^t) ] , where we have exploited universality of α_1ℓ. N.B. With increasing ζ, E_ V(ζ,ζ_ H) falls logarithmically (with exponent γ_0^1/β_0) on ζ≫ζ_ H. Regarding the solutions for glue and sea, one proceeds by defining moments of the net singlet distribution ⟨ x^n ⟩_Σ_H^ζ = ∑_ q⟨ x^n ⟩_Σ_H^ q^ζ , in terms of which, using momentum conservation, the solutions of Eqs. (<ref>), (<ref>) can be written: [ [ ⟨ x^n ⟩_Σ_H^ζ; ⟨ x^n ⟩_ g_H^ζ ]] = {[ T_2; T_3; T_4; ]} [ [ ⟨ x^n ⟩_Σ_H^ζ_ H; ⟨ x^n ⟩_ g_H^ζ_ H ]] , {[ T_2; T_3; T_4; ]} = {[ . E_ S^2(ζ,ζ_ H)|_ζ<ϵ_ s; . E_ S^3(ζ,ϵ_ s) E_ S^2(ϵ_ s,ζ_ H) |_ϵ_ s < ζ<ϵ_ c; . E_ S^4(ζ,ϵ_ c) E_ S^3(ϵ_ c,ϵ_ s) E_ S^2(ϵ_ s,ζ_ H) |_ϵ_ c < ζ ]} . Here E^n_ S(ζ_2,ζ_1) = [ [ α_+^n E^n_- + α_-^n E^n_+ β^n_g Σ[ E^n_- - E^n_+]; β^n_Σ g[ E^n_- - E^n_+] α_-^n E^n_- + α_+^n E^n_+ ]] , where α_±^n = ±λ_±^n - γ_qq^n/λ_+^n-λ_-^n , E^n_± = [ E_ V(ζ_2,ζ_1)]^λ_±^n/γ_0^1 , β_Σ g = - 2 n_f γ_qg^n/λ_+^n-λ_-^n , β_gΣ = (λ_+^n - γ_qq^n)(λ_-^n - γ_qq^n)/2 n_f γ_qg^n (λ_+^n-λ_-^n) , and λ_±^n = 12 trΓ^n ±12√([ trΓ^n]^2 - 4 Γ^n) are eigenvalues of the matrix of anomalous dimensions: Γ^n = [ [ γ_qq^n 2 n_f γ_qg^n; γ_gq^n γ_gg ]] . N.B. α_+^n+α_-^n =1, α_+^n α_-^n = β_gΣ^nβ_Σ g^n; so, E^n_ S(ζ,ζ) = 𝕀. It is now plain that, within the all-orders scheme, the evolution kernel for all DFs is that which governs evolution of the valence DFs. Using Eq. (<ref>), one obtains the characteristic equations of all-orders evolution: E_ V(ζ_2,ζ_1) = ⟨ x ⟩_ V_p^ u^ζ_2 + ⟨ x ⟩_ V_p^ d^ζ_2/⟨ x ⟩_ V_p^ u^ζ_1 + ⟨ x ⟩_ V_p^ d^ζ_1 , E_ V(ζ,ζ_ H) = ⟨ x ⟩_ V_p^ u^ζ + ⟨ x ⟩_ V_p^ d^ζ=: ⟨ x ⟩_ V_p^ζ , where the last line follows from Eq. (<ref>). These identities are true for any hadron (meson or baryon): the right-hand side is just the ratio of the sums of the light-front momentum fractions of the valence d^ sof at two different scales. Evidently: (i) the all-orders evolution kernel is independent of the form of α_1ℓ(k^2); and (ii) given any DF at one scale, ζ_1, then its form at any other scale, ζ_2, is completely determined once the valence dof momentum fraction at ζ_2 is known. Clearly, (ii) entails that all DFs are determined by the valence dof DFs at ζ_ H. This is given weight by Eq. (<ref>), which means that all evolution can be referred back to that single hadron whose properties are most readily measurable, e.g., the proton. It is interesting to temporarily suppress quark flavour thresholds, i.e., in Eq. (<ref>), ∀ q, write T_ qg^ζ = θ(ζ-ζ_ H). Then, exploiting the character of the hadron scale, viz. ⟨ x^n ⟩_ g_H^ζ_ H≡ 0, Eq. (<ref>) becomes: [ [ ⟨ x^n ⟩_Σ_H^ζ; ⟨ x^n ⟩_ g_H^ζ ]] = [ [ α_+^n E^n_- + α_-^n E^n_+; β^n_Σ g[ E^n_- - E^n_+] ]] ⟨ x^n ⟩_Σ_H^ζ_ H . Specialising to n=1: λ_+^1=56/9, λ_-^1 = 0, α_+^1=β_Σ g^1=3/7, α_-^1=β_gΣ^1=4/7; then using ⟨ x ⟩_Σ_H^ζ_ H=1 and Eqs. (<ref>), one arrives at the following: [ ⟨ x ⟩_Σ_H^ζ = 37 + 47[ ⟨ x ⟩_ V_p^ζ]^7/4,; ⟨ x ⟩_ g_H^ζ = 47 (1 - [⟨ x ⟩_ V_p^ζ]^7/4 ) , ] from which one recovers the textbook ζ≫ζ_ H results for quark and gluon momentum fractions in any hadron. Suppose now that Eqs. (<ref>), (<ref>) are valid ∀ζ > ζ_ H and reinterpret them as statements about the DFs x Σ_H(x;ζ), x g_H(x;ζ). Then, in the limit ζ/ζ_ H→∞, the zeroth moments of these distributions are nonzero constants (3/7,4/7) and all other moments vanish; hence, x Σ_H(x;ζ) ζ_ H/ζ≃ 0≈37δ(x) , x g_H(x;ζ) ζ_ H/ζ≃ 0≈47δ(x) . Restoring threshold factors, one readily obtains the ζ > ϵ_ c generalisation of Eqs. (<ref>): [ ⟨ x ⟩_Σ_H^ζ = 37 + τ(ϵ_ c,ϵ_ s) [ ⟨ x ⟩_ V_p^ζ]^7/4,; ⟨ x ⟩_ g_H^ζ = 47 - τ(ϵ_ c,ϵ_ s)[⟨ x ⟩_ V_p^ζ]^7/4 , ] where τ(ϵ_ c,ϵ_ s) = -12175 [ ⟨ x ⟩_ V_p^ϵ_ c]^-74 -24275 [ ⟨ x ⟩_ V_p^ϵ_ c]^-316 [ ⟨ x ⟩_ V_p^ϵ_ s]^-2516 + 811 [ ⟨ x ⟩_ V_p^ϵ_ c⟨ x ⟩_ V_p^ϵ_ s]^-316. Formulae valid between thresholds may be obtained progressively by setting ϵ_ c→ζ and then ϵ_ s→ζ; and Eqs. (<ref>) are recovered by setting ϵ_ c, s→ζ_ H. N.B. Equations (<ref>), (<ref>) highlight that the presence of thresholds introduces some sensitivity to the pointwise behaviour of α_1ℓ(k^2) because the momentum fraction at ζ depends on its value at distinct lower scales greater than ζ_ H. Return now to Eqs. (<ref>) and consider the sea quark DF for any given flavour, then ⟨ x^n ⟩_ S_H^ q^ζ = ⟨ x^n ⟩_Σ_H^ q^ζ - ⟨ x^n ⟩_ V_H^ q^ζ . Using this identity, the difference Eq. (<ref>) - Eq. (<ref>) yields the following [ζ^2 d/dζ^2. . + α_1ℓ(ζ^2)/4πγ_qq^n ] ⟨ x^n ⟩_ S_H^ q^ζ = - α_1ℓ(ζ^2)/4π 2 T_ qg^ζ[γ_qg^n + B_ qg^nζ] ⟨ x^n ⟩_ g_H^ζ . Evidently, the sea quark DFs are determined once the glue DF is known and the latter is solved independently in terms of the hadron's valence dof moment. Further algebra reveals the solution of Eq. (<ref>) (z= e^t/2): ⟨ x^n ⟩_ S_H^ q^ζ = - T_ qg^ζγ_qg^n /2π∫_lnϵ_ q^2^lnζ^2 dt [α_1ℓ(z^2) ⟨ x^n ⟩_ g_H^z × [⟨ x ⟩_ V_H^ζ/⟨ x ⟩_ V_H^z]^γ_qq^n/γ_0^1] - T̃_ qg^ζ𝔹^nζ_ q , 𝔹^nζ_ q = - 1/2π∫_lnϵ_ q^2^lnζ^2 dt α_1ℓ(z^2) × B_ qg^n z⟨ x^n ⟩_ g_H^z [⟨ x ⟩_ V_H^ζ/⟨ x ⟩_ V_H^z]^γ_qq^n/γ_0^1 . If present, the Pauli blocking factor introduces some additional sensitivity to the pointwise form of α_1ℓ(k^2). It is again worth considering the special case of n=1, i.e., the light-front fraction of a given hadron's momentum stored in each flavour component of the sea. Using the above identities, straightforward algebra confirms momentum conservation: ⟨ x⟩_ S_H^ζ = ∑_ q⟨ x⟩_ S_H^ q^ζ = 1 - ⟨ x ⟩_ g_H^ζ - ∑_ q⟨ x ⟩_ V^q_H^ζ = 1 - ⟨ x ⟩_ g_H^ζ - ⟨ x ⟩_ V_H^ζ , From here, additional algebra delivers the following ζ > ϵ_ c predictions (λ_ g_H^ζ=1 - ⟨ x ⟩_ g_H^ζ): ⟨ x⟩_ S_H^ u, d^ζ = 14λ_ g_H^ζ + 112λ_ g_H^ϵ_ c E_ V(ζ,ϵ_ c) + 16λ_ g_H^ϵ_ s E_ V(ζ,ϵ_ s) -12⟨ x ⟩_ V_H^ζ - 𝔹^1ζ_ u, d , ⟨ x⟩_ S_H^ s^ζ = 14λ_ g_H^ζ + 112λ_ g_H^ϵ_ c E_ V(ζ,ϵ_ c) - 13λ_ g_H^ϵ_ s E_ V(ζ,ϵ_ s) - 𝔹^1ζ_ s , ⟨ x⟩_ S_H^ c^ζ = 14λ_ g_H^ζ - 14λ_ g_H^ϵ_ c E_ V(ζ,ϵ_ c) - 𝔹^1ζ_ c , with ⟨ x ⟩_ g_H^ζ given in Eq. (<ref>). Again, formulae valid between thresholds may be obtained progressively by setting ϵ_ c→ζ and then ϵ_ s→ζ. It is worth illustrating the impact of such Pauli blocking using the pion as an exemplar. Below the s quark threshold, gluon splitting produces u+u̅ and d+d̅ with some probability and only these combinations. On the other hand, above the threshold, s+s̅ production is both possible and preferred. This may be expressed by writing T̃_ ug^ζ = T̃_ dg^ζ = T̃_ sg^ζ = θ(ζ-ϵ_ s) in Eqs. (<ref>) and b_ s = b= -2 b_ u=-2 b_ d in Eq. (<ref>). Then, using the process-independent charge described in Ref. <cit.>-[Sec. 3] for α_1ℓ(k^2), associated parton DF predictions in Ref. <cit.>, and the value b_ u= 0.34(9) required to explain the SeaQuest data <cit.>, one finds ⟨ x⟩_ S_π^ s^ϵ_ c = 1.29(11) ⟨ x⟩_ S_π^ u^ϵ_ c = 1.29(11) ⟨ x⟩_ S_π^ d^ϵ_ c . (Ignoring flavour thresholds and Pauli blocking, the right-hand side would be 1.) Thus, the light-front momentum fraction carried by s sea quarks at ζ=ϵ_ c exceeds that carried by each flavour in the light quark sea; and, at the c quark threshold, Pauli blocking has shifted 12(3)% of the momentum from each flavour in the light quark sea into the s quark sea. These outcomes mean that the same level of Pauli blocking which explains the asymmetry of antimatter in the proton leads to a marked excess of s+s̅ relative to u+u̅ and d+d̅ in the pion sea. Qualitatively similar outcomes relative to their valence quark/dof content should be expected in other hadrons. 4.Algebraic Evolution Equations: Polarised. The polarised DF Δ p(x;ζ) expresses the light-front helicity distribution of a parton p carrying a light-front fraction x of the hadron's momentum at the scale ζ. Such DFs measure the difference between the light-front number density of partons with helicity parallel to that of the hadron and those with antiparallel helicity. Thus, polarised DFs describe the contribution of a given parton species to the hadron's total spin, J. Extraction of polarised DFs requires DIS(-like) measurements with longitudinally polarised beams and targets. These demands have limited the available empirical information. Nevertheless, results from measurements in Ref. <cit.> ignited the “proton spin crisis” with a finding which, interpreted naively, suggests that quarks carry little of the proton spin – see, e.g., Ref. <cit.>. Concerning all-orders evolution, Eqs. (<ref>) - (<ref>) apply equally to polarised DFs, so long as the anomalous dimensions are amended appropriately. Regarding Ref. <cit.> and comparing Eqs. (71)-(74) with Eqs. (100), one sees that the nonsinglet anomalous dimension is unchanged. Hence, evolution of ⟨ x^n ⟩_Δ V_H^ q^ζ is also given by Eqs. (<ref>). The other three entries in Eq. (<ref>) change: γ→γ̃, with γ̃_qg^n = - 2 n_f n/(n+1)(n+2) , γ̃_gq^n = - 8/3n+3/(n+1)(n+2) , γ̃_gg^n = 6 [2n/(n+1)(n+2) + 2 ∑_k=1^n1/k] -β_0 . (The sum in the last line is zero for n=0.) Thereafter, all steps are qualitatively identical to the unpolarised case. Regarding polarised DFs, n=0 is of principal interest because it relates to the proton spin. In this case, since γ_qq^0=0, then, under all-orders evolution: ∀ζ > ζ_ H, ⟨ x^0 ⟩_Δ V_H^ q^ζ = ⟨ x^0 ⟩_Δ V_H^ q^ζ_ H . Moreover, γ̃_qq^0=γ_qq^0 =0, γ̃_qg^0=0; hence, neglecting flavour thresholds, a_0H^ζ := ⟨ x^0 ⟩_ΔΣ_H^ζ = ⟨ x^0 ⟩_ΔΣ_H^ζ_ H , Δ G_H^ζ :=⟨ x^0 ⟩_Δ g_H^ζ = a_0^ζ 1225[[ ⟨ x ⟩_ V_H^ζ ]^-75/32 - 1] . These novel identities state (i) that the valence and singlet quark polarised DFs are scale invariant and (ii), remarkably, the polarised gluon DF grows as the inverse of the net unpolarised valence parton DF with a constant of proportionality fixed by the singlet quark polarised DF. It is worth illustrating the content of Eqs. (<ref>) using the proton. Regarding the polarised singlet quark DF, if one assumes SU(3)-flavour symmetry in analyses of the axial charges of octet baryons, then these charges are expressed in terms of two low-energy constants <cit.>-[Table 1], D, F; and ⟨ x^0 ⟩_ΔΣ_p^ u^ζ_ H = 2 F, ⟨ x^0 ⟩_ΔΣ_p^ d^ζ_ H = F-D. Using empirical information <cit.>, one finds D= 0.774(26), F=0.503(27); hence, ⟨ x^0 ⟩_ΔΣ_p^ u^ζ_ H = 1.01(5), ⟨ x^0 ⟩_ΔΣ_p^ d^ζ_ H = -0.27(5), and a_0p^ζ = ⟨ x^0 ⟩_ΔΣ_p^ζ = 0.74(11) . Using Eq. (<ref>), a result for the integrated strength of the polarised gluon DF follows immediately once a value of ⟨ x ⟩_ V_p^ζ is in hand. One might choose to use a value obtained from one or another of the existing phenomenological global fits to high-energy data. A typical result <cit.>-[CT18] is listed in Position (2,A) of Table <ref>. Such fits may underestimate the valence quark momentum fraction, lodging too much momentum with the sea <cit.>. Notwithstanding that, combined with Eq. (<ref>) via Eq. (<ref>), this fraction yields the polarised gluon moment in Position (6,A) of Table <ref>: it is large because the valence quark momentum fraction is small. Experiments aimed at extracting the quark contribution to the proton spin measure the following non-Abelian anomaly corrected quantity <cit.>: ã_0p^ζ = a_0p^ζ - n_f α̂(ζ)/2πΔ G_p^ζ . Combining the results in Table <ref> – Column A, one arrives at the value in Position (7,A). In contrast, one may use the CSM prediction for the proton valence quark momentum fraction in combination with either the empirical value of a_0p^ζ, Eq. (<ref>), or the prediction from Ref. <cit.>. The results thereby obtained are listed in Columns B and C of Table <ref>. Reintroducing quark flavour thresholds, Eq. (<ref>) is unchanged, whereas on ζ > ϵ_ c, Eq. (<ref>) becomes, referring to Eq. (<ref>): Δ G_H^ζ = a_0H^ζ{1225[[ E_ V(ζ,ϵ_ c)]^-75/32-1 ] . + 49 [ E_ V(ζ,ϵ_ c)]^-75/32[[ E_ V(ϵ_ c,ϵ_ s)]^-81/32-1 ] + 1229 E_ V(ζ,ϵ_ c)]^-75/32 [ E_ V(ϵ_ c,ϵ_ s)]^-81/32 ×. [[ E_ V(ϵ_ s,ζ_ H)]^-87/32-1 ] } . As usual, formulae valid between thresholds may be obtained progressively by setting ϵ_ c→ζ and then ϵ_ s→ζ. The no-threshold result is recovered by setting ϵ_ c, s→ζ_ H. Using Eq. (<ref>) and, for α_1ℓ(k^2), the process-independent charge described in Ref. <cit.>-[Sec. 3] along with the entries in Table <ref> – Column D, one obtains the results for Δ G_p^ζ in Position (5,D). They correspond to the CSM predictions in Ref. <cit.>, after accounting for extrapolation of ∫_x_0^1 dx Δ g_p(x;ζ) to x_0=0. The result is somewhat larger than that obtained via Eq. (<ref>) because, as noted above, flavour thresholds work to suppress the number of final states available to a splitting gluon, thereby preserving strength/support in glue DFs. All results listed in Table <ref> for the anomaly corrected contribution of quark helicities to the proton spin are depicted in Fig. <ref> and compared with the value in Ref. <cit.>-[COMPASS]. Evidently, Eqs. (<ref>), (<ref>) and Eqs. (<ref>), (<ref>), deliver agreement with the COMPASS extraction, irrespective of the sources used for ⟨ x ⟩_ΔΣ_p^ζ, ⟨ x ⟩_ V_p^ζ. However, best agreement is found when using values produced by the parameter-free CSM analyses in Refs. <cit.>. 5.Summary. Every meson and baryon (hadron) is defined by its valence degrees of freedom. Yet, within quantum field theory, such hadrons may be seen as containing infinitely many additional partons. The associated distribution functions (DFs) are amongst the most important expressions of a hadron's structural characteristics. Such DFs depend on the scale whereat the hadron's structure is resolved, i.e., on the probe's wavelength. In this connection, we identified ζ_ H – the hadron scale – as that value of the resolving scale at which valence degrees of freedom carry all properties of every hadron and introduced an effective charge, α_1ℓ, which, when used to integrate the leading-order scale evolution equations, delivers an evolution scheme for all DFs – both unpolarised and polarised – that is all-orders exact. Building solely upon these propositions, we presented an algebraic scheme which, given the valence DFs for any hadron at ζ_ H, delivers predictions for all its DFs at ζ > ζ_ H. The all-orders scheme delivers results that are typically independent of both the value of ζ_ H and the pointwise form of α_1ℓ; hence, it has real predictive power. For instance, with Pauli blocking sufficient to explain the measured asymmetry of antimatter in the proton, one finds that the strange quark (s+s̅) sea in the pion exceeds the u+u̅ and d+d̅ components and predicts qualitatively equivalent outcomes in other hadrons. Furthermore, the scheme delivers a model-independent algebraic identity that relates the integrated strength of the in-proton gluon helicity DF, Δ G_p^ζ, to the analogous strength of the singlet polarised quark DF and the valence quark light-front momentum fraction, ⟨ x ⟩_ V_p^ζ. It entails that Δ G_p^ζ grows faster than 1/[⟨ x ⟩_ V_p^ζ]^2 as the valence momentum fraction decreases. Exploiting this identity, contemporary measurements of the quark helicity contribution to the proton spin are reconciled with theory predictions. Acknowledgments. Work supported by: National Natural Science Foundation of China (grant no. 12135007); Nanjing University of Posts and Telecommunications Science Foundation (grant no. NY221100); Natural Science Foundation of Jiangsu Province (grant no. BK20220122); Spanish Ministry of Science and Innovation (MICINN grant no. PID2019-107844GB-C22); and Junta de Andalucía (grant no. P18-FR-5057). 63 urlstyle [Gell-Mann(1964)]GellMann:1964nj authorM. Gell-Mann, titleA Schematic Model of Baryons and Mesons, journalPhys. Lett. volume8 (year1964) pages214–215. [Zweig(1964)]Zweig:1981pd authorG. Zweig, titleAn SU(3) model for strong interaction symmetry and its breaking. Parts 1 and 2 (year1964) pages(CERN Reports No. 8182/TH. 401 and No. 8419/TH. 412). [Taylor(1991)]Taylor:1991ew authorR. E. Taylor, titleDeep inelastic scattering: The Early years, journalRev. Mod. Phys. volume63 (year1991) pages573–595. [Kendall(1991)]Kendall:1991np authorH. W. Kendall, titleDeep inelastic scattering: Experiments on the proton and the observation of scaling, journalRev. Mod. Phys. volume63 (year1991) pages597–614. [Friedman(1991)]Friedman:1991nq authorJ. I. Friedman, titleDeep inelastic scattering: Comparisons with the quark model, journalRev. Mod. Phys. volume63 (year1991) pages615–629. [Friedman et al.(1991)Friedman, Kendall, and Taylor]Friedman:1991ip authorJ. I. Friedman, authorH. W. Kendall, authorR. E. Taylor, titleDeep inelastic scattering: Acknowledgements, journalRev. Mod. Phys. volume63 (year1991) pages629. [Ellis et al.(1991)Ellis, Stirling, and Webber]Ellis:1991qj authorR. K. Ellis, authorW. J. Stirling, authorB. R. Webber, titleQCD and collider physics, publisherCambridge University Press, Cambridge, UK, year1991. [Feynman(1972)]Feynman:1973xc authorR. P. Feynman, titlePhoton-hadron interactions, publisherW. A. Benjamin, New York, year1972. [Denisov et al.(2018)]Denisov:2018unj authorO. Denisov, et al., titleLetter of Intent (Draft 2.0): A New QCD facility at the M2 beam line of the CERN SPS . [Aguilar et al.(2019)]Aguilar:2019teb authorA. C. Aguilar, et al., titlePion and Kaon Structure at the Electron-Ion Collider, journalEur. Phys. J. A volume55 (year2019) pages190. [Brodsky et al.(2020)]Brodsky:2020vco authorS. J. Brodsky, et al., titleStrong QCD from Hadron Structure Experiments, journalInt. J. Mod. Phys. E volume29 (number08) (year2020) pages2030006. [Chen et al.(2020)Chen, Guo, Roberts, and Wang]Chen:2020ijn authorX. Chen, authorF.-K. Guo, authorC. D. Roberts, authorR. Wang, titleSelected Science Opportunities for the EicC, journalFew Body Syst. volume61 (year2020) pages43. [Anderle et al.(2021)]Anderle:2021wcy authorD. P. Anderle, et al., titleElectron-ion collider in China, journalFront. Phys. (Beijing) volume16 (number6) (year2021) pages64701. [Arrington et al.(2021)]Arrington:2021biu authorJ. Arrington, et al., titleRevealing the structure of light pseudoscalar mesons at the electron–ion collider, journalJ. Phys. G volume48 (year2021) pages075106. [Aoki et al.(2021)]Aoki:2021cqa authorK. Aoki, et al., titleExtension of the J-PARC Hadron Experimental Facility: Third White Paper – arXiv:2110.04462 [nucl-ex] noteExtension of the J-PARC Hadron Experimental Facility: Third White Paper – arXiv:2110.04462 [nucl-ex]. [Quintans(2022)]Quintans:2022utc authorC. Quintans, titleThe New AMBER Experiment at the CERN SPS, journalFew Body Syst. volume63 (number4) (year2022) pages72. [Brodsky and Lepage(1979)]Brodsky:1979gy authorS. J. Brodsky, authorG. P. Lepage, titlePerturbative Quantum Chromodynamics, journalProg. Math. Phys. volume4 (year1979) pages255–422. [Brodsky et al.(1998)Brodsky, Pauli, and Pinsky]Brodsky:1997de authorS. J. Brodsky, authorH.-C. Pauli, authorS. S. Pinsky, titleQuantum chromodynamics and other field theories on the light cone, journalPhys. Rept. volume301 (year1998) pages299–486. [Politzer(2005)]Politzer:2005kc authorH. D. Politzer, titleThe dilemma of attribution, journalProc. Nat. Acad. Sci. volume102 (year2005) pages7789–7793. [Gross(2005)]Gross:2005kv authorD. J. Gross, titleThe discovery of asymptotic freedom and the emergence of QCD, journalProc. Nat. Acad. Sci. volume102 (year2005) pages9099–9108. [Wilczek(2005)]Wilczek:2005az authorF. Wilczek, titleAsymptotic freedom: From paradox to paradigm, journalProc. Nat. Acad. Sci. volume102 (year2005) pages8403–8413. [Dokshitzer(1977)]Dokshitzer:1977sg authorY. L. Dokshitzer, titleCalculation of the Structure Functions for Deep Inelastic Scattering and e^+ e^- Annihilation by Perturbation Theory in Quantum Chromodynamics. (n Russian), journalSov. Phys. JETP volume46 (year1977) pages641–653. [Gribov and Lipatov(1971)]Gribov:1971zn authorV. N. Gribov, authorL. N. Lipatov, titleDeep inelastic electron scattering in perturbation theory, journalPhys. Lett. B volume37 (year1971) pages78–80. [Lipatov(1975)]Lipatov:1974qm authorL. N. Lipatov, titleThe parton model and perturbation theory, journalSov. J. Nucl. Phys. volume20 (year1975) pages94–102. [Altarelli and Parisi(1977)]Altarelli:1977zs authorG. Altarelli, authorG. Parisi, titleAsymptotic Freedom in Parton Language, journalNucl. Phys. B volume126 (year1977) pages298–318. [Holt and Roberts(2010)]Holt:2010vj authorR. J. Holt, authorC. D. Roberts, titleDistribution Functions of the Nucleon and Pion in the Valence Region, journalRev. Mod. Phys. volume82 (year2010) pages2991–3044. [Jaffe and Ross(1980)]Jaffe:1980ti authorR. L. Jaffe, authorG. G. Ross, titleNormalizing the Renormalization Group Analysis of Deep Inelastic Leptoproduction, journalPhys. Lett. B volume93 (year1980) pages313–317. [Lin et al.(2018)]Lin:2017snn authorH.-W. Lin, et al., titleParton distributions and lattice QCD calculations: a community white paper, journalProg. Part. Nucl. Phys. volume100 (year2018) pages107–160. [Eichmann et al.(2016)Eichmann, Sanchis-Alepuz, Williams, Alkofer, and Fischer]Eichmann:2016yit authorG. Eichmann, authorH. Sanchis-Alepuz, authorR. Williams, authorR. Alkofer, authorC. S. Fischer, titleBaryons as relativistic three-quark bound states, journalProg. Part. Nucl. Phys. volume91 (year2016) pages1–100. [Burkert and Roberts(2019)]Burkert:2017djo authorV. D. Burkert, authorC. D. Roberts, titleRoper resonance: Toward a solution to the fifty-year puzzle, journalRev. Mod. Phys. volume91 (year2019) pages011003. [Fischer(2019)]Fischer:2018sdj authorC. S. Fischer, titleQCD at finite temperature and chemical potential from Dyson–Schwinger equations, journalProg. Part. Nucl. Phys. volume105 (year2019) pages1–60. [Qin and Roberts(2020)]Qin:2020rad authorS.-X. Qin, authorC. D. Roberts, titleImpressions of the Continuum Bound State Problem in QCD, journalChin. Phys. Lett. volume37 (number12) (year2020) pages121201. [Ding et al.(2020)Ding, Raya, Binosi, Chang, Roberts, and Schmidt]Ding:2019lwe authorM. Ding, authorK. Raya, authorD. Binosi, authorL. Chang, authorC. D. Roberts, authorS. M. Schmidt, titleSymmetry, symmetry breaking, and pion parton distributions, journalPhys. Rev. D volume101 (number5) (year2020) pages054014. [Roberts et al.(2021)Roberts, Richards, Horn, and Chang]Roberts:2021nhw authorC. D. Roberts, authorD. G. Richards, authorT. Horn, authorL. Chang, titleInsights into the emergence of mass from studies of pion and kaon structure, journalProg. Part. Nucl. Phys. volume120 (year2021) pages103883. [Binosi(2022)]Binosi:2022djx authorD. Binosi, titleEmergent Hadron Mass in Strong Dynamics, journalFew Body Syst. volume63 (number2) (year2022) pages42. [Papavassiliou(2022)]Papavassiliou:2022wrb authorJ. Papavassiliou, titleEmergence of mass in the gauge sector of QCD, journalChin. Phys. C volume46 (number11) (year2022) pages112001. [Ding et al.(2023)Ding, Roberts, and Schmidt]Ding:2022ows authorM. Ding, authorC. D. Roberts, authorS. M. Schmidt, titleEmergence of Hadron Mass and Structure, journalParticles volume6 (number1) (year2023) pages57–120. [Ferreira and Papavassiliou(2023)]Ferreira:2023fva authorM. N. Ferreira, authorJ. Papavassiliou, titleGauge Sector Dynamics in QCD, journalParticles volume6 (number1) (year2023) pages312–363. [Grunberg(1980)]Grunberg:1980ja authorG. Grunberg, titleRenormalization Group Improved Perturbative QCD, journalPhys. Lett. B volume95 (year1980) pages70, note[Erratum: Phys. Lett. B 110, 501 (1982)]. [Grunberg(1984)]Grunberg:1982fw authorG. Grunberg, titleRenormalization Scheme Independent QCD and QED: The Method of Effective Charges, journalPhys. Rev. D volume29 (year1984) pages2315. [Deur et al.(2023)Deur, Brodsky, and Roberts]Deur:2023dzc authorA. Deur, authorS. J. Brodsky, authorC. D. Roberts, titleQCD Running Couplings and Effective Charges – arXiv:2303.00723 [hep-ph] . [Deur et al.(2022)Deur, Burkert, Chen, and Korsch]Deur:2022msf authorA. Deur, authorV. Burkert, authorJ. P. Chen, authorW. Korsch, titleExperimental determination of the QCD effective charge α_g_1(Q), journalParticles volume5 (number2) (year2022) pages171–179. [Binosi et al.(2017)Binosi, Mezrag, Papavassiliou, Roberts, and Rodríguez-Quintero]Binosi:2016nme authorD. Binosi, authorC. Mezrag, authorJ. Papavassiliou, authorC. D. Roberts, authorJ. Rodríguez-Quintero, titleProcess-independent strong running coupling, journalPhys. Rev. D volume96 (year2017) pages054026. [Cui et al.(2020a)Cui, Zhang, Binosi, de Soto, Mezrag, Papavassiliou, Roberts, Rodríguez-Quintero, Segovia, and Zafeiropoulos]Cui:2019dwv authorZ.-F. Cui, authorJ.-L. Zhang, authorD. Binosi, authorF. de Soto, authorC. Mezrag, authorJ. Papavassiliou, authorC. D. Roberts, authorJ. Rodríguez-Quintero, authorJ. Segovia, authorS. Zafeiropoulos, titleEffective charge from lattice QCD, journalChin. Phys. C volume44 (year2020a) pages083102. [Cui et al.(2021)Cui, Ding, Gao, Raya, Binosi, Chang, Roberts, Rodríguez-Quintero, and Schmidt]Cui:2020dlm authorZ.-F. Cui, authorM. Ding, authorF. Gao, authorK. Raya, authorD. Binosi, authorL. Chang, authorC. D. Roberts, authorJ. Rodríguez-Quintero, authorS. M. Schmidt, titleHiggs modulation of emergent mass as revealed in kaon and pion parton distributions, journalEur. Phys. J. A (Lett.) volume57 (number1) (year2021) pages5. [Cui et al.(2020b)Cui, Ding, Gao, Raya, Binosi, Chang, Roberts, Rodríguez-Quintero, and Schmidt]Cui:2020tdf authorZ.-F. Cui, authorM. Ding, authorF. Gao, authorK. Raya, authorD. Binosi, authorL. Chang, authorC. D. Roberts, authorJ. Rodríguez-Quintero, authorS. M. Schmidt, titleKaon and pion parton distributions, journalEur. Phys. J. C volume80 (year2020b) pages1064. [Lu et al.(2022)Lu, Chang, Raya, Roberts, and Rodríguez-Quintero]Lu:2022cjx authorY. Lu, authorL. Chang, authorK. Raya, authorC. D. Roberts, authorJ. Rodríguez-Quintero, titleProton and pion distribution functions in counterpoint, journalPhys. Lett. B volume830 (year2022) pages137130. [Chang et al.(2022)Chang, Gao, and Roberts]Chang:2022jri authorL. Chang, authorF. Gao, authorC. D. Roberts, titleParton distributions of light quarks and antiquarks in the proton, journalPhys. Lett. B volume829 (year2022) pages137078. [Glück et al.(1995)Glück, Reya, and Vogt]Gluck:1995 authorM. Glück, authorE. Reya, authorA. Vogt, titleDynamical parton distributions of the proton and small-x physics, journalZ. Phys. C volume67 (year1995) pages433–447. [Glück et al.(1998)Glück, Reya, and Vogt]Gluck:1998xa authorM. Glück, authorE. Reya, authorA. Vogt, titleDynamical parton distributions revisited, journalEur. Phys. J. C volume5 (year1998) pages461–470. [Field and Feynman(1977)]Field:1976ve authorR. D. Field, authorR. P. Feynman, titleQuark Elastic Scattering as a Source of High Transverse Momentum Mesons, journalPhys. Rev. D volume15 (year1977) pages2590–2616. [Dove et al.(2021)]SeaQuest:2021zxb authorJ. Dove, et al., titleThe asymmetry of antimatter in the proton, journalNature volume590 (number7847) (year2021) pages561–565. [Ashman et al.(1988)]EuropeanMuon:1987isl authorJ. Ashman, et al., titleA Measurement of the Spin Asymmetry and Determination of the Structure Function g_1 in Deep Inelastic Muon-Proton Scattering, journalPhys. Lett. B volume206 (year1988) pages364. [Anselmino et al.(1995)Anselmino, Efremov, and Leader]Anselmino:1994gn authorM. Anselmino, authorA. Efremov, authorE. Leader, titleThe Theory and phenomenology of polarized deep inelastic scattering, journalPhys. Rept. volume261 (year1995) pages1–124, note[Erratum: Phys.Rept. 281, 399–400 (1997)]. [Deur et al.(2019)Deur, Brodsky, and De Téramond]Deur:2018roz authorA. Deur, authorS. J. Brodsky, authorG. F. De Téramond, titleThe Spin Structure of the Nucleon, journalRept. Prog. Phys. volume82 (number076201). [Cabibbo et al.(2003)Cabibbo, Swallow, and Winston]Cabibbo:2003cu authorN. Cabibbo, authorE. C. Swallow, authorR. Winston, titleSemileptonic hyperon decays, journalAnn. Rev. Nucl. Part. Sci. volume53 (year2003) pages39–75. [Workman et al.(2022)]Workman:2022ynf authorR. L. Workman, et al., titleReview of Particle Physics, journalPTEP volume2022 (year2022) pages083C01. [Chen and Roberts(2022)]ChenChen:2022qpy authorC. Chen, authorC. D. Roberts, titleNucleon axial form factor at large momentum transfers, journalEur. Phys. J. A volume58 (year2022) pages206. [Hou et al.(2021)]Hou:2019efy authorT.-J. Hou, et al., titleNew CTEQ global analysis of quantum chromodynamics with high-precision data from the LHC, journalPhys. Rev. D volume103 (number1) (year2021) pages014013. [Cui et al.(2022)Cui, Ding, Morgado, Raya, Binosi, Chang, Papavassiliou, Roberts, Rodríguez-Quintero, and Schmidt]Cui:2021mom authorZ. F. Cui, authorM. Ding, authorJ. M. Morgado, authorK. Raya, authorD. Binosi, authorL. Chang, authorJ. Papavassiliou, authorC. D. Roberts, authorJ. Rodríguez-Quintero, authorS. M. Schmidt, titleConcerning pion parton distributions, journalEur. Phys. J. A volume58 (number1) (year2022) pages10. [Adolph et al.(2017)]COMPASS:2016jwv authorC. Adolph, et al., titleFinal COMPASS results on the deuteron spin-dependent structure function g_1^ d and the Bjorken sum rule, journalPhys. Lett. B volume769 (year2017) pages34–41. [Altarelli and Ross(1988)]Altarelli:1988nr authorG. Altarelli, authorG. G. Ross, titleThe Anomalous Gluon Contribution to Polarized Leptoproduction, journalPhys. Lett. B volume212 (year1988) pages391–396. [Cheng et al.(2023)Cheng, Yu, Xing, Chen, Cui, and Roberts]Cheng:2023kmt authorP. Cheng, authorY. Yu, authorH.-Y. Xing, authorC. Chen, authorZ.-F. Cui, authorC. D. Roberts, titlePolarised parton distribution functions and proton spin – arXiv:2304.12469 [hep-ph] .
http://arxiv.org/abs/2306.02032v1
20230603072235
ADMM-based Detector for Large-scale MIMO Code-domain NOMA Systems
[ "Vinjamoori Vikas", "Kuntal Deka", "Sanjeev Sharma", "A. Rajesh" ]
cs.IT
[ "cs.IT", "math.IT" ]
new spy style/.style=spy scope= magnification=2.8, size=1.25cm, connect spies, every spy on node/.style= rectangle, draw, , every spy in node/.style= draw, rectangle, L[1]> m#1 C[1]> m#1 R[1]> m#1 background foreground background,main,foreground int=[draw, fill=white!20, minimum size=2em] init = [pin edge=to-,thin,black] ADMM-based Detector for Large-scale MIMO Code-domain NOMA Systems Vinjamoori Vikas^1, Kuntal Deka^1, Sanjeev Sharma^2, and A. Rajesh^1 ^1Indian Institute of Technology Guwahati, India, ^2Indian Institute of Technology (BHU) Varanasi, India July 31, 2023 ========================================================================================================================================================================================== Large-scale multi-input multi-output (MIMO) code domain non-orthogonal multiple access (CD-NOMA) techniques are one of the potential candidates to address the next-generation wireless needs such as massive connectivity, and high reliability. This work focuses on two primary CD-NOMA techniques: sparse-code multiple access (SCMA) and dense-code multiple access (DCMA). One of the primary challenges in implementing MIMO-CD-NOMA systems is designing the optimal detector with affordable computation cost and complexity. This paper proposes an iterative linear detector based on the alternating direction method of multipliers (ADMM). First, the maximum likelihood (ML) detection problem is converted into a sharing optimization problem. The set constraint in the ML detection problem is relaxed into the box constraint sharing problem. An alternative variable is introduced via the penalty term, which compensates for the loss incurred by the constraint relaxation. The system models, i.e., the relation between the input signal and the received signal, are reformulated so that the proposed sharing optimization problem can be readily applied. The ADMM is a robust algorithm to solve the sharing problem in a distributed manner. The proposed detector leverages the distributive nature to reduce per-iteration cost and time. An ADMM-based linear detector is designed for three MIMO-CD-NOMA systems: single input multi output CD-NOMA (SIMO-CD-NOMA), spatial multiplexing CD-NOMA (SMX-CD-NOMA), and spatial modulated CD-NOMA (SM-CD-NOMA). The impact of various system parameters and ADMM parameters on computational complexity and symbol error rate (SER) has been thoroughly examined through extensive Monte Carlo simulations. Code domain-NOMA (CD-NOMA), dense code multiple access (DCMA), sparse code multiple access (SCMA), alternating direction method of multipliers (ADMM), single input multi output (SIMO), spatial multiplexing MIMO (SMX-MIMO), spatial modulation MIMO (SM-MIMO). § INTRODUCTION §.§ Motivation The existing literature extensively studies two types of code domain non-orthogonal multiple access (CD-NOMA) systems. The first type is sparse coded CD-NOMA systems, which utilize low-density spreading sequence design (e.g., low-density signatures (LDS)) or sparse codebook design (e.g., sparse code multiple access (SCMA)) <cit.>. The second type is densely coded CD-NOMA systems, which utilize dense spreading sequence design (e.g., overloaded code-division multiple access (CDMA)) or dense codebook design (e.g., dense code multiple access (DCMA)) <cit.>. Throughout the paper, dense codebook-based NOMA is referred to as DCMA, and dense spreading sequence-based NOMA is referred to as overloaded CDMA. SCMA and DCMA are critical enabling technologies to improve spectral efficiency and massive connectivity by overloading multiple user equipment's (UEs) data on a single resource element (RE). These techniques exploit the codebooks' multi-dimensional constellation (MDC) coding gain. Spreading sequence-based NOMA (LDS, overloaded CDMA) does not benefit from the coding gain of the MDC. Thus, codebook-based NOMA techniques offer substantial error rate performance gains over the spreading sequence-based NOMA. Multi-input multi-output (MIMO) technology is another crucial enabler in improving spectral efficiency and system reliability <cit.>. CD-NOMA systems using a MIMO system (MIMO-CD-NOMA) offer even greater spectral efficiency, reliability, and massive connectivity improvements. This paper considers three types of uplink (UL) MIMO-CD-NOMA systems: (1) Single-input multi-output (SIMO) aided CD-NOMA, where each UE has a single antenna <cit.>; (2)  Spatial multiplexing-CD-NOMA (SMX-CD-NOMA), where each UE is equipped with multiple transmit antennas <cit.>; and (3) Spatial modulated CD-NOMA (SM-CD-NOMA), where each UE activates a single transmit antenna out of multiple transmit antennas at the UE <cit.>. The primary challenge in implementing MIMO-CD-NOMA systems lies in designing a multiuser signal detector that offers excellent performance while maintaining affordable complexity. The message passing algorithm (MPA) is usually used in SCMA detection <cit.>. However, the MPA for MIMO-SCMA signal detection <cit.> needs to improve its computational complexity. MPA detection over the MIMO-SCMA system uses an extended factor graph due to the additional antennas at the UEs and the BS <cit.>. The MPA exhibits exponential complexity with the codebook size (M), the number of UEs (J), and the number of transmit antennas. Thus, MPA becomes impractical for highly overloaded SCMA systems over large-scale MIMO systems. Further, MPA often faces convergence issues when the factor graph contains cycles. To overcome the limited diversity issues (due to sparsity) in SCMA, researchers have recently started designing the DCMA systems <cit.>. However, MPA demands sparsity for the accurate detection of codewords. DCMA codewords do not hold sparsity properties. Due to enormous short cycles in the DCMA factor graph, MPA is no longer a suitable detector for DCMA systems. Thus, an alternate detection algorithm must be developed to achieve the full diversity offered by DCMA with minimal detection complexity. The generalized sphere decoder (GSD) is used as a detector for a spread sequence-based DCMA system by considering a single antenna at both the base station (BS) and the UE <cit.>. The GSD works on principles of the tree search algorithm. Its complexity depends on the number of tree nodes and floating point operations (FLOPs) at each node. The number of tree nodes will increase multi-fold in the codebook-based DCMA systems. Moreover, the complexity of GSD increases rapidly for large-scale MIMO DCMA systems. So, it is not an efficient detector for large-scale MIMO CD-NOMA systems. Designing a low-complexity multiuser detector (MUD) for large-scale MIMO-aided CD-NOMA systems with higher overloading factors (λ) and modulation order/codebook-size (M) is becoming essential. The existing research studies mainly nonlinear detectors (MPA and sphere decoder) for CD-NOMA systems <cit.>. On the other hand, minimum mean square error (MMSE) and zeros forcing (ZF) are commonly used linear detectors in standalone MIMO systems<cit.>. These detectors are simple to implement and they exhibit low computational complexity over nonlinear detectors. However, these detectors show poor symbol error rate (SER) performance over CD-NOMA systems with high λ values. In addition, they fail to detect the CD-NOMA signals as M increases. Thus, designing an efficient detector with low complexity is mandatory to fully unfurl the benefits of MIMO CD-NOMA systems with a practically viable approach. Further, the detector must perform better than the existing linear detectors for large-scale CD-NOMA parameters (λ, M). §.§ Related prior works Recently, the ADMM has been widely used to solve convex and non-convex problems in a distributed manner<cit.>. Glowinski and Marrocco first proposed the ADMM in the mid-1970s. Later on, Boyd et al. rigorously discussed various complex optimization problems that can be solved using ADMM in their tutorial <cit.>. The ADMM is formed by the composition of dual ascent and the method of multipliers. The dual decomposition properties of dual ascent make the ADMM solve in a distributed manner. ADMM has superior convergence properties and often is able to converge without the requirement of strict convexity. A comprehensive treatment of ADMM and its advancements are detailed in  <cit.>. The ADMM is extensively applied to solve the linear programming (LP) problem in low-density parity-check code's (LDPC) decoding<cit.>. The authors here have also shown that the ADMM significantly reduces the complexity of the decoder compared to the belief propagation (BP) methods. The MUD problem of UL grant-free NOMA is solved by using ADMM <cit.>. The ADMM-based infinity norm (ADMIN) iterative linear detector is proposed for massive MIMO systems<cit.>. The authors have also proposed VLSI architecture for the ADMIN detector. The results show that ADMIN outperforms all the linear detectors in terms of SER performance and low hardware cost compared to the nonlinear BP-based detectors. Similarly, an ADMM-based QAM signal detector for massive MIMO systems is proposed <cit.>. A distributed penalty sharing (DPS) ADMM method is designed to convert the maximum-likelihood (ML) detection problem into sharing optimization problem. This method shows good performance and complexity trade-offs for massive MIMO systems. The authors in <cit.> show that the ADMM-based detector performs MMSE equalization in the first iteration. The performance of the ADMM improves over MMSE as iterations progress. §.§ Contributions: This paper proposes an ADMM-based iterative linear detector that solves the MUD problem of large-scale MIMO-aided CD-NOMA systems. It exhibits the best trade-off between the SER performance and computational complexity for SCMA and DCMA systems. We have formulated the highly computationally complex ML detection problem of large-scale MIMO-CD-NOMA systems as the non-convex distributed optimization problem. To the best of the authors' knowledge, this is the first attempt to convert the ML detection problem of CD-NOMA into a sharing problem. Further, the ADMM approach is applied to solve the proposed sharing problem using distributed optimization. The proposed ADMM-based detector is guaranteed to improve the error rate performance over linear detectors with lower computational complexity than nonlinear detectors. The main contributions of this work are listed below: * The ML MUD problem is nondeterministic polynomial-time hard (NP-hard) and can be solved using exhaustive search <cit.>. However, the complexity of the exhaustive search increases exponentially as the number of UEs and codebook size increase. The ML problem is transformed into a non-convex sharing optimization problem to overcome this challenge. An efficient distributed optimization method is then applied to solve the proposed sharing problem. * The CD-NOMA system models, i.e., the relationships between the input signal and received signal, are reformulated so that the sharing ADMM algorithm can be readily applied for detection. For SIMO-CD-NOMA, the signal received by multiple antennas at BS is considered as a single observation vector for detection. For SMX-CD-NOMA and SM-CD-NOMA, resource-wise processing of the received signal is carried out during ADMM detection. * A low-complexity iterative linear detector is designed via the ADMM approach to solve the sharing optimization problem for large-scale MIMO-CD-NOMA systems. ADMM solves the sharing optimization problem through a distribution optimization framework. Indeed, distributive nature of sharing ADMM allows parallel processing in multiuser detection, reducing per-iteration computational time. * The proposed ADMM-based detector is applied to large-scale MIMO (SIMO, SMX-MIMO, and SM-MIMO) aided CD-NOMA systems. Thus, the proposed single ADMM-based MUD is capable of solving the detection problem of various CD-NOMA systems. * The complexity of the proposed ADMM-based detector is independent of the CD-NOMA system's codebook size/modulation order (M), unlike the conventional MPA detector. Thus, it can be applied to large-size codebooks. The impact of the different MIMO-CD-NOMA system parameters is also thoroughly examined. The ADMM-based detector exhibits polynomial complexity with all other parameters, such as the numbers of antennas, UEs, and REs. * The comparison of the error rate performance and receiver complexity of the proposed ADMM-based detector with all other conventional detectors is thoroughly examined. Comprehensive Monte Carlo simulations indicate that the ADMM-based detector substantially reduces the computational complexity while maintaining an error performance comparable to the MPA in SCMA detection. For spread sequence-based DCMA, the ADMM gives superior performance than GSD. §.§ Organization: The paper is organized as follows. The preliminaries of the existing CD-NOMA system models and sharing ADMM problem are discussed in Section <ref>. The proposed system models and ADMM-based detection for UL MIMO-CD-NOMA systems are provided in Section <ref>. Section  <ref> presents a detailed computational complexity analysis. The simulation results for different MIMO-CD-NOMA systems are discussed in Section <ref>. Section <ref> concludes the paper and mentions future scopes. Notations: Lower case, bold lower case, and bold upper case letters denote scalars, vectors, and matrices, respectively. (·)^T and (·)^H denote transpose and Hermitian transpose, respectively. ‖ (·) ‖ denotes the Euclidean norm of a vector. ∏_[-α,α] (·) denotes the Euclidean projection onto the interval [-α,α]. ·· denotes the inner product, and Re(·) denotes the real part of a complex variable. ℝ^n and ℂ^n denote n-dimensional real and complex vector spaces, respectively. 𝒞𝒩(0,σ^2) denotes complex Gaussian distribution with zero mean and variance σ^2. § PRELIMINARIES This section mainly discusses the existing CD-NOMA system models and steps to solve the sharing optimization problem via the ADMM approach. The CD-NOMA techniques can be broadly divided into sparse-coded and dense-coded NOMA. Low-density signature (LDS) <cit.> and SCMA <cit.> belong to the first group, whereas overloaded CDMA <cit.> and DCMA <cit.> belong to the second group. LDS and overloaded CDMA are sequence-based CD-NOMA techniques where mapping and spreading are performed separately. LDS utilizes sparse sequences, while overloaded CDMA employs dense sequences. These techniques suffer from error performance loss due to limited coding gain. On the other hand, SCMA and DCMA are codebook-based CD-NOMA techniques. In these systems, the data of each UE is mapped to a multi-dimensional codeword. SCMA utilizes sparse codewords, while DCMA applies dense codewords. These techniques offer error performance benefits through the use of MDCs <cit.>. §.§ SCMA system model SCMA is a sparse codebook-based NOMA technique. Each UE's data is sent in the form of sparse codewords. Consider a CD-NOMA system having J UEs accessing K REs, where J>K ensures the overloading nature of SCMA. Each UE in the SCMA system has access to d_v<K active REs among K REs. Thus, d_v non-zero elements are present in the K-dimensional sparse codewords. The SCMA encoder maps each UE bitstream to a sparse codeword 𝐱^K× 1 in a pre-designed SCMA codebook 𝒳^K× M of the respective UE. The sparse nature of SCMA codewords enforces d_f< J number of overlapping UEs on each RE. The diversity order of SCMA is essentially limited to d_v, i.e., far less than the maximum possible diversity order K. For example, the codebook structure for 150 % overloaded SCMA system (d_v=2, and d_f=3) is shown in Fig. <ref>. The received signal vector is given by 𝐫= ∑_j=1^J diag(𝐡_j)𝐱_j+𝐰, where * 𝐰∼𝒞𝒩(0,σ^2I_K) denotes the additive white Gaussian noise (AWGN) at the receiver. * 𝐡_j∼𝒞𝒩(0,I_K) denotes the Rayleigh fading channel vector for the jth UE. * diag(𝐡=[h[1],…, h[k], …, h[K]]^T) denotes diagonal matrix and h[k] is the kth diagonal element. * 𝐱_j is the K-dimensional codeword of jth UE. §.§ DCMA system model The error performance of SCMA suffers from limited diversity order. This limitation of SCMA can be overcome by using dense codebooks in the DCMA system <cit.>. Each UE in the DCMA system has access to all K REs. No zero entries are present in the K-dimensional dense codewords. Therefore, the DCMA system exploits the full diversity of multi-dimensional codewords. For example, the codebook structure of 150 % overloaded DCMA system is shown in Fig. <ref>. The received signal vector for the DCMA system is similar to that of the SCMA system, as described in (<ref>). In addition to the codebook-based DCMA system, the spread sequence-based DCMA can be designed with non-orthogonal spreading sequences. The idea is similar to the overloaded CDMA systems. The number of UEs is larger than the length of the spreading sequence. Uni-modular spreading sequences have been used in this paper to achieve full diversity of the dense sequences <cit.>. §.§ Shared ADMM problem This subsection introduces the fundamental concepts of solving sharing problems using ADMM within a distributed optimization framework. A novel method, referred to as ADMM, was proposed to address both convex and non-convex sharing optimization problems <cit.>. The subsequent subsection delves into the details and ideas behind the sharing ADMM problem. The ADMM is formed by combining the superior properties of dual ascent and the method of multipliers. This combination ensures the robustness of ADMM <cit.>. The generic sharing problem is min ∑_i=1^N f_i(𝐱_i)+g(∑_i=1^N 𝐱_i), where 𝐱_i∈ℝ^n, i=1,, N, the associated local cost function f_i(𝐱_i) (f_i: ℝ^n→ℝ) of subsystem i is handled by processor i, and g (g: ℝ^n→ℝ) is the shared objective, whose argument is the sum of N variables. Each variable 𝐱_i is involved in minimizing the individual cost f_i(𝐱_i), as well as the shared objective g(∑_i=1^N 𝐱_i). The sharing problem can be converted into an ADMM problem by introducing an alternative variable, 𝐳_i∈ℝ^n. Thus, the cost function is minimized over 𝐱_i and 𝐳_i, alternatively. The problem (<ref>) is converted to the following problem: min ∑_i=1^N f_i(𝐱_i)+ g(∑_i=1^N 𝐳_i ) s.t. 𝐱_i-𝐳_i=0 i=1,,N. The augmented Lagrangian function for (<ref>) can be written as ℒ({𝐱_i,𝐳_i,𝐲_i}_i=1^N)=∑_i=1^N f_i(𝐱_i)+g(∑_i=1^N 𝐳_i ) +∑_i=1^N 𝐱_i-𝐳_i𝐲_i+ρ/2∑_i=1^N ‖𝐱_i-𝐳_i‖_2^2 where 𝐱_i, 𝐳_i are called primal variables, 𝐲_i∈ℝ^n is the Lagrangian variable and ρ>0 is called the penalty parameter. The function in (<ref>) can be minimized by the ADMM steps <cit.> 𝐱_i^t+1 :=_𝐱_i(f_i(𝐱_i)+ρ/2‖𝐱_i-𝐳_i^t+𝐮_i^t‖_2^2) 𝐳_i^t+1 :=_𝐳_i g(∑_i=1^N 𝐳_i )+ ρ/2∑_i=1^N‖𝐳_i-𝐮_i^t-𝐱_i^t+1‖_2^2 𝐮_i^t+1 :=𝐮_i^t+𝐱_i^t+1-𝐳_i^t+1 where 𝐮_i is called a dual variable (scaled version of Lagrangian variable, 𝐮_i=𝐲_i/ρ). The problems (<ref>) and (<ref>) can be solved in parallel for i=1,,N. Let 𝐪_i=𝐮_i^t+𝐱_i^t+1. Then (<ref>) can be rewritten as min g(∑_i=1^N 𝐳_i )+ρ/2∑_i=1^N ‖𝐳_i-𝐪_i‖_2^2 s.t 𝐳̅=1/N∑_i=1^N 𝐳_i with fixed variable 𝐳̅∈ℝ^n. The problem (<ref>) has the following solution 𝐳_i=𝐪_i+𝐳̅-𝐪̅, where 𝐪̅=1/N∑_i=1^N 𝐪_i=𝐮̅^t+𝐱̅^t+1, 𝐮̅^t= 1/N∑_i=1^N 𝐮_i^t, and𝐱̅^t+1= 1/N∑_i=1^N 𝐱_i^t+1. From (<ref>) and (<ref>), the 𝐳̅-update step can be simplified to the following unconstrained problem: min g(N𝐳̅)+ρ/2∑_i=1^N ‖𝐳̅-𝐪̅|. Substituting (<ref>) for 𝐳_i^t+1 into (<ref>), the 𝐮̅-update step is 𝐮_i^t+1=𝐮̅^t+𝐱̅^t+1-𝐳̅^t+1 The dual variables {𝐮_i^t}_i=1^N are equal and can be replaced by a single variable 𝐮^t. The ADMM steps are simplified as follows 𝐱_i^t+1 :=_𝐱_i(f_i(𝐱_i)+ρ/2‖𝐱_i-𝐱_i^t+𝐱̅^t-𝐳̅^t+u^t‖_2^2) 𝐳̅^t+1 :=_𝐳̅(g(N𝐳̅)+ Nρ/2‖𝐳̅-u^t-𝐱̅^t+1‖_2^2) 𝐮^t+1 :=𝐮^t+𝐱̅^t+1-𝐳̅^t+1 The original sharing problem (<ref>) is decomposed into a three-step iterative optimization problem. The problem (<ref>) can be solved in parallel for i=1,, N. The step (<ref>) solves the shared objective, and (<ref>) is the dual variable update step. § ADMM-BASED DETECTION FOR UL CD-NOMA SYSTEMS This section focuses on reformulating the relation between the input and received signals of three multi-antenna CD-NOMA system models. These models are 1. SIMO-CD-NOMA, 2. Spatial multiplexing CD-NOMA (SMX-CD-NOMA), and 3. Spatial modulated CD-NOMA (SM-CD-NOMA). The reformulated models are readily applied for sharing ADMM-based detection process. These models, along with the derivation of the ADMM steps, are described in the following. §.§ SIMO CD-NOMA Consider a UL scenario with the BS equipped with N_r antennas and each UE having a single antenna. The idea is to exploit the spatial diversity gain offered by multiple receive antennas at BS <cit.>. The error rate performance of the CD-NOMA system with N_r antennas at the receiver is significantly improved <cit.> due to higher diversity gain. The K× 1 observation vector at n_rth BS antenna can be expressed as 𝐫^(n_r)=∑_j=1^J diag(𝐡_j^(n_r))𝐱_j+𝐰^(n_r), n_ r=1,,N_ r where * 𝐰^(n_r)∼𝒞𝒩(0,σ^2𝐈_K), K× 1 vector assumed to be independent and identically distributed (i.i.d.), AWGN at the n_rth BS antenna. * 𝐡_j^(n_r)∼𝒞𝒩(0,𝐈_K), K× 1 vector assumed to be i.i.d. complex Gaussian random vector, denotes Rayleigh fading channel coefficients between the jth UE and n_rth BS antenna. 𝐡_j^(n_r)=[h[1]… h[k] … h[K]]^T and diag(𝐡_j^(n_r)) denotes diagonal matrix with h[k] being the kth diagonal element. * 𝐱_j=[x_j[1]… x_j[k] … x_j[K]]^T is a codeword from the jth UE's codebook, 𝒳_j^K× M. The overall KN_r× 1 observation vector at the BS for the SIMO CD-NOMA system is given by 𝐫=𝐇𝐱_mu+𝐰, where * 𝐫=[𝐫^(1)^T𝐫^(n_r)^T,, 𝐫^(N_r)^T]^T. * 𝐰∼𝒞𝒩(0,σ^2𝐈_KN_r), N_rK× 1 vector denotes the AWGN vector at the BS. * 𝐱_mu is the J N_e× 1 multi-user concatenated transmitted signal. 𝐱_mu=[x_1[1] x_1[N_e] x_j[1] x_j[N_e] x_J[1] x_J[N_e]]^T, where N_e is the number of nonzero elements in codeword. We have N_e=K and N_e=d_v for DCMA and SCMA, respectively. The transmitted signal 𝐱_mu can be rewritten to facilitate the sharing-based detection problem in Section <ref> i.e., 𝐱_mu=∑_j=1^J 𝐱_0j. The variable 𝐱_0j=[0 0 x_j[1] x_j[2] x_j[N_e] 0 0]^T represents the jth UE codeword. * The channel matrix for DCMA system of size KN_r× JK is given by 𝐇= [ diag(𝐡_1^(1)) diag(𝐡_j^(1)) diag(𝐡_J^(1)); diag(𝐡_1^(2)) diag(𝐡_j^(2)) diag(𝐡_J^(2)); ⋮ ⋮ ⋮; diag(𝐡_1^(N_r)) diag(𝐡_j^(N_r)) diag(𝐡_J^(N_r)) ]. * The channel matrix for SCMA system of size KN_r× Jd_v is given by 𝐇= [ diag(𝐡_1^(1)) diag(𝐡_j^(1)) diag(𝐡_J^(1)); diag(𝐡_1^(2)) diag(𝐡_j^(2)) diag(𝐡_J^(2)); ⋮ ⋮ ⋮; diag(𝐡_1^(N_r)) diag(𝐡_j^(N_r)) diag(𝐡_J^(N_r)) ]. where diag(𝐡_j^(N_r)) is K× d_v matrix after removing the columns in diag(𝐡_j^(N_r)) corresponding to zero elements of jth UE codeword 𝐱_j (inactive REs of jth UE). Consider the factor graph matrix for the 150 % overloading SCMA system with K=4, J=6, d_f=3, d_v=2: 𝐅= [ 1 0 1 0 1 0; 0 1 1 0 0 1; 1 0 0 1 0 1; 0 1 0 1 1 0 ]. For the 1st UE and the n_rth receiving antenna, the channel matrix is given below: diag(𝐡_1^(n_r))=[ h_1^(n_r)[1] 0 0 0 0 h_1^(n_r)[3] 0 0 ]. The transmitted signal from all UEs is given by x_mu = [ x_1[1] x_1[3]_UE 1 x_2[2] x_2[4]_UE 2 x_3[1] x_3[2] _UE 3 x_4[3] x_4[4]_UE 4 x_5[1] x_5[4] _UE 5 x_6[2] x_6[3]_UE 6]^T. The 3rd UE codeword 𝐱_03 in x_mu is given by 𝐱_03 = [ 0 0_UE 1 0 0_UE 2 x_3[1] x_3[2] _UE 3 0 0_UE 4 0 0 _UE 5 0 0_UE 6]^T. The variables {𝐱_0j}_j=1^J are similarly represented as 𝐱_03. Note that, the transmitted signal 𝐱_mu can be written as, 𝐱_mu=∑_j=1^6𝐱_0j. §.§ SMX-CD-NOMA The spectral efficiency of the UL CD-NOMA system is further enhanced by placing multiple antennas at the transmitter <cit.>. Multiple antennas at both transmitter and receiver of the CD-NOMA system are considered in this section. Multiple antennas at the transmitter exploit multiplexing gain offered by spatially multiplexing several data streams onto the MIMO channel. Consider a scenario where each UE is equipped with N_t transmitting antennas, and the BS is equipped with N_r receiving antennas. The total input data at each UE, N_tlog_2(M) bits, are divided into N_t parallel data streams. Each data stream is fed to the CD-NOMA encoder, as shown in Fig. <ref>. Note that in the SMX-CD-NOMA system, all the UE antennas transmit data simultaneously. The main challenge of SMX-CD-NOMA in practical implementation is the computational complexity at both transmitter and receiver. Further, detecting the data transmitted from multiple antennas of multiple UEs is a complex operation. The proposed method performs resource-wise processing at the BS via the ADMM algorithm, as shown in Fig. <ref>. The received signal at the BS over the kth RE is given by 𝐫_k=𝐇_k𝐱_smx,k+𝐰_k, where * 𝐫_k=[r_k^1 r_k^n_r r_k^N_r]^T, N_r× 1 observation vector. * 𝐰_k∼𝒞𝒩(0,σ^2𝐈_N_r), N_r× 1 vector denotes the AWGN at the BS over kth RE. * 𝐱_smx,k is the N_uN_t× 1 transmitted vector on kth RE. 𝐱_smx,k=[𝐱_1,k^T𝐱_j,k^T𝐱_N_u,k^T]^T, and each 𝐱_j,k=[x_j,k,1 x_j,k,n_t x_j,k,N_t]^T is N_t× 1 vector corresponding to jth UE. Each x_j,k,n_t∈𝐱_ j^n_t is the symbol transmitted from n_tth antenna of jth UE over kth RE, where 𝐱_ j^n_t is the codeword of jth UE transmitted from n_tth antenna. The set ζ_k represents the set of UEs overlapping on kth RE given by ζ_k={ j: 𝐱_j[k]≠ 0;1≤ j≤ J}, and|ζ_k|= N_u, where N_u=d_f and N_u=J for SCMA and DCMA, respectively. The transmitted signal 𝐱_smx,k can be rewritten to formulate the sharing-based detection problem in Section <ref> i.e., 𝐱_smx,k= ∑_j=1^N_u𝐱_0j,k. The variable 𝐱_0j,k=[0 0 x_j,k,1x_j,k,2 x_j,k,N_ t0 0]^T is the N_uN_t× 1 vector, and represents the jth overlapped UE on kth RE. The nonzero elements in 𝐱_0j,k are symbols transmitted from N_ t antennas. * 𝐇_k is the N_r× N_uN_t matrix given by 𝐇_k=[𝐇_1,k𝐇_j,k𝐇_N_u,k] where 𝐇_j,k represents the jth UE MIMO channel matrix of size N_r× N_t. Consider N_t=2, and according to the factor graph matrix given in (<ref>). The transmitted signal over the first RE in the SCMA system is 𝐱_smx,1 = [ 𝐱_1,1^T 𝐱_3,1^T 𝐱_5,1^T ]^T. For DCMA, due to the dense structure of codebooks, all J UEs overlap on each RE <cit.>. The transmitted signal over the first RE is given by 𝐱_smx,1 = [ 𝐱_1,1^T 𝐱_2,1^T 𝐱_5,1^T 𝐱_6,1^T ]^T, where each 𝐱_j,1 is a 2× 1 (N_t× 1) vector given by 𝐱_j,1 = [ x_j,1,1 x_j,1,2 ]^T. The transmitted signal over each remaining RE (𝐱_smx,k, k=2,,K) has a similar representation as in (<ref>) and (<ref>) for SCMA and DCMA, respectively. For SCMA, 𝐱_03,1 represents the 3rd UE's symbols as given by 𝐱_03,1 = [ 0 0_UE 1 x_3,1,1 x_3,1,2 _UE 3 0 0 _UE 5]^T. For DCMA, 𝐱_03,1 can be written as 𝐱_03,1 = [ 0 0_UE 1 0 0_UE 2 x_3,1,1 x_3,1,2 _UE 3 0 0_UE 4 0 0 _UE 5 0 0_UE 6]^T. The variables {𝐱_0j,k}_k=1^K are similarly represented as 𝐱_03,1. The transmitted signal 𝐱_smx,k can be written as, 𝐱_smx,k= ∑_j=1^3 𝐱_0j,k and 𝐱_smx,k= ∑_j=1^6 𝐱_0j,k for SCMA and DCMA, respectively. §.§ SM-CD-NOMA Due to the requirement of multiple radio frequency (RF) chains, the SMX-CD-NOMA system is not affordable for various applications. Further, inter-channel interference is the main limitation of this system. Spatial modulation (SM) MIMO systems are promising technology to overcome the limitations of SMX MIMO systems. SM for single-UE communications is well studied in the literature <cit.>. Due to the demands of future wireless networks, it is important to study and analyze SM in multiuser scenarios. In <cit.>, the authors studied and analyzed SM sparse CDMA. SM-aided CD-NOMA (SM-CD-NOMA) is guaranteed to improve the spectral efficiency of the CD-NOMA with feasible complexity at both transmitter and receiver <cit.>. Fig. <ref> shows the system model for the SM-CD-NOMA system. Each UE is equipped with N_t transmitting antennas, in which only one antenna is active at any time, as shown in Fig. <ref>. The active antenna index of the jth UE is denoted by n_ a^j. All other antennas remain silent in that particular time slot. Thus, the active antenna index is a spatial modulation symbol to transmit extra information bits. The information bit stream 𝐛_j of the jth UE, is split into two parts [𝐛_ja,𝐛_jc] having log_2(N_t) and log_2(M) bits, respectively. Each UE transmits an overall log_2(N_t)+log_2(M) bits from the active antenna. The observation vector over the kth RE at the BS is an N_r× 1 vector given by 𝐫_k=𝐇_k𝐱_sm,k+𝐰_k, where * 𝐫_k,𝐰_k, and 𝐇_k have the similar forms as in SMX-CD-NOMA for both DCMA and SCMA systems. * 𝐱_sm,k is the N_uN_t× 1 vector 𝐱_sm,k =[𝐱_1,k^T𝐱_j,k^T𝐱_N_u,k^T]^T, and each 𝐱_j,k=[0 x_j,k,n_ a^j 0]^T is N_t× 1 vector corresponding to jth UE. The nonzero element x_j,k,n_ a^j is the symbol transmitted from n_ a^jth active antenna. The transmitted signal 𝐱_sm,k can be rewritten to formulate the sharing-based detection problem in Section <ref> i.e., 𝐱_sm,k= ∑_j=1^N_u𝐱_0j,k^n_ a^j. The variable 𝐱_0j,k^n_ a^j=[00 x_j,k,n_ a^j000]^T represents the jth overlapped UE on kth RE. The nonzero element in 𝐱_0j,k^n_ a^j is the symbol transmitted from n_ a^jth active antenna. Fig. <ref> depicts the kth RE processing at the BS through the extended factor graph. The thick and dotted lines correspond to the active and inactive antennas, respectively. Consider N_t=2, and the factor graph matrix given in (<ref>). Suppose the active antenna indices of the 6-users are {2,1,2,2,1,1}. The structure of transmit codewords from J UEs for SM-CD-NOMA is as follows <cit.>: 𝐱_tx = [ 0_ K× 1 𝐱_2_K× 1^1 0_ K× 1 0_ K× 1 𝐱_5_K× 1^1 𝐱_6_K× 1^1; 𝐱_1_K× 1^2 0_ K× 1 𝐱_3_K× 1^2 𝐱_4_K× 1^2 0_ K× 1 0_ K× 1; ]_N_tK× J =8× 6 . Each 𝐱_j__K× 1^n_ a^j in 𝐱_tx represents the transmitted codeword from jth UE, and each column of 𝐱_tx carries information about active transmit antenna (n_ a^j) as well as the codeword. In (<ref>), 0_ K× 1 indicates zero power transmitted by a deactivated antenna. To facilitate the resource-wise processing at the receiver via ADMM, the transmitted signal over the first RE in the SCMA system is modeled as 𝐱_sm,1 = [ 𝐱_1,1^T 𝐱_3,1^T 𝐱_5,1^T ]^T. The transmitted signal over the first RE in the DCMA system is modeled as 𝐱_sm,1 = [ 𝐱_1,1^T 𝐱_2,1^T 𝐱_5,1^T 𝐱_6,1^T ]^T, where 𝐱_1,1=[0 x_1,1,2]^T, 𝐱_2,1=[x_2,1,1 0]^T, 𝐱_3,1=[0 x_3,1,2]^T, 𝐱_4,1=[0 x_4,1,2]^T, 𝐱_5,1=[x_5,1,1 0]^T, and 𝐱_6,1=[x_6,1,10]^T. The transmitted signals over the remaining REs (𝐱_sm,k, k=2,,K ) have similar representations as (<ref>) and (<ref>) for SCMA and DCMA, respectively. For SCMA, 𝐱_03,1^2 represents the symbol transmitted from 2nd active antenna of 3rd UE on 1st RE. 𝐱_03,1^2 can be written as 𝐱_03,1^2 = [ 0 0_UE 1 0 x_3,1,2 _UE 3 0 0 _UE 5]^T. For DCMA, 𝐱_03,1^2 can be written as 𝐱_03,1^2 = [ 0 0_UE 1 0 0_UE 2 0 x_3,1,2 _UE 3 0 0_UE 4 0 0 _UE 5 0 0_UE 6]^T. The variables {𝐱_0j^n_ a^j}_k=1^K are similarly represented as 𝐱_03,1^2. The transmitted signal 𝐱_sm,k can be written as, 𝐱_sm,k= ∑_j=1^3 𝐱_0j,k^n_ a^j and 𝐱_sm,k= ∑_j=1^6 𝐱_0j,k^n_ a^j for SCMA and DCMA, respectively Observe from equations (<ref>), (<ref>), and (<ref>) that the system models of three MIMO-CD-NOMA systems are similar to those of the conventional MIMO system model. This model is given by 𝐫=𝐇𝐱+𝐰, 𝐱∈𝒳^N_t, where 𝒳 is the signal constellation, 𝐫 is the N_r× 1 observation vector, 𝐇 is N_r× N_t channel matrix (N_r>N_t), 𝐱 is N_t× 1 transmitted vector and 𝐰 is i.i.d. AWGN vector with each component being distributed as 𝒞𝒩(0,σ^2). The ML detection problem can be formulated as min_𝐱∈𝒳^N_ t‖𝐫-𝐇𝐱‖^2. The ML detection problem for SIMO CD-NOMA system in (<ref>) is given by min_𝐱_mu∈𝒳_mu^JN_ e ‖𝐫-𝐇𝐱_mu‖^2, where 𝒳_mu^JN_ e denotes the multi-user signal constellation and it consists of J-UEs concatenated codewords. However, solving ML detection problems, in general, is NP-hard. The exhaustive search method can be used to solve the ML detection problem. However, the exhaustive search is exponentially complex as per 𝒪(M^J) <cit.> and thus it is not feasible. The ML detection problem (<ref>) can be solved with a polynomial complexity using a distributed optimization framework. The above problem needs to be converted into a sharing problem that can be solved using distributed optimization methods. In the next section, we apply an efficient method based on the ADMM algorithm to solve the sharing-based detection problem in a distributed manner. §.§ Large-scale UL multi-antenna CD-NOMA detection via ADMM This section discusses the design of ADMM-based MUD for two CD-NOMA techniques, i.e., SCMA and DCMA. The ADMM-based detector is applied to three different MIMO systems, namely, SIMO-CD-NOMA, spatial multiplexed CD-NOMA (SMX-CD-NOMA), and spatially modulated CD-NOMA (SM-CD-NOMA). The implications of the proposed method are discussed in subsequent sections. §.§.§ SIMO CD-NOMA The ML detection problem in (<ref>) can be converted into a sharing problem. Further, this problem can be solved in a distributed manner via the ADMM approach, as discussed in Section <ref>. ADMM allows parallel processing in MUD problems, guaranteeing a minimal computational time at the receiver. Recall from Section <ref>, the transmitted signal 𝐱_mu is given by 𝐱_mu=∑_j=1^J 𝐱_0j. The problem (<ref>) can be rewritten as min_𝐱_0j ‖𝐫-𝐇(∑_j=1^J 𝐱_0j)‖^2 s.t 𝐱_0j∈𝒳_mu^JN_e j=1,,J. Here, for the channel matrix 𝐇^N_rK× JN_e, we consider N_rK>JN_e. Each entry's real and imaginary parts in 𝐱_0j are restricted by set constraints. The set constraint must be converted into an interval constraint to convert (<ref>) into a sharing problem. Box constraint relaxation (BCR) is used to relax the set constraints of each entry in the codeword<cit.>. Hence, each entry's real and imaginary parts in the jth UE codeword belong to [-α_j,α_j] and [-β_j,β_j]. After relaxation, each element in the jth UE codebook is defined as 𝒳̃_j={x_j=x_jR+ix_jI| x_jR∈ [-α_j,α_j],x_jI∈[-β_j,β_j] }, α_j=max|ℝ(𝒳_j)|,β_j=max|ℐ(𝒳_j)|. The highly complex MIMO-CD-NOMA ML detection problem in (<ref>) is now ready to be converted into a non-convex distributed optimization problem. However, the constraint relaxation degrades the detection performance due to the losses introduced by the interval constraints. The losses can be compensated by adding a set of quadratic penalty functions ∑_j=1^J γ_j/2‖𝐱_0j‖_2^2 to the objective function where γ_j≥ 0 is a penalty parameter. The penalty functions are selected so that each variable, 𝐱_0j, in the penalty term minimizes the individual penalty and the shared objective. The added penalty term makes the solution as sparse as possible. However, the sparse vectors with specific numbers (d_v for SCMA and K for DCMA) of non-zero entries only will minimize the shared objective function. The favorable solutions are the sparse vectors with d_v and K non-zero elements for SCMA and DCMA, respectively. The sharing ADMM problem can be written as min_𝐱_0j ‖𝐫-𝐇(∑_j=1^J 𝐱_0j)‖^2+∑_j=1^J γ_j/2‖𝐱_0j‖_2^2 s.t 𝐱_0j∈𝒳̃_mu^JN_e, j=1,,J. The problem in (<ref>) is similar to the sharing ADMM problem in (<ref>). Proceeding similarly as in Section <ref>, (<ref>) can be written as min_𝐱_0j,𝐳_0j ‖𝐫-𝐇(∑_j=1^J 𝐱_0j)‖^2+∑_j=1^J γ_j/2‖𝐳_0j‖_2^2 s.t 𝐱_0j=𝐳_0j, 𝐱_0j∈𝒳̃_mu^JN_e, ∀ j=1,,J For the formulation of the ADMM steps, the augmented Lagrangian function for the problem (<ref>) is considered as shown below: ℒ({𝐱_0j,𝐳_0j,y_j}_j=1^J)=‖r-𝐇(∑_j=1^J 𝐱_0j)‖_2^2+∑_j=1^J γ_j/2‖𝐳_0j‖_2^2 +∑_j=1^J Re𝐱_0j-𝐳_0jy_j+ρ/2∑_j=1^J‖𝐱_0j-𝐳_0j‖_2^2 where y_j∈ℂ^JN_e is Lagrangian multiplier of the jth UE. Letting 𝐮_j=y_j/ρ, the scaled form of ADMM steps are as follows <cit.> : 𝐳_0j^t+1:= _𝐳_0j∈𝒳̃_mu^JN_ eγ_j/2‖𝐳_0j‖_2^2+ρ/2‖𝐳_0j-𝐱_0j^t+𝐮_j^t‖_2^2 𝐱_0j^t+1:= _𝐱_0j∈𝒳̃_mu^JN_ e‖𝐫-𝐇(∑_j=1^J 𝐱_0j)‖^2+ρ/2∑_j=1^J‖𝐳_0j^t+1-𝐱_0j+𝐮_j^k‖_2^2 𝐮_j^t+1:= 𝐮_j^k+(𝐳_0j^t+1-𝐱_0j^t+1) The variables 𝐳_0j and 𝐮_j can be updated independently in parallel for each j=1,,J. Following a similar sequence of steps as discussed in Section <ref>, the steps in (<ref>), (<ref>), and (<ref>) are simplified as 𝐳_0j^t+1:= _𝐳_0j∈𝒳̃_mu^JN_ eγ_j/2‖𝐳_0j‖_2^2+ρ/2‖𝐳_0j-𝐳_0j^t-𝐱̅_0^t+𝐮^t+𝐳̅_0^t‖_2^2 𝐱̅_0^t+1:= _𝐱̅_0‖𝐫-J𝐇𝐱̅_0‖_2^2+ρ J/2‖𝐱̅_0-𝐱̅_1^t+1-𝐮^t‖_2^2 𝐮^t+1:= 𝐮^t+𝐳̅_0^t+1-𝐱̅_0^t+1 where 𝐱̅_0=1/J∑_j=1^J 𝐱_0j and 𝐳̅_0=1/J∑_j=1^J 𝐳_0j. The solutions obtained by solving (<ref>) and (<ref>) are as follows 𝐳_0j^t+1 =∏_[-α_j,α_j]&[-β_j,β_j]ρ/(ρ+γ_j)(𝐳_0j^t+𝐱̅_0^t-𝐮^t-𝐳̅_0^t),∀ j=1,,J. 𝐱̅_0^t+1 =(𝐇^H 𝐇 J+ρ𝐈)^-1(𝐇^H 𝐫+ρ(𝐳̅_0^t+1+𝐮^t)), where ∏_[-α_j,α_j]&[-β_j,β_j](.) denotes the projection of the real part of each entry of the vector onto [-α_j,α_j] and imaginary part of each entry of the vector onto [-β_j,β_j]. The step in (<ref>) can be solved in parallel, for j=1,,J. 𝐈 is identity matrix of size Jd_v× Jd_v and JK× JK for SCMA and DCMA, respectively. From the definition of 𝐱̅_0 given above, the estimated multi-user codeword is given by, 𝐱̂_mu=J𝐱̅_0. Each UE's concatenated sparse codeword 𝐱̂_0j can be extracted from the 𝐱̂_mu . Let 𝐱̃_0j be the jth UE codeword after removing zeros from 𝐱̂_0j. The minimum Euclidean distance (MED) rule is applied to detect the transmitted codeword index corresponding to each UE, p̂_j=_ p‖𝐱̃_0j-x_j,p‖, j=1,,J where x_j,p represents pth codeword of the jth UE codebook x_j,p∈𝒳_j^K× M. Note that for SCMA, the zeros from x_j,p are removed to find p̂_j. Algorithm <ref> details the steps for detection. §.§.§ SMX-CD-NOMA The detector of SMX-CD-NOMA needs to detect the signals transmitted from N_ t antennas of J UEs. Here, the detection is more complex than the SIMO case. The resource-wise processing is adapted to simplify the ADMM-based detection problem. Recall from Section <ref>, that the SMX codeword is given by, 𝐱_smx,k=∑_j=1^N_u𝐱_0j,k. The ADMM processing first estimates the resource-wise spatial multiplexed transmitted vector 𝐱̂_smx,k, for k=1,, K. Then, the transmitted codeword indices are detected via the MED rule. The ML detection problem is converted into sharing problem as follows min_𝐱_0j,k ‖𝐫_k-𝐇_k(∑_j=1^N_u𝐱_0j,k)‖_2^2+∑_j=1^N_uγ_j/2‖𝐱_0j,k‖_2^2 s.t. 𝐱_0j,k∈𝒳̃_j,smx^N_uN_t where γ_j≥ 0 is the penalty parameter. By introducing alternative variable 𝐳_0j,k=𝐱_0j,k, the ADMM steps for (<ref>) are as follows: 𝐳_0j,k^t+1:= _𝐳_0j,k∈𝒳̃_j,smx^N_uN_tγ_j/2‖𝐳_0j,k‖_2^2+ρ/2‖𝐳_0j,k-𝐱_0j,k^t+𝐮_j^t‖_2^2 𝐱_0j,k^t+1:= _𝐱_0j,k‖𝐫_k-𝐇_k(∑_j=1^N_u𝐱_0j,k)‖_2^2+ρ/2∑_j=1^J‖𝐳_0j,k^t+1-𝐱_0j,k+𝐮_j^k‖_2^2 𝐮_j^t+1:= 𝐮_j^k+(𝐳_0j,k^t+1-𝐱_0j,k^t+1). The solutions for (<ref>) and (<ref>) are as follows: 𝐳_0j,k^t+1= ∏_[-α_j,α_j]&[-β_j,β_j]ρ/(ρ+γ_j)(𝐳_0j,k^t+𝐱̅_0,k^t-𝐮^t-𝐳̅_0,k^t) 𝐱̅_0,k^t+1= (𝐇_k^H 𝐇_k N_u+ρ𝐈)^-1(𝐇_k^H 𝐫_k+ρ(𝐳̅_0,k^t+1+𝐮^t)) where 𝐈 is identity matrix of size N_uN_t× N_uN_t and the definitions of 𝐱̅_0,k and 𝐳̅_0,k are given below 𝐱̅_0,k= 1/N_u∑_j=1^N_u𝐱_0j,k 𝐳̅_0,k=1/N_u∑_j=1^N_u𝐳_0j,k. From the definition of 𝐱̅_0,k, the estimated codeword corresponding to the kth RE is given by 𝐱̂_smx,k=N_u𝐱̅_0,k. The sparse vector 𝐱̂_0j,k corresponding to jth UE can be extracted from 𝐱̂_smx,k. Let 𝐱̃_j,k be a N_t× 1 vector formed after removing the zeros from sparse vector 𝐱̂_0j,k. Let 𝐱̂_ j^n_t be the estimated codeword of jth UE corresponding to n_tth transmit antenna. 𝐱̂_ j^n_t is formed after resource-wise processing is finished. MED rule is applied to detect the transmitted codeword index corresponding to each UE and transmit antenna as follows: p̂_j^n_t=_p ‖𝐱̂_j^n_t-𝐱_j,p‖, j=1,,J, n_t=1,,N_t. where 𝐱_j,p represents the pth codeword of the jth UE codebook 𝒳_j^K× M. The detailed ADMM-based detection procedure is given in Algorithm <ref>. §.§.§ SM-CD-NOMA This section discusses the formulation of the ADMM-based detection problem for the SM-CD-NOMA system. The modulation happens in both the signal domain and the spatial domain. The SM-CD-NOMA system transmits information on the codeword and active antenna indices simultaneously. The receiver needs to estimate these two quantities. The variable 𝐱_sm,k from Section <ref>, is given by 𝐱_sm,k= ∑_j=1^N_u𝐱_0j,k^n_ a^j. The proposed method estimates the resource-wise spatial modulated transmitted vector 𝐱̂_sm,k for k=1,, K via ADMM processing. Then, the L1-norm rule is applied to detect the antenna index, and the MED rule is applied to detect the transmitted codeword index. The sharing problem of the SM-CD-NOMA system is similar to the problem in (<ref>). Following the steps (<ref>), (<ref>), and (<ref>), the sharing ADMM problem of SM-CD-NOMA can be solved. The solutions obtained for the sharing ADMM problem are similar to (<ref>) and (<ref>) and are repeated here for clarity: 𝐳_0j,sm,k^t+1= ∏_[-α_j,α_j]&[-β_j,β_j]ρ/(ρ+γ_j)(𝐳_0j,sm,k^t+𝐱̅_0,sm,k^t-𝐮^t-𝐳̅_0,sm,k^t) 𝐱̅_0,sm,k^t+1= (𝐇_k^H 𝐇_k N_u+ρ𝐈)^-1(𝐇_k^H 𝐫_k+ρ(𝐳̅_0,sm,k^t+1+𝐮^t)). The definitions of 𝐱̅_0,sm,k and 𝐳̅_0,sm,k are similar to (<ref>). The estimated SM codeword corresponding to kth RE is 𝐱̂_sm,k=N_u𝐱̅_0,sm,k. After resource-wise processing, the structure of the transmitted codewords for J UEs can be retrieved. For Example <ref>, the detected codewords for 6 UEs is given by 𝐱̂_tx = [ 𝐱̂_1^1 𝐱̂_2^1 𝐱̂_3^1 𝐱̂_4^1 𝐱̂_5^1 𝐱̂_6^1; 𝐱̂_1^2 𝐱̂_2^2 𝐱̂_3^2 𝐱̂_4^2 𝐱̂_5^2 𝐱̂_6^2; ]_N_tK× J (8× 6) where 𝐱̂_ j^ n_t indicates the estimated codeword of jth UE over the n_tth transmit antenna. In the presence of noise, the detected active transmit antenna index is given by n̂_ a^j=_n_t (‖𝐱̂_ j^n_t‖_1), ∀ j where ‖(·)‖_1 denotes the L1-norm of (·). Let 𝐱̂_j^n̂_ a^j be the estimated codeword of the jth UE transmitted from n̂_ a^j active antenna. The MED rule is applied to detect the codeword index corresponding to each UE p̂_j=_ p‖𝐱̂_j^n̂_a^j-𝐱_j,p‖, ∀ j. Similar steps of Algorithm <ref> are followed in the detection of the SM-CD-NOMA system. § COMPUTATIONAL COMPLEXITY This section analyses the computational complexity of the SIMO-CD-NOMA system. The detection algorithm's computational complexity determines its practical viability. The number of FLOPs (mainly complex multiplications) is a useful metric to analyze the complexity of the detector <cit.>. The number of calculations in the proposed CD-NOMA detection via ADMM consists of two parts. Part-1 is iteration-independent (Pre-processing) steps, described in lines 6 and 7 in Algorithm <ref>, and Part-2 is iteration-dependent steps, from lines 9 to 12. The calculations in Part-1 are performed only once, i.e., before the ADMM iterations. In the SIMO-CD-NOMA system, Part-1 contains three steps of calculations such as 𝐇^H 𝐇, (𝐇^H 𝐇+ρ𝐈)^-1, and 𝐇^H 𝐫. The size of 𝐇 is N_rK× Jd_v and N_rK× JK for SCMA and DCMA, respectively. Further, the numbers of FLOPs required to perform these three steps over the SCMA system are (N_r K)(Jd_v)^2,(Jd_v)^3 and (N_r K)(J d_v). The numbers of FLOPs required to perform the same steps over the DCMA system are (N_r K)(JK)^2,(JK)^3, and (N_r K)(J K). The calculations in Part-2 need to be repeated in every iteration and contain mainly two steps. For SCMA system, these steps involve scalar multiplication of Jd_v× 1 vector for J UEs parallelly in (<ref>), multiplication of Jd_v× Jd_v matrix with Jd_v× 1 and scalar multiplication with Jd_v× 1 vector in (<ref>), in total J^2d_v+(Jd_v)^2+Jd_v FLOPs approximately. For DCMA system, (J^2K+(JK)^2+JK) FLOPs are required. The approximate total computational cost to implement the ADMM-based detector over the SIMO-SCMA system and SIMO-DCMA system are (N_r K)(Jd_v)^2+(Jd_v)^3+ (N_r K)(J d_v)+T(J^2d_v+(Jd_v)^2+Jd_v) and (N_r K)(JK)^2+(JK)^3+(N_r K)(J K)+T(J^2K+(JK)^2+JK) respectively. TABLE <ref> compares the total complexity of the sharing ADMM-based detection problem with other known detection schemes. In TABLE <ref>, NA stands for Not applicable. MPA is widely used for SCMA systems, and its complexity is exponential with M and d_f, as mentioned in TABLE <ref> <cit.>. The computational burden will grow further for large-scale SCMA systems. Further, due to exponential complexity, MPA is not feasible in large-scale MIMO systems. Additionally, the lack of sparsity in DCMA makes MPA impractical for detection. A single tree search (STS) based soft-in soft-out (SISO) GSD <cit.> is applied to detect the signals of a spreading sequence-based DCMA system, also known as an overloaded CDMA system <cit.>. GSD performs key pre-processing steps to convert the rank deficient system into full rank one <cit.>. These steps include 𝐇^H𝐇 in (𝐐=𝐇^H𝐇+λ𝐈_J), Cholesky decomposition 𝐐=𝐃^H𝐃, and (𝐇𝐃^-1)^H 𝐫. These steps require (N_rK)J^2, J^3/3, and (J^3+(N_rK)J^2+JN_rK) FLOPs, respectively. The rank-deficient linear system is thus converted into a full-rank one, and standard sphere decoding (SD) can be readily applied. The SD performs pre-processing steps, including QR decomposition and 𝐐^H 𝐫. Further, the numbers of FLOPs to perform these steps are J^3 and 2J^2, respectively. The major complexity of the SD lies in the tree search algorithm <cit.>. The expected complexity of SD is E_FLOPS=∑_j=1^J f_p(j)N_j, where N_j is the average number of nodes visited in level-j of the tree and f_p(j) is the number of FLOPs needed in level-j. It is given in <cit.>, f_p(j)=2j+11, and N_j is roughly cubic in the number J of unknowns to be solved. Fig. <ref> depicts the computational complexity comparison of various detection schemes for different CD-NOMA systems with N_r=4, K=4, J=6,d_v=2, and M=4. It can be observed that the complexity of ADMM-based detector is almost equal to that of the low-complexity MMSE detector. every axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=5cm, legend style=font=, legend pos= north west § SIMULATIONS AND DISCUSSIONS This section presents wide-ranging simulation results of various MIMO-CD-NOMA systems, including SER performance and selection of ADMM parameters (T,{γ}_j=1^J). The simulations are performed for various MIMO-CD-NOMA systems described in Section  <ref> with varying parameters λ, M, N_r, and N_t. Further, the performance of the ADMM-based detector is compared with that of the MPA detector with ten iterations (T=10) in the case of the SCMA system. The DCMA and overloaded CDMA systems' detection is carried out using the MMSE and the GSD detectors, respectively. The codebooks designed in <cit.> and <cit.> are considered, for SCMA and DCMA, respectively. Table <ref> further details the simulation parameters. The sample simulation codes for this work are available at https://github.com/vikas2020-del/ADMM-based-detector-for-NOMA. §.§ SER performance of SIMO CD-NOMA system Fig. <ref> and Fig. <ref> show the SER performance of SIMO-SCMA system for 150 % and 200 % overloading, respectively. every semilogy axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=7cm, legend style=font=, legend pos= south west Each curve represents the SER performance for a specific number N_r of the receive antennas at the BS and codebook size M. Fig. <ref> and Fig. <ref> show that the SER performance improves as N_r increases. Thus, the proposed ADMM-based detector is able to exploit the diversity gain provided by the SIMO-CD-NOMA system. However, for M=8, the SER performance is slightly worse than M=4 due to the inevitable decrease in the minimum Euclidean and minimum product distance <cit.>. Fig. <ref> and Fig. <ref> also compare the SER performance of the MMSE and ADMM-based detectors. For a small-scale SCMA system (M=4, λ=150 %), the performance of the MMSE detector is close to that of the ADMM-based detector. However, for λ=200 %, the MMSE performance is inferior to ADMM. Observe from (<ref>) that as the number J of UEs increases, the number Jd_v of unknowns approaches the number KN_r of observations. The MMSE performance degrades when the KN_r to Jd_v ratio approaches unity. ADMM provides approximately 2 dB SNR gain at SER=10^-3 over the MMSE detector for 200 % overloaded SCMA system, as observed from Fig. <ref>. Fig. <ref> and Fig. <ref> show the SER performance of M=4 and M=16 overloaded SIMO-DCMA systems, respectively. Fig. <ref> compares the SER performance of the MMSE and ADMM-based detectors. ADMM provides approximately 2 dB SNR gain at SER=10^-3 over the MMSE detector for the 200 % overloaded DCMA system, as observed from Fig. <ref>. Therefore, ADMM is a more suitable detector compared with MMSE for 200 % overloaded CD-NOMA systems. Furthermore, simulation attempts indicate that MMSE cannot detect the CD-NOMA signals for M = 8 and M = 16 size codebooks. Therefore, these plots are not shown in Fig. <ref> and Fig. <ref>. every semilogy axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=7cm, legend style=font=, legend pos= north east §.§ SER performance of SMX CD-NOMA systems Fig. <ref> and Fig. <ref> show the SER performance of SMX-SCMA system for 150 % and 200 % overloading, respectively. Observe from Fig. <ref> and Fig. <ref>, that the higher the value of N_r, the better is the SER performance. As the value of N_t increases, N_r is increased to maintain N_r>d_fN_t in (<ref>). The improvement in the SER performance is maximum for N_r=64 case due to more observations at the BS. This improvement comes at the cost of a marginal computational complexity increment (TABLE <ref>) in ADMM detection. Fig. <ref> depicts the SER performance of SMX-DCMA systems. For SCMA, N_ u=d_ f UEs overlap on each RE. On the other hand, for DCMA N_ u=J (J>d_ f) UEs overlap on each RE, as shown in Fig. <ref>. This significant overlapping increases the number of unknowns in a DCMA system compared to the SCMA one, as observed in (<ref>). To maintain the number of observations higher than the number of unknowns (N_r>JN_t), DCMA requires larger N_r at the BS as compared to SCMA for a fixed N_t. As a result, the computational complexity of DCMA detection significantly increases. The tree search-based SD algorithms are highly complex in codebook-based DCMA, as explained in Section <ref>. Further, the ADMM-based detector shows similar complexity as that of MMSE, as shown in Fig. <ref>. Therefore, compared with MMSE, the ADMM-based detector performs well with almost the same complexity. every semilogy axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=7cm, legend style=font=, legend pos= south west §.§ SER performance of SM CD-NOMA systems Fig. <ref> and Fig. <ref> show the SER performance of SM-SCMA system for 150 % and 200 % overloading, respectively. The performance of the ADMM-based detector mainly depends on N_t, N_r, and M, as we have seen in the previous subsections. The M=4 size codebook exhibits superior performance over M=8 due to its better distance properties (Euclidean distance and product distance). In SM-CD-NOMA systems, each UE's codeword contains the active antenna and codeword index information of that particular UE, as given in (<ref>). Fig. <ref> and Fig. <ref> depict the efficiency of ADMM in detecting the information both in the signal domain and space domain. Note that the codewords in (<ref>) contain zero-column vectors to indicate zero power transmission from the inactive antenna. As a result, the number of observations goes below the number of unknowns (N_r<N_uN_t). Thus, it leads to performance loss for an ADMM-based detector. However, increasing the number of observations (N_r) at the BS can compensate for this loss. Therefore, the performance of the ADMM-based detector can be maintained by a marginal increment in the computational complexity as given in TABLE <ref>. The observations mentioned above for SMX-DCMA and SM-SCMA systems apply to SM-DCMA systems, as shown in Fig. <ref>. Further, N_r=32, λ=200 % provides significantly better performance than N_r=16, λ=150 %, as the former has more observations (N_ r) than the latter. So the ADMM-based detector is more appropriate for highly overloaded DCMA systems equipped with large-scale MIMO parameters. every semilogy axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=7cm, legend style=font=, legend pos= south west §.§ SER performance: ADMM vs. MPA Fig. <ref> shows the SER performance comparison between the ADMM and MPA detectors for the SIMO-SCMA system. Existing research shows that the MPA is a near-optimal detection with high computational complexity <cit.>. MPA gives a maximum SNR gain of around 2.5 dB over ADMM at SER=10^-2 for N_r=4 and M=4, as shown in Fig. <ref>. As the modulation order M increases, the performance of ADMM becomes close to MPA, as observed for N_r=4, M=8 in Fig. <ref>. Therefore, for M=8, codebook distance properties influence MPA and ADMM-based detectors similarly. Further, for N_r=8, M=8, the ADMM-based detector shows approximately 2 dB SNR gain at SER=10^-3 over the MPA detector. The increase in the number of observations significantly improves the ADMM performance compared to MPA for large-size codebooks (i.e. with large M). every semilogy axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=7cm, legend style=font=, legend pos= south west every semilogy axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=7cm, legend style=font=, legend pos= north east §.§ SER performance: ADMM vs. GSD Fig. <ref> shows the SER performance comparison between ADMM and GSD detectors for the SIMO-overloaded CDMA system. Observe from Fig. <ref> that, at low SNR regions, GSD outperforms the ADMM with a maximum of around 2 dB SNR gain at SER=10^-4. As the SNR increases, GSD performance degrades compared to ADMM. Unlike GSD, as M increases, ADMM performance is enhanced by doubling the number of received antennas as shown in Fig. <ref>. Note that ADMM performance improves as the number of observations increases. §.§ SER performance: Imperfect channel state information (CSI) All the above simulations are analyzed by considering perfect CSI at the receiver. In practical scenarios, obtaining perfect CSI at the receiver is not feasible due to channel estimation errors (CEEs). Thus, the proposed detector's performance in the presence of CEEs is an important metric to show its practical feasibility. An imperfect channel can be modeled as <cit.> 𝐇̂=𝐇+e Ω where Ω is the CEE and is considered to be uncorrelated with 𝐇. The entries of Ω are independent and i.i.d. complex Gaussian random variables with zero mean and unit variance, i.e., 𝒞𝒩(0,1). The quantity e in (<ref>) determines the variance of the CEE. Fig. <ref> depicts the performance of the proposed ADMM-based detector for the imperfect CSI with e=0 %, e=5 %, e=10 %, and e=20 %. The simulations for SCMA and DCMA systems are shown in Fig. <ref> and Fig. <ref>, respectively. Observe that the impact of the CEE on the ADMM-based detector's performance is minimal. Therefore, the ADMM-based detector can be applied in imperfect CSI scenarios. §.§ Convergence of ADMM and selection of parameters The convergence analysis of the ADMM-based detector in the DCMA system is performed based on the Monte Carlo simulations. Fig. <ref> shows the impact of ADMM iterations on the SER performance. The proposed detector exploits the iterative nature of ADMM to converge. every semilogy axis/.append style= line width=0.7 pt, tick style=line width=0.7pt, width=8cm,height=7cm, legend style=font=, legend pos= north east The SER performance improves as the number of iterations (T) increases. The improvement in SER becomes marginal after a certain number of iterations. For all considered E_b/N_0 values, ADMM converges after 15 iterations, as shown in Fig. <ref>. The penalty parameters {γ_j}_j=1^J and ρ are selected to maximize the SER performance. The augmented Lagrangian parameter ρ is selected as the reciprocal of SNR <cit.>. The impact of γ on SER performance in the ADMM-based detector is analyzed through Monte Carlo simulations, and the results are given in Fig. <ref>. For low values of γ, the ADMM performance degrades, and for large values, the variation in SER becomes inconsequential. When γ is close to zero, the impact of penalty terms is nullified. Thus, the alternative variables in the penalty function lose importance in minimizing the objective function. Observe from the Fig. <ref> that the ADMM performs better in the range {γ_j}_j=1^J∈ [50 100]. § CONCLUSIONS AND FUTURE SCOPES This paper proposed new system models and a low-complexity iterative linear detector for large-scale MIMO-CD-NOMA systems. The optimal ML detection is converted into a sharing problem, which is efficiently solved via the ADMM algorithm using distributed optimization framework. The proposed ADMM-based detector enables the detection of large-scale MIMO-CD-NOMA systems with high overloading factors (λ) and modulation orders (M) while maintaining low complexity. By leveraging the proposed ADMM, the CD-NOMA systems achieve significantly increased connectivity. Further, the impact of ADMM parameters, such as the number of iterations and penalty parameters, is analyzed. Exhaustive simulation results are presented to validate the effectiveness of the proposed ADMM-based detector when compared to conventional detectors such as MPA, GSD, and MMSE detectors. In addition, the results demonstrate that the ADMM-based detector offers excellent performance with low complexity across various CD-NOMA system variants. This paper has explored the concept of ADMM detection specifically for uncoded CD-NOMA systems. Designing a soft decision detector using the ADMM algorithm for coded CD-NOMA systems is an interesting future work. IEEEtran
http://arxiv.org/abs/2306.01928v1
20230602220032
New sextics of genus 6 and 10 attaining the Serre bound
[ "Annamaria Iezzi", "Motoko Qiu Kawakita", "Marco Timpanella" ]
math.AG
[ "math.AG" ]
Recent Advances of Local Mechanisms in Computer Vision: A Survey and Outlook of Recent Work Qiangchang Wang, Yilong Yin Q. Wang, Y. Yin are with Shandong University, China (E-mail: [email protected], [email protected]). =================================================================================================================================================== § INTRODUCTION In the 1970s Goppa discovered algebraic geometric codes, where efficient codes are constructed from explicit curves over finite fields with many rational points (see <cit.>). Let q be a power of a prime number p. For a (projective, geometrically irreducible, smooth) curve C of genus g defined over a finite field _q, the Hasse–Weil bound states that #C(_q) ≤ q + 1 + 2g√(q), where #C(_q) denotes the number of _q-rational points of C. A curve attaining this bound is said to be maximal and examples of maximal curves clearly exist only when q is an even power of p. In 1983, Serre improved this bound as #C(_q) ≤ q + 1 + g ⌊ 2√(q)⌋, where ⌊·⌋ denotes the integer part. We refer to this bound as the Serre bound. There is a wide literature about maximal curves; see for instance <cit.>, <cit.> and their references. However there are only a few examples of curves attaining the Serre bound and which are not maximal. In <cit.>, we generalise the Wiman and Edge sextics and we find a new example of curve of genus 6 attaining the Serre bound over _5^7. Because we were not able to completely split the Jacobian of the sextics, we only made computer search on somewhat small finite fields. In this paper, under some assumptions, we completely split the Jacobian of some sextics in <cit.>. Hence we find new examples of curves attaining the Serre bound over larger finite fields. We provide new examples of curves of genus 6 or 10 attaining the Serre bound. They all belong to the family of sextics introduced in <cit.> as a a generalization of the Wiman sextics <cit.> and Edge sextics <cit.>. Our approach is based on a theorem by Kani and Rosen which allows, under certain assumptions, to fully decompose the Jacobian of the curve. With our investigation we are able to update several entries in <http://www.manypoints.org> (<cit.>). § INTRODUCTION Let 𝔽_q be a finite field with q elements, where q is a power of a prime p. A projective, absolutely irreducibile, non-singular, algebraic curve C defined over _q is called _q-maximal if the number of its _q-rational points, denoted by #C(_q), attains the Hasse-Weil upper bound #C(_q) ≤ q + 1 + 2g√(q), where g is the genus of the curve. A classical and well-studied example of maximal curve is the so called Hermitian curve ℋ_q of affine equation x^q+x=y^q+1. This curve is 𝔽_q^2-maximal, and it has the largest possible genus for an 𝔽_q^2-maximal curve <cit.>. By a result commonly referred to as the Kleiman–Serre covering result <cit.>, any curve defined over _q and _q-covered by an _q-maximal curve, is _q-maximal. This result provides a strong tool for constructing new examples of maximal curves from the known ones: indeed, if C is an _q-maximal curve, also the quotient curve C/G, where G is a finite subgroup of the automorphism group of C, is _q-maximal. Most of the known _q^2-maximal curves are obtained, following this approach, as quotient curves of the Hermitian curve ℋ_q, see for instance <cit.> and the references therein. In the research community, for a while, it was even speculated that all maximal curves could be obtained as quotient curves of the Hermitian curve. However, in 2009, Giulietti and Korchmáros <cit.> proved that this is false, as they constructed an _q^6-maximal curve not covered by the Hermitian curve when q>2. Since then, a few other examples of maximal curves not covered by the Hermitian curve have been provided <cit.>, but a complete solution of the classification problem for maximal curves seems to be out of reach, and looking for new examples is a very active line of research. Clearly, 𝔽_q-maximal curves can only exist when q is a square. In 1983, Serre provided a non-trivial improvement of the Hasse-Weil bound when q is not a square, namely #C(_q) ≤ q + 1 + g ⌊ 2√(q)⌋, where ⌊·⌋ is the floor function (see <cit.>). We refer to this bound as the Serre bound. Curves attaining the Hasse-Weil or the Serre bound are interesting objects in their own right, but also for their applications in Coding Theory. Indeed, in <cit.>, Goppa described a way to use algebraic curves to construct linear error correcting codes, the so called algebraic geometric codes (AG codes). As the relative Singleton defect of an AG code from a curve C is upper bounded by the ratio g/N, where g is the genus of C and N can be as large as the number of 𝔽_q-rational points of C, it follows that curves with many rational points with respect to their genus are of great interest in Coding Theory. For this reason, maximal curves and curves attaining the Serre bound have been widely investigated in the last years, see for instance <cit.>. The aim of this paper is to provide new explicit examples of curves attaining the Hasse-Weil or the Serre bound. We stress the fact that while there is a wide literature about maximal curves, there are only a few examples of curves attaining the Serre bound and which are not maximal. All the examples that we provide in this paper have genus 6 or 10 and belong to the following generalization of the families of the Wiman sextics <cit.> and Edge sextics <cit.> introduced in <cit.> x^6+y^6+1 +a (x^4y^2+x^2+y^4) +b(x^2y^4+x^4+y^2) +c x^2y^2=0, with a,b,c∈_q. Our approach is based on a theorem by Kani and Rosen (see <cit.>), which provides a decomposition of the Jacobian of a curve under certain conditions on its automorphism group. This theorem has already been used to find examples of curves with many rational points with respect to their genus, see for instance <cit.>. The paper is organized as follows. In Section <ref> we recall the defining equations of the Wiman's and Edge's sextics and the statement of a theorem by Kani and Rosen. In Section <ref> we prove a useful result to decompose completely the Jacobian of degree-6 hyperelliptic curve under certain assumptions (see Theorem <ref>). In Section <ref> and <ref> we use Theorem <ref> to decompose the Jacobians of certain Wiman's and Edge's sextics, and we find new explicit examples of curves attaining the Hasse-Weil bound or the Serre bound. Our investigation allows to update several entries in <http://www.manypoints.org> (<cit.>). Finally, in Section <ref>, we prove that one of the maximal curves that we exhibit in the previous section is not Galois-covered by the Hermitian curve. § FAMILIES OF WIMAN'S AND EDGE'S SEXTICS In 1896 Wiman introduced in <cit.> the family of sextics defined by the equation W_a,b x^6+y^6+1 +a (x^4y^2+x^2y^4+x^4+x^2+y^4+y^2) +b x^2y^2=0, where a,b∈ℂ. Similarly, in 1981, Edge considered in <cit.> the family of sextics described by the equation E_α x^6+y^6+1+(x^2+y^2+1)(x^4+y^4+1)-12x^2y^2+α(y^2-1)(1-x^2)(x^2-y^2)=0, with α∈ℂ. The geometry of the Wiman–Edge pencil is studied in <cit.>. By considering Wiman's and Edge's sextics defined over a finite field rather than ℂ, in <cit.> and <cit.> it has been shown that they provide several examples of 𝔽_p^2-maximal curves, or of curves attaining the Serre bound over 𝔽_p and 𝔽_p^3. This allowed to update several entries in <http://www.manypoints.org> (<cit.>). By combining the defining equations of the Wiman's and Edge's sextics, a generalization of these two families was introduced and studied in <cit.>: S x^6+y^6+1 +a (x^4y^2+x^2+y^4) +b(x^2y^4+x^4+y^2) +c x^2y^2=0. It is readily seen that if a=b then S is a Wiman's sextic, whereas if a=(1-α)/2, b=(1+α)/2 and c=-6 then S is an Edge's sextic. In this paper we focus on S_a,b, obtained from S by setting c=-3(a+b+1), (see Section <ref>) and on W_a,b (see Section <ref>). The idea for studying the number of 𝔽_q-rational points of S_a,b and W_a,b is to decompose their Jacobian varieties. For this we will make use of the following theorem by Kani and Rosen. <cit.> Let C be a curve and let G be a finite subgroup of the automorphism group Aut(C) such that G=H_1∪⋯∪ H_n, where the H_i's are subgroups of G with H_i∩ H_j={1_G} for i≠ j. Then the following isogeny relation holds: Jac(C)^n-1×Jac(C/G)^g≃Jac(C/H_1)^h_1×⋯×Jac(C/H_n)^h_n, where g=|G| and h_i=|H_i|. As we will see in Sections <ref> and <ref> we can always decompose Jac(S_a,b) and Jac(W_a,b) as a product of elliptic curves and Jacobians of hyperelliptic curves of degree 6. This is why in the next section, in order to fully decompose the Jacobians as a product of elliptic curves, we first study the decomposition of the Jacobian variety of a degree-6 hyperelliptic curve. § DECOMPOSING THE JACOBIAN OF CERTAIN CURVES OF GENUS TWO In this section, we deal with the decomposition of the Jacobian variety Jac(D) of a hyperelliptic curve D y^2=f(x), where f(x) is a degree 6 polynomial defined over _q. In particular we prove that, under certain assumptions on f(x), the Jacobian Jac(D) splits completely as a product of two elliptic curves. The idea of the proof comes from Section 1.2 of <cit.>, where the authors work over an algebraically closed field, rather than a finite field. Assume that the polynomial f(x) factors completely over the finite field _q, i.e. f(x)=c ·Π_i=1^6(x-a_i) with a_i ∈_q for i=1, …, 6, a_i ≠ a_j when i ≠ j. Assume, moreover, that there exists a permutation of the roots such that (a_2-a_4)(a_1-a_6)(a_3-a_5)=(a_2-a_6)(a_1-a_5)(a_3-a_4). and set λ=(a_1-a_3)(a_2-a_4)(a_2-a_3)(a_1-a_4)μ=(a_1-a_3)(a_2-a_5)(a_2-a_3)(a_1-a_5), θ=c·(a_2-a_3)(a_1-a_4)(a_1-a_5)(a_1-a_6). Then, we have the following: (a) The hyperelliptic curve D is isomorphic to y^2=θ x(x-1)(x-λ)(x-μ)(x-λ(1-μ)1-λ) over the finite field _q. (b) Assume that there exists a square root of λ(λ-μ) in the finite field _q. Then the Jacobian of the hyperelliptic curve D decomposes over _q as Jac(D) ∼ E_σ× E_τ, where E_σ and E_τ are the elliptic curves defined by the equations s^2=θ(1-μ)1-λt(t-1) (t-(1-λ) (μ-2λ±2(λ^2-λμ)^1/2)μ-1). Moreover, the number of rational points of D satisfies #D(_q) = #E_σ(_q)+#E_τ(_q)-q-1. (a) Let us consider the map defined by x ↦(a_1-a_3)(a_1-a_2)+x-a_1(x-a_1)(a_2-a_3), y ↦(a_1-a_2)^2(a_1-a_3)^2y(a_2-a_3)^2(x-a_1)^3. With this map, D is isomorphic to: y^2=θ x(x-1)(x-λ)(x-μ)(x-ν), with ν=(a_1-a_3)(a_2-a_6)(a_2-a_3)(a_1-a_6). Since ν=λ(1-μ)1-λ is equivalent to (a_2-a_4)(a_1-a_6)(a_3-a_5)=(a_2-a_6)(a_1-a_5)(a_3-a_4), we have (a). (b) Using Equation (<ref>), the following maps define three automorphisms of the hyperelliptic curve D: σ x ↦λ(x-μ)x-λ, y ↦ y λ^3/2(λ-μ)^3/2(x-λ)^3, ι x ↦ x, y ↦ -y, and τ= σ·ι. Moreover, as λ(λ-μ) has a square root in 𝔽_q, these automorphisms are defined over 𝔽_q. Let E_σ and E_τ be the quotient curves D⟨σ⟩ and D⟨τ⟩, respectively. By setting t=x+λ(x-μ)x-λ and s=yx-(λ∓(λ^2-λμ)^1/2)(x-λ)^2, we have the following defining equations for E_σ and E_τ s^2=θ(t-μ)(t-1-λμ1-λ) (t-2(λ∓(λ^2-λμ)^1/2)), which are birationally equivalent to s^2=θ(1-μ)1-λt(t-1) (t-(1-λ) (μ-2λ±2(λ^2-λμ)^1/2)μ-1). Hence, we have that Jac(D) ∼ E_σ× E_τ by Theorem B of <cit.>. Let us now prove that #D(_q) = #E_σ(_q)+#E_τ(_q)-q-1. It is well known that #D(_q)=q+1-t, where t is the trace of the Frobenius endomorphism acting on a Tate module of Jac(D). Since Jac(D) ∼ E_σ× E_τ, then the Tate module of D is isomorphic to the direct sum of the Tate modules of E_σ and E_τ. Hence t=t_1+t_2, where t_1 and t_2 are the traces of the Frobenius on the Tate modules of E_σ and E_τ respectively. The result follows by recalling that t_1=q+1-#E_σ(F_q) and t_2=q+1-#E_τ(_q). § NEW SEXTICS OF GENUS SIX ATTAINING THE SERRE BOUND In this section we consider the family of sextics S_a,b x^6+y^6+1 +a (x^4y^2+x^2+y^4) +b(x^2y^4+x^4+y^2) -3(a+b+1)x^2y^2=0. We recall the following two results from <cit.>. <cit.> Let k be a field of characteristic p≥ 5. Let a,b∈ k and let D_a,b y^2=f_a,b(x) be the hyperelliptic curve with f_a,b(x)= -(x^3+bx^2+ax+1)((a+b+2)x^3-(a+2b+3)x^2+(b+3)x-1). Then, the Jacobian variety of the sextic S_a,b decomposes over k as Jac(S_a,b) ∼Jac(D_a,b) ×Jac(D_b,a)^2. Note that if a+b+2, -4a^3 + a^2b^2 + 18ab - 4b^3 - 27, and the resultant of x^3+bx^2+ax+1 and (a+b+2)x^3-(a+2b+3)x^2+(b+3)x-1 are not 0 then the genus of the hyperelliptic curve D_a,b is 2, and therefore S_a,b has genus 6. Moreover, since #D_b,a(_q) =#D_a,b(_q), we have the following corollary. <cit.> When a,b∈𝔽_q, then the number of 𝔽_q-rational points of the sextic S_a,b is #S_a,b(_q) = 3 #D_a,b(_q)-2q-2. By Corollary <ref>, in order to compute #S_a,b(_q), it is enough to compute #D_a,b(_q). When the polynomial f_a,b(x) satisfies the assumptions in Theorem , then the number of rational points of the sextic S_a,b is #S_a,b(_q) = 3 #E_σ(_q)+3#E_τ(_q)-5q-5, where E_σ and E_τ are defined as in Theorem (b). The claim follows from Theorem  and Corollary <ref>. Because of Corollary <ref>, we can determine the number of rational points of the sextic S_a,b by computing the number of rational points of the two elliptic curves E_σ and E_τ. Using this fact and a MAGMA(<cit.>)-aided search, we were able to find new examples of curves of genus 6 attaining the Serre bound. The sextic S_444, 469 x^6+y^6+1+444(x^4y^2+x^2+y^4)+469(x^2y^4+x^4+y^2)+1239x^2y^2=0 has 1760 rational points over the finite field _1327 and therefore it attains the Serre bound. Its automorphism group has 12 elements. We can apply Theorem <ref> to the hyperelliptic curve D_444, 469 y^2=f_444,469(x), where f_444,469(x)=c(x-a_1)(x-a_2)(x-a_3)(x-a_4)(x-a_5)(x-a_6), with c=412, a_1=548, a_2=541, a_3=289, a_4=364, a_5=344, a_6=28. Note that the order of a_i for i= 1, …, 6 is important. With the notation of Theorem <ref>, we have λ=611, μ=656 and θ=696. Hence, Jac(D_444,469) ∼ E_σ× E_τ where E_σ s^2=247t(t-1)(t-811) and E_τ s^2=247t(t-1)(t-1084). So #S_444,469(_q) = 3 #E_σ(_q)+3#E_τ(_q)-5q-5. The sextic S_0, 7 x^6+y^6+1+7(x^2y^4+x^4+y^2)+35x^2y^2=0 is maximal over the finite field _59^2. Its automorphism group has 12 elements. We can apply Theorem <ref> to the hyperelliptic curve D_0, 7 y^2=f_0,7(x) with f_0,7(x)=50(x-39)(x-25)(x-8)(x-27)(x-23)(x-4). Therefore, with the notation of Theorem <ref>, we have λ=13, μ=5 and θ=33. Hence Jac(D_0,7) ∼ E_σ× E_τ with E_σ s^2=11t(t-1)(t-30) and E_τ s^2=11t(t-1)(t-37). So #S_0,7(_q) = 3 #E_σ(_q)+3#E_τ(_q)-5q-5. Other examples for which S_a,b is maximal over the finite field _p^2 are obtained for (p,a,b)∈{(59,0,7), (71,13,23), (79,9,18), (83,36,52), (107,53,58), (139,103,115), (167,7,52), (179,131,154), (191,61,85), …}. We also observed that all the curves corresponding to the listed triplets have automorphism group of order 12. The sextic S_558, 2522 x^6+y^6+1+558(x^4y^2+x^2+y^4)+2522(x^2y^4+x^4+y^2)+2425x^2y^2=0 attains the Serre bound over _2917^3. Its automorphism group has 12 elements. We apply Theorem <ref> to the hyperelliptic curve D_558, 2522 y^2=f_558,2522(x) with f_558,2522(x)=2752(x-2694)(x-2107)(x-2027)(x-1853)(x-2095)(x-879). Therefore, λ=171, μ=932 and θ=893. Hence, Jac(D_558,2522) ∼ E_σ× E_τ with E_σ s^2=2677t(t-1)(t-1194) and E_τ s^2=2677t(t-1)(t-1495). So #S_558,2522(_q) = 3 #E_σ(_q)+3#E_τ(_q)-5q-5. For (p,a,b)∈{(67,6,62), (101,25,59), (673,40,460), (677,1,76), (1153,65,957), (2113,287,438), (2311,431,1253), (2707,143,2372), (2909,174,1715), (2917,558,2522), (3361,1062,1788), ⋯}, the sextic S_a,b attains the Serre bound over the finite field _p^3. The automorphism group of the curves corresponding to the above listed examples has order 12, except when (p,a,b)=(67,6,62), in which case it has order 60. § THE WIMAN SEXTICS OF GENUS TEN ATTAINING THE SERRE BOUND We consider in this section the family of sextics W_a,b: x^6+y^6+1+a(x^4y^2+x^2y^4+x^4+x^2+y^4+y^2)+bx^2y^2=0. In <cit.>, the case b=-6a-3 was investigated, and new examples of curves of genus 6 attaining the Serre bound were provided. In <cit.>, the Jacobian of the sextic W_0,b over the finite field _p^2 was completely decomposed, and this allowed to find new maximal curves of genus 10. Moreover, in <cit.>, the Jacobian of W_0,b was completely decomposed over any field k whose characteristic > 5, bringing to find a new sextic of genus 10 attaining the Serre bound. With a computer search on the family of genus 10 sextics W_a,b, we found new examples of curves with genus 10 and many rational points. We list such examples in Table <ref> and we highlight how they improve the old entries from <http://www.manypoints.org> (<cit.>). In <cit.>, we also proved the following result on the decomposition of the Jacobian of W_a,b over a field k whose characteristic > 5. <cit.> The Jacobian of the Wiman sextic W_a,b over a field k satisfies the following isogeny relation Jac(W_a,b) ∼ V_1^3 × V_2 ×Jac(V_3)^3, where V_1, V_2 and V_3 are defined by V_1 y^2=((3a-b-3)x-a+3)(1+(a-3)x(1-x)), V_2 x^3+y^3+1+a(x^2y+xy^2+x^2+x+y^2+y)+bxy=0, V_3 y^2=-((a+1)x^3+(2a+b)x^2+4ax+4)(x^3+ax^2+ax+1). In the cases where Theorem <ref> applies to the hyperelliptic curve V_3 of genus 2, we can completely decompose the Jacobian of V_3, and hence we can completely decompose the Jacobian of W_a,b. With a computer search we were able to find the following examples. The sextic W_5,17:x^6+y^6+1+5(x^4y^2+x^2y^4+x^4+x^2+y^4+y^2)+17x^2y^2=0 has 990 rational points over _23^2 and it is a maximal curve of genus 10. Its automorphism group has 24 elements. In Section <ref>, we will prove that this maximal curve is not a quotient of the Hermitian curve ℋ_23 defined over 𝔽_23^2. From Propostion <ref>, we have that Jac(W_5,17) ∼ V_1^3 × V_2 ×Jac(V_3)^3. By applying Theorem <ref> to V_3 y^2=17(x-22)(x-14)(x-13)(x-11)(x-6)(x-5), we obtain that Jac(V_3) ∼ E_1 × E_2 where E_1:s^2=20t(t-1)(t-21), E_2:s^2=20t(t-1)(t-22). Since V_1, V_2 are birational equivalent to E_3:s^2=t(t-1)(t-3), E_4:s^2=6t(t-1)(t-22). respectively, we have that Jac(W_5,17) ∼ E_1^3 × E_2^3 × E_3^3 × E_4. For (p,a,b)∈{(167,27,40), (191,49,131), (239,119,216),(263,51,123), (431,257,322), (503,33,274), (599,352,358), (719,254,557), (887,388,609), …}, the Wiman sextic W_a,b is maximal over the finite field _p^2. The automorphism group of all the explicit examples listed here has 24 elements. The sextic W_7,120:x^6+y^6+1+7(x^4y^2+x^2y^4+x^4+x^2+y^4+y^2)+120x^2y^2=0 has 7242678 rational points over _193^3. It has genus 10 and it attains the Serre bound. Its automorphism group has 24 elements. From Proposition <ref>, we have that Jac(W_7,120) ∼ V_1^3 × V_2 ×Jac(V_3)^3. Applying Theorem <ref> to V_3 y^2=185(x-192)(x-122)(x-101)(x-110)(x-89)(x-86), we have that Jac(V_3) ∼ E_1 × E_2 where E_1:s^2=38t(t-1)(t-56), E_2:s^2=38t(t-1)(t-7). Since V_1, V_2 are birational equivalent to E_3:s^2=14t(t-1)(t-164), E_4:s^2=88t(t-1)(t-173) respectively, we have that Jac(W_7,120) ∼ E_1^3 × E_2^3 × E_3^3 × E_4. For (p,a,b)= (2909,2271,2350), (4349,1169,4282) the sextic W_a,b attains the Serre bound over the finite field _p^3. Both automorphism groups have 24 elements. § W_5,17 IS NOT GALOIS-COVERED BY ℋ_23 OVER 𝔽_23^2 In the previous section we proved that the sextic W_5,17 is an _23^2-maximal curve of genus 10. In this section, we prove that this maximal curve is not a quotient curve of the Hermitian curve ℋ_23 defined over 𝔽_23^2. Recall that every subgroup G of the automorphism group of ℋ_q, which is isomorphic to (3,q), produces a quotient curve _q/G, and the cover _q→_q/G is a Galois cover defined over 𝔽_q^2. The degree of the different divisor Δ of this covering is given by the Riemann-Hurwitz formula <cit.>, Δ=(2g(_q)-2)-|G|(2g(_q/G)-2). On the other hand, Δ=∑_σ∈ G∖{id}i(σ), where i(σ)≥ 0 is given by the Hilbert's different formula <cit.>, namely i(σ)=∑_P∈_q(𝔽̅_q)v_P(σ(t)-t), where t is a local parameter at P. By analyzing the geometric properties of the elements σ∈(3,q), it turns out that there are only a few possibilities for i(σ). This is stated in the next results on how an element of a given order in (3,q) acts on the set of 𝔽_q^2-rational points of _q, denoted by _q(𝔽_q^2). We recall that a linear collineation σ of (2,𝕂) is a (P,ℓ)-perspectivity, if σ preserves each line through the point P (the center of σ), and fixes each point on the line ℓ (the axis of σ). A (P,ℓ)-perspectivity is either an elation or a homology according as P∈ℓ or P∉ℓ. (<cit.>) For a nontrivial element σ∈(3,q), one of the following cases holds. (A) ord(σ)|(q+1) and σ is a homology whose center P is a point of _q and whose axis ℓ is a chord of _q(𝔽_q^2) such that (P,ℓ) is a pole-polar pair with respect to the unitary polarity associated to _q(𝔽_q^2). (B) ord(σ) is coprime to p and σ fixes the vertices P_1,P_2,P_3 of a non-degenerate triangle T. (B1) The points P_1,P_2,P_3 are -rational, P_1,P_2,P_3∉_q and the triangle T is self-polar with respect to the unitary polarity associated to _q(𝔽_q^2). Also, (σ)|(q+1). (B2) The points P_1,P_2,P_3 are -rational, P_1∉_q, P_2,P_3∈_q. Also, (σ)|(q^2-1) and (σ)∤(q+1). (B3) The points P_1,P_2,P_3 have coordinates in 𝔽_q^6∖𝔽_q^2, P_1,P_2,P_3∈_q. Also, (σ)| (q^2-q+1). (C) ord(σ)=p and σ is an elation whose center P is a point of _q and whose axis ℓ is a tangent of _q(𝔽_q^2); here (P,ℓ) is a pole-polar pair with respect to the unitary polarity associated to _q(𝔽_q^2). (D) ord(σ)=p with p2, or ord(σ)=4 and p=2. In this case σ fixes an -rational point P, with P ∈_q, and a line ℓ which is a tangent of _q(𝔽_q^2); here (P,ℓ) is a pole-polar pair with respect to the unitary polarity associated to _q(𝔽_q^2). (E) p| ord(σ), p^2∤ ord(σ), and ord(σ) p. In this case σ fixes two -rational points P,Q, with P∈_q, Q∉_q. In the rest of the section, a nontrivial element of (3,q) is said to be of type (A), (B), (B1), (B2), (B3), (C), (D), or (E), as given in Lemma <ref>. (<cit.>) For a nontrivial element σ∈(3,q) one of the following cases occurs. * If (σ)=2 and 2|(q+1), then i(σ)=q+1. * If (σ)=3, 3 |(q+1) and σ is of type (B3), then i(σ)=3. * If (σ) 2, (σ)|(q+1) and σ is of type (A), then i(σ)=q+1. * If (σ) 2, (σ)|(q+1) and σ is of type (B1), then i(σ)=0. * If (σ)|(q^2-1) and (σ)∤(q+1), then σ is of type (B2) and i(σ)=2. * If (σ)3 and (σ)|(q^2-q+1), then σ is of type (B3) and i(σ)=3. * If p=2 and (σ)=4, then σ is of type (D) and i(σ)=2. * If (σ)=p, p 2 and σ is of type (D), then i(σ)=2. * If (σ)=p and σ is of type (C), then i(σ)=q+2. * If (σ) p, p|(σ) and (σ)4, then σ is of type (E) and i(σ)=1. Finally, the following useful corollary can be deduced from the proof of Theorem 5 in <cit.>. Let be the quotient curve _q/G, where G is a subgroup of PGU(3,q). Then |_q(𝔽_q^2)|/|(𝔽_q^2 )|≤ |G|≤2g(_q)-2/2g()-2. W_5,17 is not Galois-covered by ℋ_23 over 𝔽_23^2. Assume, by the way of contradiction, that W_5,17 is Galois-covered by ℋ_23 over 𝔽_23^2. So, there exists G≤PGU(3,23) such that W_5,17 is the quotient curve ℋ_23/G. First, by Proposition <ref>, it follows that 12<|_23(𝔽_23^2)|/|W_5,17(𝔽_23^2 )|≤ |G|≤2g(_23)-2/2g(W_5,17)-2=28. Furthermore, as |G| must divide |PGU(3,23)|=2^7· 3^3· 11 · 13^2· 23^3, we have that |G|∈{13, 16,18,22,23,24,26,27}. Observe that, by the Riemann-Hurwitz formula, the degree of the different divisor Δ can be computed as Δ=(2g(_23)-2)-|G|(2g(W_5,17)-2). To conclude the proof, we will rule out each of these possibilities for |G|. * |G|=13. In this case G contains exactly 12 elements of order 13, and hence Lemma <ref> yields Δ=12· 3=36, a contradiction with Equation (<ref>). * |G|=16. A MAGMA aided computation shows that there are 9 subgroups (up to conjugation) of order 16 in (3,23), namely G_i=S[i] where S:=Subgroups((3,23): OrderEqual:=16) and i∈{1,…,9}. Let N_i be the normalizer of G_i in (3,23) and Q_i be the factor group N_i/G_i. Then, from Galois theory, Q_i is a subgroup of Aut(W_5,7) ≅ S_4. By direct check with MAGMA, the order of Q_i is not a divisor of 24 for i∈{1,…,5}, a contradiction. In the remaining cases we have |Q_6|=|Q_7|=24 and |Q_8|=|Q_9|=12. However, all such groups are abelian, a contradiction with the structure of S_4. * |G|=18. By Lemma <ref>, G does not contain any non-trivial elements of orders 9 and 18. Therefore, by Sylow theorems, it contains exactly 8 elements of order 3, and Lemma <ref> yields 180=Δ=24· i_2+8· k+(17-8-i_2)· l, where i_2∈{1,3,9} is the number of involutions in G, k∈{0,3,24} and l∈{0,24}. This is a contradiction as 180 is not divisible by 8. * |G|=22. In this case G is isomorphic to either the cyclic or the dihedral group of order 22. Now, Lemma <ref> yields Δ=24+10· 2+10· 2, in the former case, and Δ=24· 11+10· 2, in the latter case. In both cases this is a contradiction with Δ=108. * |G|=23. In this case G contains 22 elements of order 23, whence Lemma <ref> yields Δ=22· i, where i∈{2,25}. This is a contradiction with Δ=90. * |G|=24. First observe that if G contains more than 3 involutions, Lemma <ref> yields Δ>24· 3, a contradiction with Δ=72. Checking with MAGMA, there are 20 subgroups (up to conjugation) of order 24 in (3,23) with at most 3 involutions, namely G_i=S[i] where S:=Subgroups((3,23): OrderEqual:=24); and i∈{1,…,18}∪{22,23}. For i∈{1,…,18}∪{22,23}, let N_i be the normalizer of G_i in (3,23) and Q_i be the factor group N_i/G_i. Then, if i∈{1,…,10}, the order of Q_i is not a divisor of 24, a contradiction with Q_i being a subgroup of S_4. On the other hand, in the remaining cases Q_i contains an abelian subgroup of order 12, a contradiction with the structure of S_4. * |G|=26. By Sylow theorems G contains exactly 12 elements of order 13. Therefore Lemma <ref> yields Δ≥ 12· 3+24· i, where i≥ 1 is the number of involutions in G. This is a contradiction with Δ=36. * |G|=27. Checking with MAGMA, there is a unique (up to conjugation) subgroup of order 27 in (3,23), which is generated by the following automorphisms of ℋ_23: {[x:y:z]→ [a_1x:a_2y:a_3z] | a_i^3=1, i=1,2,3 } and [x:y:z]→ [z:x:y]. A MAGMA computation shows that the quotient curve of _23 by this group has genus 7, a contradiction with W_5,17 having genus 10. § ACKNOWLEDGEMENTS This research was supported by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INdAM). The second author is funded by JSPS Grant-in-Aid for Scientific Research (C) 17K05344. The third author is funded by the project “Metodi matematici per la firma digitale ed il cloud computing" (Programma Operativo Nazionale (PON) “Ricerca e Innovazione" 2014-2020, University of Perugia). abbrv
http://arxiv.org/abs/2306.03042v2
20230605170623
SERT: A Transfomer Based Model for Spatio-Temporal Sensor Data with Missing Values for Environmental Monitoring
[ "Amin Shoari Nejad", "Rocío Alaiz-Rodríguez", "Gerard D. McCarthy", "Brian Kelleher", "Anthony Grey", "Andrew Parnell" ]
cs.LG
[ "cs.LG", "cs.AI" ]
1]Amin Shoari Nejad* 2]Rocío Alaiz-Rodríguez 3]Gerard D. McCarthy 4]Brian Kelleher 4]Anthony Grey 1]Andrew Parnell SHOARI NEJAD et al [1]Hamilton Institute, Insight Centre for Data Analytics, Maynooth University, Kildare, Ireland [2]Department of Electrical, Systems and Automation, University of León, Spain [3] ICARUS, Department of Geography, Maynooth University, Maynooth, Ireland [4] Organic Geochemical Research Laboratory, Dublin City University, Glasnevin Campus, Dublin 9, Ireland *Amin Shoari Nejad, [email protected] [Abstract]Environmental monitoring is crucial to our understanding of climate change, biodiversity loss and pollution. The availability of large-scale spatio-temporal data from sources such as sensors and satellites allows us to develop sophisticated models for forecasting and understanding key drivers. However, the data collected from sensors often contain missing values due to faulty equipment or maintenance issues. The missing values rarely occur simultaneously leading to data that are multivariate misaligned sparse time series. We propose two models that are capable of performing multivariate spatio-temporal forecasting while handling missing data naturally without the need for imputation. The first model is a transformer-based model, which we name SERT (Spatio-temporal Encoder Representations from Transformers). The second is a simpler model named SST-ANN (Sparse Spatio-Temporal Artificial Neural Network) which is capable of providing interpretable results. We conduct extensive experiments on two different datasets for multivariate spatio-temporal forecasting and show that our models have competitive or superior performance to those at the state-of-the-art. SERT: A Transfomer Based Model for Spatio-Temporal Sensor Data with Missing Values for Environmental Monitoring [ July 31, 2023 ================================================================================================================ § INTRODUCTION The importance of spatio-temporal forecasting has increased significantly in recent years due to the availability of large-scale spatio-temporal data from various sources such as sensors and satellites <cit.>. Spatio-temporal forecasting involves predicting how data vary over space and time, which is critical for a wide range of applications such as water quality forecasting <cit.>. A common approach to modelling spatio-temporal data is to use a multivariate time series structure, where each time series is associated with a variable at a specific location <cit.>. Spatio-temporal data often contain missing values which is a common problem in environmental monitoring, and can be caused by sensor failure, malfunction or communication problems (for example see Figure <ref>). A common remedy for forecasting with missing data is to impute the missing values using a variety of methods <cit.>. However, these methods are not always effective, can reduce the signal in the data, and thus make the modelling or forecasting task more challenging and unreliable. Therefore it is important to design models that can handle missing data naturally whilst simultaneously learning the underlying patterns in the data without resorting to imputation. We propose a new model that is capable of performing multivariate spatio-temporal forecasting named SERT (Spatio-temporal Encoder Representations from Transformers). Our model is an extension of the well-known transformer architecture that has shown remarkable success in natural language processing and also image analysis <cit.>. SERT is designed to capture the complex joint temporal and spatial dependencies among the input variables. An important feature of our proposed model which differentiates it from the many other available methods is its ability to handle missing data more naturally without requiring any missing value imputation. In addition to the SERT model, we introduce an interpretable simplified version that provides insights into the underlying factors that drive the predicted values, which can assist in decision and policy-making. Our proposed simplified model is named SST-ANN (Sparse Spatio-Temporal Artificial Neural Network) and removes the transformer layers from the SERT structure. Despite being less accurate than SERT, it is capable of providing insightful results with faster computation time while similarly being able to handle missing values. Depending on the complexity of the problem, the required accuracy and the available computational resources, the user can fit both SERT and SST-ANN, using the former to provide more accurate forecasts and the latter to forecast and gain insights about how the results were obtained. To evaluate the performance of our proposed models, we conducted extensive experiments on two different datasets for multivariate spatio-temporal forecasting. We fitted our models to a simulated dataset to assess their ability to function under different levels of sparsity. We also evaluated the performance of the models on a real-world dataset, including missing values, of environmental variables in Dublin Bay for 7 hour ahead forecasting. Our experimental results show that our models are competitive with state-of-the-art models for multivariate spatio-temporal forecasting. Our paper is organized as follows. In Section <ref>, we provide a brief overview of related work on models developed for analysing sequential data in general and spatio-temporal forecasting applied to environmental monitoring in particular. In Section <ref>, we describe the proposed SERT and SST-ANN models in detail. In Section <ref>, we present the experimental results and analysis. Finally, in Section <ref>, we conclude the paper and discuss future directions of research. § RELATED WORK In this section we provide a brief overview of the recent developments in deep learning models for sequential data analysis and spatio-temporal models for environmental monitoring, and also methods for handling missing data and adding interpretability to deep learning models applied to time series data. §.§ Deep Learning Models for Sequential Data Recurrent Neural Networks (RNNs) have been one of the most popular deep learning models for sequential data <cit.>. However, RNNs suffer from the vanishing gradient problem which makes them unable to learn long-term dependencies in the data. To address this problem, Long Short-Term Memory (LSTM) networks <cit.> and Gated Recurrent Unit (GRU) networks <cit.> were introduced. These models have been applied to various tasks such as machine translation <cit.>, speech recognition <cit.>, and time series forecasting <cit.>. More recently a new type of deep learning model named transformers was introduced <cit.>. Transformers are based on the attention mechanism which enables them to learn the dependencies between the input and output sequences. Transformers are comprised of an encoder and a decoder network. The encoder network is responsible for learning the representation of the input sequence, while the decoder network is responsible for generating the output sequence based on the learned representation. Models developed on the transformer architecture include BERT <cit.>, which uses only the encoder part, and GPT <cit.> which uses only the decoder part. These models have been applied to various tasks in natural language processing such as question answering <cit.>, text classification <cit.>, and text summarization <cit.>. Overall, transformers have proven themselves to be more effective than recurrent based models in many applications, especially in natural language processing. §.§ Deep Learning Models for Spatio-Temporal data Spatio-temporal forecasting is often framed as multivariate time series forecasting where each time series is associated with a variable at a specific location. The application of deep learning models for spatio-temporal forecasting is not new. For example, <cit.> used an LSTM to forecast daily land surface temperature, and <cit.> developed an ensemble quadratic echo state network for forecasting Pacific sea surface temperature. However, the application of transformers to spatio-temporal forecasting is relatively new and challenging because transformers have been mainly developed in the field of natural language processing (NLP). Nonetheless researchers were inspired by the success of transformers in NLP and started adapting them to spatio-temporal forecasting which can be formulated as a sequence-to-sequence problem, where the input is a sequence of historical observations of multiple variables at different locations, and the output is a sequence of future predictions of the same variables at the same locations. A common approach for sequence-to-sequence modeling is to use an encoder-decoder architecture, where an encoder network maps the input sequence into a latent representation, and a decoder network generates the output sequence from the latent representation <cit.>. <cit.> used this idea to develop a new model called Spacetimeformer and applied it to traffic prediction and weather forecasting. However, to the best of our knowledge, the application of transformers to environmental monitoring is limited to the recent work by <cit.> who used a transformer-based model for hourly PM_2.5 forecasting in Los Angeles. §.§ Addressing Missing Values in Modelling As mentioned in the introduction, a major challenge in spatio-temporal forecasting in the environmental monitoring context is dealing with missing values. A common approach for dealing with the missing values is imputation <cit.> before conducting any analysis. In time series modelling care needs to be taken to avoid introducing bias; last observation carried forward is a common approach. An alternative used in the literature is that of a Bayesian framework which enables defining a prior distribution over the missing values so that they can be inferred with the other unobserved parameters when fitting the models. For example, <cit.> proposed a Bayesian model (called VARICH) for spatio-temporal modelling of turbidity data with many missing values. However, this approach is computationally expensive and requires a large number of samples from the posterior distribution to obtain acceptable results and thus is not suitable for large spatio-temporal datasets. To address the missing data problem in time series, <cit.> proposed a novel approach to encode multivariate time series using set functions and introduced a new model called SeFT for classifying time series with irregularly sampled clinical data. More recently, <cit.> proposed a new transformer based model called STraTS that represents each observation as a triplet of the form (time, variable name, value). As opposed to SeFT, STraTS uses a learnable positional encoding and a Continuous Value Embedding (CVE) scheme that is a one-to-many feed-forward network. STraTS was developed to perform multivariate time series forecasting during the pre-training phase and classification as the final task on irregularly sampled clinical data. It has been shown to have higher accuracy than its predecessor, the SeFT model, when applied to the classification of clinical time series. Both SeFT and STraTS can handle missing values without requiring any imputation. To the best of our knowledge, none of these novel methods have been applied to environmental monitoring challenges; we adapt STraTS to a spatio-temporal setting. §.§ Interpretability of Deep Learning Models Deep learning models are often considered as black-box models because they are hard to interpret <cit.>. However, in many applications, it is important to understand the model's decision making process <cit.>. Some authors have proposed methods to help interpret the results of deep learning models applied to time series data. For example, SeFT uses the attention mechanism in its architecture and the authors of the work showed that the attention weights can be used to gain insights into the importance of input data, including multiple variables. <cit.>, inspired by <cit.> and <cit.>, proposed an interpretable version of the STraTS model, called STraTS-I, which uses an almost identical structure to their STraTS model but instead uses encoded inputs directly to the output layer as opposed to STraTS that uses the contextualized inputs to the output layer. Their approach allows for the calculation of a contribution score for each input observation towards the prediction, achieved by multiplying the encoded input, attention weights, and output layer weights. This modification aims to compensate accuracy for interpretability while both models have similar computational complexity. We follow a similar simplification routine in the creation of our SST-ANN approach explained in Section <ref>. § PROPOSED METHODS In this section we first define the problem followed by the details of the general model architectures that we use to build SERT and SST-ANN to address the problem. We then introduce a modification for encoding location information in the models' input data. Finally, we describe the masked loss function that we use for training the models. §.§ Problem Definition We have a dataset D = {(T_j, Y_j, M_j)}_j=1^J where T_j is a multivariate time series consisting of triples (time, variable, value) written as {(t_i, f_i, v_i), i = 1,…,N}. Y_j is the values (and associated variables) for a future time horizon for which we want the model to forecast. M_j is a binary vector indicating whether each of element in Y_j is observed for sample j in the dataset. M_j is used in the loss function (see Section <ref> for details) for masking the unobserved values in the forecast window. A schematic of a sample from the dataset is shown in Figure <ref>. The goal is to learn a model G that maps T_j to Y_j, i.e., G(T_j) = Y_j, without imputing the missing values in T_j or aligning the time series. §.§ SERT We first describe the data encoding scheme and then the model architecture including an encoder network and a linear layer. The schematic diagram of the model is shown in Figure <ref>. We will slightly modify this structure to show an alternative approach for encoding the location information in Section <ref>. Data Encoding Scheme The input data to our model is a multivariate time series dataset T that consists of N time series, each of which is associated with a time series variable f which is a sequence of observation values v's. Accordingly, an individual data point i is represented as a triplet (t_i, f_i, v_i) where t_i is time, f_i is the variable name and v_i is the measured value of the data point. To use the triplet in the model, we encode each component into an embedding and then add the embeddings together. Let e_i^f∈ R^d be the embedding of the variable name f_i which can be encoded similar to words using a lookup table, e_i^t∈ R^d be the embedding of the time index t which can be encoded using a continuous value embedding (CVE) scheme which is a one-to-many feed forward neural network <cit.> and e_i^v∈ R^d be the embedding of the value v_i which can also be encoded using the CVE. The embedding of the triplet i is then defined as e_i = e_i^f + e_i^t + e_i^v. The size of the embedding vector d is a hyperparameter of the model. Encoder Network Similar to the well-known BERT model <cit.>, the main component of our model is the encoder part of the transformer model introduced by <cit.>. Since transformers have become very common, we omit the details of the architecture and refer the reader to <cit.> for the full description. Intuitively, we can think of the encoder network as layers that take the triplet embeddings of the input data and transform them into contextualized embeddings that capture the long-range dependencies within a time series as well as cross dependencies between different time series. Linear Layer After obtaining the contextualised embeddings of the input data using the encoder network, we then flatten the embeddings and apply a linear layer to them to generate the predictions. The linear layer is a feed-forward network with a single hidden layer and a ReLU activation function. §.§ SST-ANN The SST-ANN model is a simplified version of the SERT model that consists of only the triplet encoding and a linear layer to the output with no transformer structure in between. SST-ANN first encodes the input data using the triplet encoding scheme and then uses the embeddings as the input to single layer feed forward network to generate the predictions. Since there is no transformer structure in between, the SST-ANN model is much faster than the SERT model and, using the embeddings and the weights of the linear layer, we can compute a contribution score for each observation to the final prediction. This is useful for interpretability and variable importance analysis. More formally the output of the model can be expressed as follows: ŷ_k = ∑_i=1^N c_i + b, c_i = W_ik^T· e_i, where ŷ_k is the prediction of variable k, c_i is the contribution of the triplet (t_i, f_i, v_i) to the prediction, N is the number of observations in the input sample, b is the bias term, e_i are the embeddings of the triplet and W_ik is the vector of output weights associated with the embedding e_i and the target variable k. Using the contribution scores, we can define a variable importance index. We first calculate the average contribution value of all observations belonging to the same variable as the average contribution of that variable. This calculation can be performed for a single sample to gain insights into the importance of the predictor variable for a specific target prediction, or for multiple samples used in multiple predictions to obtain an overall understanding of the predictor variable's importance in general. Next, we compute the importance of each variable by normalizing the absolute value of the average contribution values for the variables. More formally, we can express this as: I_k = |c̅_k|/∑_k=1^K |c̅_k|× 100 where I_k is the importance (in percentage) and c̅_k is the average contribution value of the variable k. §.§ Location Encoding We consider two different approaches to encode the location information in the input data. The first approach is to encode the location information together with the variable name f_i in the triplet encoding scheme. For example, our naming scheme for the variables can be f_i = {location}_i·{variable}_i where {location}_i is the location of the time series i and {variable}_i is the variable name of the time series i (e.g. Tolka.Turbidity). This approach uses the exact same architecture explained in Section <ref>. The second approach is to encode the location information separately from the variable name. This way the input time series are all assumed to arise from the same location and the location embedding is concatenated to the contextualized embeddings before the linear layer is used for prediction. In this approach, we need to structure the dataset such that all the time series from the same location are grouped together. Formally, we can define the grouped dataset as D^' = {(T_j^L, Y_j^L, M_j^L)}_j=1^J^' where L ∈ S is the location and S is the set of all locations. This approach needs a minimal modification to the previously described architecture and its schematic diagram is shown in Figure <ref>. The first approach is similar to how the Spacetimeformer model <cit.> encodes the location information while the second approach is similar to how the STraTS model <cit.> encodes the non-temporal information (patient demographics in their work). §.§ Masked Loss Function We use a masked Mean Squared Error (MSE) loss function to train our models. Masking in the loss function is used to handle the missing values in the output data. The masked MSE loss function is defined as follows: ℒ =1/|J|∑_j=1^J∑_k=1^K m_k^j(𝐲̃_k^j-𝐲_k^j)^2 where 𝐲_k^j is the ground truth, 𝐲̃_k^j is the predicted value, and m_k^j is the mask value of the target variable k in sample j. § EXPERIMENTS In this section, we will describe the experiments we conducted to evaluate the performance of our proposed models. We evaluated our models using both a simulated dataset and a real-world dataset. The primary objective of the simulation experiment is to investigate how the models perform under different sparsity levels. The real-world experiment aimed to serve as a proof of concept for the models' ability to conduct spatiotemporal multi-step ahead forecasting in real-world scenarios. To assess the effectiveness of our models, we compared them with a baseline Naive forecaster model, which simply uses the present hour's observation as the next hour forecast. We also compare our models against the LSTM model and the STraTS model. Since the Naive forecaster and LSTM model cannot handle missing values, we first imputed these values using a forward filling method <cit.>. The source code for our experiments is available at https://github.com/Aminsn/SERT2023https://github.com/Aminsn/SERT2023. §.§ Sparsity Analysis We first simulated a dataset that consists of 16 time series each with 40,000 observations. We denote Y_t∈ R^16 as the vector of observations at time t generated from the following process: Y_t = 2 + 0.4 Y_t-1 + X_t + s_t, s_t∼ MVN(0, Σ) where s_t is spatial random effect with mean zero and variance-covariance matrix Σ that is generated with: Σ = U · U^T, U ∈ R^16 × 16, U_i,j∼ Uniform(-1, 1), and X_t∈ R^16 is the vector of temporal effects generated as: X_t^T = [ 10sin(p_1t), cos(p_2t), p_3t, -p_3t+10sin(p_1t), 5sin(p_2t), 12cos(p_2t), 7sin(p_2t) , 8cos(p_2t), . . 2sin(p_2t), 3cos(p_2t), 12sin(p_2t), 18cos(p_2t), 4sin(p_2t), 15cos(p_2t), 11sin(p_2t), 10cos(p_2t) ] where p_1 = 0.005, p_2 = 0.0005 and p_3 = 0.002. We use the first 37,000 time steps to train the models and the remaining 3,000 time steps to evaluate the performance of them. We used the model structure shown in Figure <ref> to train our proposed models. We trained all models for 1 step ahead forecasting using the previous 10 hours of observations. We consider five different sparsity levels to fit the models. Accordingly, we remove n% of the observations randomly for n = {0%, 20%, 40%. 60%, 80%}. We use the root mean squared error (RMSE) as the evaluation metric. The results are presented in Figure <ref>. §.§ Real Dataset; Environmental Monitoring in Dublin bay, Ireland The dataset includes hourly measurements from 2017-01-01 to 2021-12-31. An example of the data is shown in Figure <ref>. The locations of the data are shown in Figure <ref>. We use the data from the first four years to train the models and the last year to evaluate their performance. We train the models using the previous 10 hours of observations as the input and forecasting seven hours ahead (forecasting only 1 or 2 hours ahead is a relatively trivial task, and forecasting >12 hours ahead reduces the performance of models that cannot account for non-seasonality). We use the same evaluation metric as in the simulated dataset. We tried both location encoding approaches (explained in Section <ref>) with our proposed models on the real-world dataset and found that the second approach, depicted in Figure <ref>, performed better and here we only report the results of the superior approach. The results are presented in Table <ref>. We utilized the same computational resources (a single NVIDIA P100 GPU) to train the models. The specifications and speed performance details of the models are reported in the Table <ref>. §.§ Interpretability of the SST-ANN model The results of the variable importance for the real-world dataset experiment are presented in Figure <ref>. The sign of the average contribution score c̅_j (explained in Section <ref>) is multiplied by the importance index I_j to give an insight into the direction of the average contribution of the predictor variable to the prediction of the target variable. In this example, we only consider the contributions of water level, temperature, wind speed and precipitation to predictions of turbidity, disolved oxygen and salinity, since we know that the former variables could affect the latter variables but not vice versa. According to the results, temperature followed by precipitation are the most important variables in predicting the target variables. § CONCLUSION In this paper, we proposed two novel models for spatio-temporal forecasting called SERT and SST-ANN. SERT is a transformer based model while SST-ANN is a simple ANN model combined with triplet encoding of the input data. Furthermore, we showed that STraTS, a model originally developed for sparse and irregularly sampled clinical time series classification, can be used for spatio-temporal forecasting, especially when missing values are present in the data. The proposed approaches do not require aggregation or missing value imputation techniques, and avoid the problems introduced by such methods. We evaluated the performance of the proposed models on a simulated dataset with varying levels of sparsity and showed that in general increasing sparsity has a negative effect in the performance of all the models, but SERT followed by STraTS and SST-ANN are more robust to the increase in sparsity. We also evaluated the performance of the proposed models on a real-world dataset of environmental variables in Dublin Bay, Ireland. The results indicate that SERT outperformed the other models in 7-hour ahead forecasting for 6 out of the 7 variables, with 3 of them being on par with STraTS while demonstrating significantly faster performance. We then showed how SST-ANN can be used to interpret the predictions of the model by calculating and using the contribution score of the input data to develop an importance index using the average contribution scores. We introduced two different methods to encode the location information in our proposed models, including encoding the time series variable name with the location name simultaneously and encoding the location name separately. However, neither of these methods takes into account the distance between the locations, which is a limitation of our work. We believe future research should focus on incorporating this information into the models, as it has the potential to improve forecasting performance and be utilized for spatiotemporal interpolation tasks. § ACKNOWLEDGEMENT We would like to express our sincere gratitude to Dublin Port Company for providing us with the real dataset. This work was supported by an SFI Investigator award (16/IA/4520).
http://arxiv.org/abs/2306.09869v2
20230616143041
Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
[ "Geon Yeong Park", "Jeongsol Kim", "Beomsu Kim", "Sang Wan Lee", "Jong Chul Ye" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CL", "cs.LG" ]
An ontological description for relativistic, massive bosons. Gerard 't Hooft Faculty of Science, Department of Physics Institute for Theoretical Physics Princetonplein 5, 3584 CC Utrecht The Netherlands http://www.staff.science.uu.nl/hooft101 ================================================================================================================================================================================= Despite the remarkable performance of text-to-image diffusion models in image generation tasks, recent studies have raised the issue that generated images sometimes cannot capture the intended semantic contents of the text prompts, which phenomenon is often called semantic misalignment. To address this, here we present a novel energy-based model (EBM) framework. Specifically, we first formulate EBMs of latent image representations and text embeddings in each cross-attention layer of the denoising autoencoder. Then, we obtain the gradient of the log posterior of context vectors, which can be updated and transferred to the subsequent cross-attention layer, thereby implicitly minimizing a nested hierarchy of energy functions. Our latent EBMs further allow zero-shot compositional generation as a linear combination of cross-attention outputs from different contexts. Using extensive experiments, we demonstrate that the proposed method is highly effective in handling various image generation tasks, including multi-concept generation, text-guided image inpainting, and real and synthetic image editing. § INTRODUCTION Diffusion models (DMs) have made significant advances in controllable multi-modal generation tasks <cit.>, including text-to-image synthesis. The recent success of text-to-image diffusion models, e.g., Stable Diffusion <cit.>, Imagen <cit.>, etc., is mainly attributed to the combination of high-fidelity DMs with high-performance large language models (LMs). Although text-to-image DMs have shown revolutionary progress, recent studies have shown that the current state-of-the-art models often suffer from semantic misalignment problems, where the generated images do not accurately represent the intended semantic contents of the text prompts. For example, <cit.> discovered the catastrophic neglect problem, where one or more of the concepts of the prompt are neglected in generated images. Moreover, for a multi-modal inpainting task with text and mask guidance, <cit.> found that the text-to-image DMs may often fail to fill in the masked region precisely following the text prompt. Therefore, this work focuses on obtaining a harmonized pair of latent representations and text embeddings, i.e., context vectors, to generate semantically aligned images. To address this problem, we introduce a novel Bayesian framework, namely energy-based cross-attention (EBCA), which approximates maximum a posteriori probability (MAP) estimates of context vectors given observed latent representations. Inspired from the energy-based perspective of attention mechanism <cit.>, we first formulate energy functions in the latent space of the intermediate cross-attention layer of a denoising auto-encoder, i.e., cross-attention space. Based on these functions, we obtain the gradient of the log posterior of context vectors. With these gradients, we show that a nested hierarchy of energy functions can be implicitly minimized by cascading the updated contexts to the subsequent cross-attention layer. Beyond the previously mentioned benefits, the energy-based perspective of cross-attention also illuminates the process of zero-shot compositional generation due to the inherent compositionality of Energy-Based Models (EBMs). This involves the convenient integration of multiple distributions, each defined by the energy function of a specific text embedding. In practical terms, this amalgamation can be executed as a straightforward linear combination of cross-attention outputs that correspond to all selected editing prompts. We demonstrate the effectiveness of the proposed EBM framework in various text-to-image generative scenarios, including multi-concept generation, text-guided inpainting and compositional generation. The proposed method is training-free, easy to implement, and can potentially be integrated into most of the existing text-to-image DMs. § PRELIMINARIES §.§ Diffusion Model Diffusion models <cit.> aims to generate samples from the Gaussian noise by iterative denoising processes. Given clean data _0∼ p_data(_0), diffusion models define the forward sampling from p(_t|_0) as _t = √(α̅_t)_0 + √(1-α̅_t)_t, where _t∼𝒩(0, I), t∈ [0, 1]. Here, for the Denoising Diffusion Probabilistic Model (DDPM) <cit.>, the noise schedule β_t is an increasing sequence of t, with α̅_t := ∏_i=1^t α_t, α_t := 1 - β_t. The goal of diffusion model training is to obtain a neural network _θ^* that satisfies θ^* = _θE__t∼ p(_t|_0),_0∼ p_data(_0),∼𝒩(0, 𝐈)[_θ(_t, t) - ^2_2 ], so that the reverse sampling from q(_t-1|_t, _θ^*(_t, t)) is achieved by _t-1=1/√(α_t)(_t-1-α_t/√(1-α̅_t)ϵ_θ^*(_t,t))+σ_tz_t, where _t∼(0,) and σ^2 is variance which is set as β in DDPM. With iterative process, we can sample the image _0 from initial sample _T∼(0,). Since diffusion models require the iterative sampling on high dimensional space, they are computationally expansive and time consuming. To mitigate this limitation, Latent Diffusion Model (LDM) <cit.> has proposed diffusion processes on the compressed latent space using pre-trained auto-encoder. Furthermore, by introducing language model-based cross-attention to diffusion model's U-Net neural backbone <cit.>, LDM enables token-based conditioning method such as text-to-image. §.§ Energy-based Perspective of Attention Mechanism Before delving into the details, we provide some mathematical notation. Given an N-dimensional vector , its and are defined as (,β) β^-1log(∑_i = 1^N exp(v_i)), () exp( - (,1)). where v_i denotes the i-th element of . For a given a matrix , its _i() and _i() are understood as taking the corresponding operation along the i-th dimension of . So, for instance, i=1, _i() consist of column vectors that sum to 1. Energy model. Following the success of the attention mechanism, several studies have focused on establishing its theoretical foundation. One promising avenue is interpreting the attention mechanism using the Energy-Based Model (EBM) framework. This line of research begins with recent research on modern Hopfield networks <cit.>, which gradually builds up to the self-attention mechanism of the Transformer model. The Hopfield network is a dense associative memory model that aims to associate an input with its most similar pattern. Specifically, it constructs an energy function to model an energy landscape that contains basins of attraction around desired patterns. Recently, modern Hopfield networks <cit.> has introduced a new family of energy functions, which aim to improve its pattern storage capacity or make it compatible with continuous embeddings. To this end, <cit.> proposed the following energy function of a state pattern (query) ∈ℝ^d parameterized by N-stored (key) patterns = [_1 …, _N] ∈ℝ^d × N and β > 0: (; ) = 1/2^T - (^T , β). Intuitively, the first term ensures the query remains finite, while the second term measures the individual alignment of the query with every stored pattern. Based on the Concave-Convex-Procedure (CCCP) <cit.>, <cit.> derives the update rule for a state pattern and time t as follows: Define the update rule f: ^d →^d as follows: ^new = f() = (β^T ) Then, the update rule converges globally. In another word, for ^(t+1) = f(^(t)), the energy E(^(t)) → E(^*) for t →∞ and a fixed point ^*. Note that (<ref>) is equivalent to a gradient descent update to minimize (<ref>) with a step size η=1: ← - η∇_(; ) = - η( - (β^T )). Connection to the attention of the transformer. Remarkably, this implicit energy minimization is closely related to the attention mechanism as shown in <cit.>. To see this, suppose that _i, _i ∈^d is given as stored (key) and state (query) patterns, respectively. Let _K, _Q ∈^d × d_H represent linear maps for _i and _i, respectively. Then, we introduce _i = _K^T _i ∈^d_H and _i = _Q^T _i ∈^d_H. Let = (_1, …, _N)^T ∈^N × d, and = (_1, …, _S)^T ∈^S × d. We define ^T = = _K ∈^N × d_H, and = _Q ∈^S × d_H. By plugging ^T = and =_i into (<ref>) for all i, we obtain that: ^T= ^T _1(β^T) ∈^d_H × S. By taking the transpose, we obtain = _2(β^T). Let ∈^N × d_V denote the value matrix as = _K _V = _V, which will replace the outside of _2. Then,we finally obtain ^new = _2 (β^T) , which is exactly the transformer attention with β=1/√(d_H). This connection affords us the insightful theoretical ground of the attention mechanism: the update step of the attention mechanism in a Transformer layer acts as an inner-loop optimization step, minimizing an implicit energy function that is determined by queries, keys, and values. § ENERGY-BASED CROSS-ATTENTION Recall that our objective is to generate semantically correct images by harmonizing latent (U-Net) representations and context vectors within the denoising autoencoder. To achieve this, we propose a simple but effective Bayesian framework, namely energy-based cross-attention (EBCA). Specifically, we perform test-time optimization of context vectors within the cross-attention spaces of a denoising autoencoder to implicitly minimize a specially designed energy function. Note that this is a significant departure from the conventional text-guided diffusion models which have leveraged fixed context vectors embedded by pre-trained CLIP <cit.> to all cross-attention layers. §.§ Bayesian Context Update Energy function. Our focus is on the cross-attention space of a time-dependent denoising auto-encoder, utilizing the conventional U-Net neural architecture. Here, we refer to latent representations as the representations of intermediate layers in U-Net unless otherwise specified. Let L be the number of cross-attention layers. For each l-th layer at time t, we define the queries matrix _t,l∈^P_l^2 × d_l, and the keys and values matrices _l ∈^N × d_l and _l ∈^N × d_l, respectively. Here, P_l represents the spatial dimension of latent representations in the l-th layer. Given context vectors ∈^N × d_c, we map _l and _l with _K, l, _V, l∈^d_c × d_l, such that _l= _K, l and _l= _V, l. In the following, time t and layer index l are omitted in notations _t, l and _l for simplicity. The main goal is to obtain a maximum a posteriori probability (MAP) estimate of context vectors given a set of observed latent representations of queries _t, l. Motivated by the energy function in (<ref>), we first introduce a new conditional energy function w.r.t as follows: (; ) = α/2(^T) - ∑_i=1^N (_i^T, β), where _i denotes the i-th row vector of , α≥ 0, and () = ∑_i=1^N A_i,i for a given ∈^N × N. Intuitively, (_i^T, β) term takes a smooth maximum alignment between latent representations _j,j=1,⋯,P^2 and a given text embedding _i. Let j^* = max_j _j_i^T. Then, the implicit minimization of - term encourages each _i to be semantically well-aligned with a corresponding spatial token representation _j^*. In turn, this facilitates the retrieval and incorporation of semantic information encoded by _i into the spatial region of _j^*. The (^T) term in (<ref>) serves as a regularizer that constrains the energy of each context vector _i, preventing it from exploding during the maximization of (_i^T, β), thereby ensuring that no single context vector excessively dominates the forward attention path. We also propose a new -independent prior energy function () given by: () = (1/2(^T), 1 ) = log∑_i=1^N exp(1/2_i _i^T ). Instead of penalizing each norm uniformly, it primarily regularizes the smooth maximum of _i which improves the stability in implicit energy minimization. Although our energy function is built based on the energy function, in contrast to (<ref>) the proposed one is explicitly formulated for implicit energy minimization w.r.t keys and the associated context vectors , which is different from the theoretical settings in Section <ref> w.r.t <cit.>. It is worth noting that the two energy functions are designed to serve different purposes and are orthogonal in their application. MAP estimation. Based on the energy functions (<ref>) and (<ref>), the posterior distribution of can be defined by using Bayes' rule: p([] ) = p([] ) p() / p(), where (<ref>) and (<ref>) are leveraged to model the distribution p([] ) and p(), respectively. Then, in order to obtain a MAP estimation of , we approximate the posterior inference using the gradient of the log posterior. Specifically, the gradient of the log posterior can be estimated by using the facts: ∇_log p([] ) = ∇_log p([] ) + ∇_log p() = - ( ∇_( ; ) + ∇_() ). For the energy functions (<ref>) and (<ref>), the gradient of the log posterior is given by: ∇_log p([] ) = _2 (β^T ) - (α + ( (1/2(^T) ) ) ), where (·) is a vector-to-diagonal-matrix operator. Then, by using the chain rule the update rule of context vectors is derived as follows: _n+1 = _n + γ( _2 (β^T ) - (α + ( (1/2(^T ) ) ) ))_K^T, where γ > 0 is a step size. In practice, we empirically observed that the nonzero α in (<ref>) often leads to an over-penalization of contexts, which can ultimately cause some contexts to vanish. To address this, we set α=0. Moreover, we found it beneficial to assign distinct step sizes γ_ and γ_ as follows: _n+1 = _n + γ__2 (β^T )_K^T _Attention - γ_(α + ( (1/2(^T ) ) ) )_K^T_Regularization, where the first and second terms are named as attention and regularization term, respectively. Our theoretical observations offer valuable insights into implicit energy minimization by modifying the forward path of cross-attention. Inspired by these observations, we have transplanted the energy-based cross-attention layers to pre-trained text-to-image diffusion models. We first illustrate the challenges associated with adopting EBCA in a deep denoising auto-encoder and subsequently demonstrate how to overcome them in practice. If a single recurrent Transformer block were available, a single energy function would be minimized for a given cross-attention layer by recurrently passing the updated context _n+1. However, in practical applications, there are multiple layer- and time-dependent energy functions in conventional deep denoising auto-encoder, which makes it infeasible to minimize each energy function individually. To address this challenge, we implicitly minimize a nested hierarchy of energy functions in a single forward pass based on our theoretical observations. More details are followed. Algorithms. At a given time step t, we initialize the context vectors _1, t with text embeddings _CLIP obtained from a pre-trained CLIP. Then, within the n-th cross-attention layer with _n, t (1 ≤ n ≤ L), we compute the updated context vectors _n+1, t using the gradients in Theorem <ref> and (<ref>). We then cascade _n+1, t to the next (n+1)-th cross-attention layer. We do not forward the final context _L+1, t at time t to time t+1, as the distributions of _t and _t+1 can be significantly different due to the reverse step of the diffusion model. Instead, we reinitialize _1, t+1 with _CLIP. The pseudo-code for the proposed framework is provided in appendix. The sequence of energy-based cross-attention layers is designed to minimize nested energy functions. While a single-layer update in a single step may not lead to optimal convergence, the proposed layer-cascade context optimization can synergistically improve the quality of context vectors. Notably, our proposed method does not incur additional computational costs in practice since the gradient term in Theorem <ref> relies on the existing keys, queries, and attention maps in the cross-attention layer, which can be obtained for free in the forward path. §.§ Compositional Averaging of Cross-Attention In addition to the above key advantages, the cross-attention space EBMs shed new light on the zero-shot compositional generation. Recent studies <cit.> have demonstrated that the underlying distributions of EBMs can be combined to represent different compositions of concepts. For example, given a data point and independent concept vectors _̧1, …, _̧n, the posterior likelihood of given a conjunction of specific concepts is equivalent to the product of the likelihood of each individual concept as follows: p([] _̧1, …, _̧n) = ∏_i p([] _̧i) ∝ e^- ∑_i (_̧i), where each (_̧i) represent independently trained EBM with concept code _̧i. While it is an appealing solution to the controllable generation, it is notoriously difficult to train EBMs <cit.> in a way to make themselves scalable to high-resolution image generation. Instead of directly training EBMs in pixel space, we leverage the cross-attention space EBMs and the generative power of state-of-the-art pre-trained DMs to achieve high-fidelity compositional generations. More specifically, assume that we have main context vectors _1 embedded from a main prompt, e.g. "", and a set of independent editing context vectors, = {_2, …, _M}, each embedded from different editorial prompt, e.g. "", "", etc. Then, we define the keys _l, s for context s within a cross-attention layer of index l as _l, s = _s_K, l , ∀ s ∈{1, 2, …, M}. The index l will be omitted for notational simplicity. Then, for a given key _s, we consider the energy function in cross-attention space w.r.t queries : (; _s) = 1/2 (^T) - ∑_i=1^P^2(_s_i^T , β), which is directly motivated by (<ref>). We then introduce the compositional energy function , for the concept conjunction of as in (<ref>) and the updated rule as done in (<ref>): (; {_s}_s=1^M) = 1/M∑_s=1^M (; _s) = 1/2 (^T) - 1/M∑_s=1^M ∑_i=1^P^2(_s _i^T, β), ^new_TF = 1/M∑_s=1^Mα_s _2 (β_TF_TF,s^T ) _TF, where _TF, _TF,s and _TF directly follows the definition in (<ref>) and the degree and direction of s-th composition can be controlled for each concept individually by setting the scalar α_s, with α_s<0 for concept negation <cit.>. Note that this is exactly a linear combination of transformer cross-attention outputs from different contexts with β=1/√(d_H). We refer to this process as Compositional Averaging of Cross-Attention Output (CACAO). This update rule implies that the compositional energy can be minimized implicitly through modifications to the forward path of EBCA, thereby guiding the integration of semantic information conveyed by editorial prompts. Notably, this is a significant departure from the existing energy-based compositional methods <cit.>. Specifically, no training or fine-tuning is required. Instead, cross-attention outputs of the main and editorial prompts are simply obtained in parallel, averaged, and propagated to the next layer. Moreover, the introduction of EBMs in cross-attention space is orthogonal to <cit.>, which conducts compositional generation by trating a pre-trained diffusion model _θ^* itself as an implicitly parameterized EBM. § EXPERIMENTAL RESULTS To verify our claim of energy minimization via modified cross-attention, we conduct various experiments in text-guided image generation to verify semantic alignment, namely (1) multi-concept generation <cit.>, (2) text-guided image inpainting <cit.>, and (3) compositional generation <cit.> which includes real and synthetic image editing. In this work, while the first and third applications address similar challenges, we categorize them separately based on the implementation derived from the energy-based interpretation. Specifically, the Bayesian Context Update (BCU) is applied to every task, and Compositional Averaging of Cross-Attention Output (CACAO) is additionally leveraged in the third task. Note that all applications have been done without additional training. Setup. The proposed framework can be widely mounted into many state-of-the-art text-to-image DMs due to its unique functionality of context updates and cross-attention map compositions. Here, we verify the effectiveness of energy-based cross-attention with Stable Diffusion (SD) <cit.>. The SD is an LDM that is pre-trained on a subset of the large-scale image-language pair dataset, LAION-5B <cit.> followed by the subsequent fine-tuning on the LAION-aesthetic dataset. For the text embedding, we use a pre-trained CLIP <cit.> model following the Imagen <cit.>. The pre-trained SD is under the creativeML OpenRAIL license. Detailed experimental settings are provided in appendix. §.§ Analysis on Energy during the sampling We perform a comprehensive analysis on (<ref>) and (<ref>) during the forward path through the modified cross-attention, which offers insights into the energy behavior for real applications. Specifically, we examine the energy dynamics involved in the multi-concept image generation that is straightforward and could be readily applied to other applications with minimal effort. We record computed energy values along layers and sampling steps 30 times with a fixed text prompt and then display them in Figure <ref> with the generated samples, where red and green lines denote the energy involved with the Stable-Diffusion (SD) and the proposed method, respectively. For each sampling step block that is alternately shaded, the energy values are plotted through 16 cross-attention layers. Across all layers and sampling steps, the energy associated with the proposed method remains consistently lower than that of the SD. This is in line with the semantic alignment between intermediate denoised estimates and the given text prompt. In both cases of the SD and the proposed method, E(; ) decreases over sampling steps. This implies that the iterative refinement of the updated query carried over to subsequent sampling steps, resulting in a better match to the given context. Note that the proposed method even achieves lower energy with the BCU. More analyses are provided in the appendix including the ablation study. §.§ Multi-Concept Generation We empirically demonstrate that the proposed framework alleviates the catastrophic neglect and attribute binding problems defined in the existing literature <cit.>. As shown in Figures <ref> and <ref>, the BCU effectively mitigates these problems by promoting attention to all relevant concepts in the prompt. Specifically, the regularization term introduced by the prior energy prevents a single context from dominating the attention, while the attention term updates the context vectors in the direction of input queries, improving semantic alignment and facilitating the retrieval of related information. §.§ Text-guided Image Inpainting In addition, we have evaluated the efficacy of the proposed energy-based BCU on a text-guided inpainting task. Although existing state-of-the-art DMs such as DALLE and SD can be employed for inpainting by exploiting the ground-truth unmasked area <cit.>, they usually require computationally expensive fine-tuning and tailored data augmentation strategies <cit.>. In contrast, as shown in Figure <ref>, our proposed energy-based framework significantly enhances the quality of inpainting without any fine-tuning. Specifically, we incorporate the energy-based cross-attention into the Stable-Repaint (SR) and Stable-Inpaint (SI) models, which can struggle to inpaint multiple objects (e.g., and ) or unlikely combinations of foreground and background objects (e.g., Teddy bear on the Starry Night painting). In contrast, the proposed approach accurately fills in semantically relevant objects within the target mask region. §.§ Compositional Generation We demonstrate that the proposed framework improves the controllability and compositionality of DMs, which is a significant challenge for generative models. To assess this, we split the task into synthetic and real image editing. Synthetic image editing. Text-to-image DMs provide impressive results, but it is difficult to generate images perfectly aligned with user intentions <cit.>. Although modifying prompts can guide generation, it can also introduce unintended changes in the generated content. Our results, shown in Figure 5, demonstrate that the proposed method can edit image contents while maintaining the original-prompt-related identity of the generated images thanks to the BCU, which continuously harmonizes the latent representations and text prompts. While composable-diff <cit.> can generate compositions, they often fail to preserve the original properties of images, such as the color in the second row, or fail to compose the desired concept, such as the boat in the first row. Real image editing. We further demonstrate that the proposed framework can also edit real images, which is especially challenging as existing methods often dramatically alter input content or introduce unexpected variations. To do this, we integrate our framework with DDIM inversion <cit.>, which inverts images with meaningful text prompts into the domain of pre-trained diffusion models. First, we use the image captioning network BLIP <cit.> to automatically caption the interested image, following <cit.>. Next, we obtain the inverted noise latents of the input image using Diffusion Pivotal Inversion <cit.>. Then, we apply CACAO and BCU while denoising the inverted latents. The optimized unconditional textual embedding vector is also targeted for additional BCUs. The results in Figure <ref> demonstrate that our method achieves better editing performance by avoiding undesired changes. § CONCLUSION, LIMITATIONS AND SOCIETAL IMPACTS Conclusion. In this work, we formulated the cross-attention with an energy perspective and proposed the BCU, a modified cross-attention that could implicitly minimize the energy in the latent space. Furthermore, we proposed CACAO which is inspired by energy-based formulation of multiple text composition. The proposed method is versatile as shown by multiple applications, theory-grounded, easy to implement, and computationally almost free. Limitations and Societal Impacts. The framework presented in this study generates images based on user intentions, which raises concerns regarding potential misuse for creating deepfakes or other forms of disinformation. It is crucial to ensure that these methods are implemented ethically and regulated appropriately. Additionally, while the framework performs well across various tasks, it requires pre-trained deep models, rendering it challenging to apply to out-of-domain datasets, such as medical images. abbrv § PROOF OF THEOREM 1 In computing a derivative of a scalar, vector, or matrix with respect to a scalar, vector, or matrix, we should be consistent with the notation. In this paper, we follow the denominator layout convention as described in <cit.>. To make the paper self-contained, we briefly introduce the denominator layout and associate calculus. The main motivation for using the denominator layout is from the derivative with respect to the matrix. More specifically, for a given scalar c and a matrix ∈^m× n, according to the denominator layout, we have ∂ c/∂= [ ∂ c/∂ w_11 ⋯ ∂ c/∂ w_1n; ⋮ ⋱ ⋮; ∂ c/∂ w_m1 ⋯ ∂ c/∂ w_mn ]∈^m× n . Furthermore, this notation leads to the following familiar result: ∂^⊤/∂ = ∂^⊤/∂ = . Accordingly, for a given scalar c and a matrix ∈^m× n, we can show that ∂ c/∂ := (∂ c/∂()) ∈^m× n, in order to be consistent with (<ref>), where () and () refer to the vectorization operation and its reverse, respectively. Under the denominator layout notation, for given vectors ∈^m and ∈^n, the derivative of a vector with respect to a vector given by ∂/∂= [ ∂ y_1/∂ x_1 ⋯ ∂ y_n/∂ x_1; ⋮ ⋱ ⋮; ∂ y_1/∂ x_m ⋯ ∂ y_n/∂ x_m ]∈^m× n . Then, the chain rule can be specified as follows: ∂ c(())/∂ = ∂/∂∂()/∂∂ c()/∂ . Finally, the following property of the Kronecker delta product is useful throughout the paper <cit.> () =(^T⊗)() (⊗)^T =^T⊗^T Using this, we first prove the following key lemmas. For a given column vector ∈^N, we have ∂(, β)/∂ = (β) ∂(, β)/∂ =β^-1∂log∑_j=1^Nexp (β x_j)/∂ =[ exp(β x_1)/∑_j=1^Nexp(β x_j); ⋮; exp(β x_P^2)/∑_j=1^Nexp(β x_j) ] = (β) Q.E.D. Let _i denote the i-th row vector of ∈^N× d. Then, we have ∂_i_i^T/∂=2_N^i(_N^i)^T where _N^i represents a N-dimensional column vector where only the i-th entry is 1, with all other entries set to zero. First, note that the expression _i^T can be equivalently rewritten as ^T^i. Then, we have _i_i^T=(_N^i)^T^T_N^i Furthermore, using (<ref>), we have ^T_N^i=((_N^i)^T⊗_d)(^T) so that ∂^T^i/∂(^T)=(_N^i ⊗_d) where _d denotes the d× d identity matrix. Using the chain rule, we have ∂_i_i^T/∂(^T) =∂^T^i/∂(^T)∂_i_i^T/∂^T^i= 2(_N^i ⊗_d) ^T_N^i Thus, we have (∂_i_i^T/∂(^T)) =2((_N^i ⊗_d) ^T_N^i) =2 ^T_N^i(_N^i)^T where we again use (<ref>). Therefore, by taking the transpose, we have ∂_i_i^T/∂=2_N^i(_N^i)^T. Q.E.D. ∇_(_i^T, β)= _N^i((β_i^T))^T Note that _i^T=^T_N^i=^T_N^i=((_N^i)^T⊗)(^T). Hence, using (<ref>) we have ∂^T^i/∂(^T)= _N^i⊗^T Furthermore, using Lemma <ref> we have ∂(_i^T, β)/∂(^T) =∂_i^T/∂(^T)∂(_i^T, β)/∂_i^T = (_N^i⊗^T) (β_i^T) leading to the following: ∂(_i^T, β)/∂^T =(∂(_i^T, β)/∂(^T)) =((_N^i⊗^T) (β_i^T)) = ^T (β_i^T) (_N^i)^T where we again use (<ref>). Now, by taking the transpose, we can prove (<ref>). Q.E.D. ∂(^T)/∂ = 2 ∂log∑_i=1^N exp (1/2_i _i^T)/∂ = [ [()]_1 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ [()]_N ] where () denotes the sum of the diagonal element, i.e. trace of , and _i is a column matrix whose i-th element is given by _i_i^T/2, and [·]_i dentoes the i-th element. First, c=(^T)=∑_i,j K_ij^2 where K_ij denotes the (i,j)-th element. Therefore, using the denominator layout in (<ref>), it is trivial to show that ∂ c/∂=2 . Second, let us construct a column vector whose i-th element is given by x_i:=_i_i^T/2. Then, using Lemmas <ref> and <ref> and the chain rule, we have ∂log∑_i=1^N exp (1/2_i _i^T)/∂ =∑_i ∂ x_i/∂∂(, 1)/∂ x_i = ∑_i _N^i(_N^i)^T[()]_i = ∑_i _N^i_i[()]_i = [ [()]_1 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ [()]_N ] This concludes the proof. 1 For the energy functions (; )=α/2(^T) - ∑_i=1^N (_i^T, β) and E()=log∑_i=1^N exp (1/2_i _i^T), the gradient of the log posterior is given by: ∇_log p([] ) = _2 (β^T ) - ( α + ( (1/2(^T) ) ) ) , Then, by using the chain rule the update rule of context vectors is derived as follows: _n+1 = _n + γ( _2 (β^T ) - (α + ( (1/2(^T ) ) ) ))_K^T, where γ > 0 is a step size, and (·) is a vector-to-diagonal-matrix operator. Based on the Bayes' theorem, the gradient of the log posterior is derived as: ∇_log p([] ) = - ( ∇_(; ) + ∇_() ). Using Lemmas <ref>,<ref>,<ref> and <ref>, we have ∇_(; ) = α - ∑_i _N^i((β_i^T))^T = α - _2(β^T) Second, by noting that () = log∑_i=1^N exp(1/2_i _i^T), Lemma <ref> informs us ∇_() = [ [()]_1 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ [()]_N ] =( (1/2(^T)) ) where (·) is a vector-to-diagonal-matrix operator that takes N-dimensional vector as an input and returns a N × N diagonal matrix with values as main diagonal entries. Therefore, one can finally obtain: ∇_log p([] ) = _2 (β^T ) - ( α + ( (1/2(^T) ) ) ) . By using the chain rule with = _K, the update rule of context vectors is derived as in (<ref>). § PSEUDO-CODE FOR BCU AND CACAO This section provides the description of the pseudocode for the proposed Bayesian Context Update (BCU) and Compositional Averaging of Cross-Attention Output (CACAO). Algorithm <ref> outlines the cascaded context propagation across cross-attention layers within the UNet model during the sampling step t. Note that the context is reinitialized at the beginning of each sampling step. On the other hand, Algorithm <ref> details the BCU implemented in each cross-attention layer. Specifically, the proposed BCU provides a significant computational efficiency by reusing the similarity ^T, which requires computational cost 𝒪(N^2), to compute ∇_ E(; ). Consequently, there is only a small amount of additional computational overhead associated with the proposed BCU. Algorithm <ref> outlines the pseudocode for the CACAO implemented for M given contexts. For the simplicity, we exclude the BCU from the algorithm. Nontheless, the BCU and the CACAO could be leveraged together. § EXPERIMENTAL SETUPS In this section, we describe detailed experimental setups for three applications including baseline method, hyper-parameter of the proposed method, and dataset if it is the case. Code: <https://github.com/EnergyAttention/Energy-Based-CrossAttention>. §.§ Common experimental setup We mainly leverage pre-trained Stable Diffusion v1-5 (except Table <ref>: v1-4) which is provided by diffusers, a Python library that offers various Stable Diffusion pipelines with pre-trained models. All images are sampled for 50 steps via PNDM sampler <cit.> using NVIDIA RTX 2080Ti. In every experiment, we set the parameter α in Equation (<ref>) to zero, focusing solely on controlling the values of γ_attn and γ_reg. BCU is applied to every task, and CACAO is additionally employed in <ref>. Different learning rate for each token It is worth noting that the γ_attn and γ_reg could be expressed as vectors. In other words, if the context C∈R^N× d_c is given, γ_attn and γ_reg are N-dimensional vectors. Hence, we have the flexibility to adjust the learning rate γ_{·}, allowing us to increase or decrease the impact of certain tokens based on the user's intent. Unless otherwise noted, γ_attn and γ_reg is set to a constant for each text token. Learning rate scheduling Since the proposed BCU is leveraged for the diffusion model, one can readily introduce scheduling strategies for γ_attn and γ_reg along the sampling step t. We implement multiple variants such as `constant', `step', and `exponential decay' as follows. [constant] γ(t) = γ_0 [step] γ(t) = γ_0 ·ReLu(t-τ) [exp-decay] γ(t) = γ_0·λ^t where γ_0 is the initial value, ReLu(x)=0 if x≤0, otherwise 1, τ denotes the temporal threshold, and λ denotes the decay ratio. Unless stated otherwise, the scheduling strategy is set to the `constant'. §.§ Multi-concept image generation We compared the performance of the proposed method with Structured Diffusion <cit.> which does not require additional training as our method. We leveraged the open-sourced official implementation [<https://github.com/weixi-feng/Structured-Diffusion-Guidance>]. For the proposed method, we set the γ_attn and γ_reg differently for each sample within [1e-2, 1.5e-2, 2e-2]. As shown in the following ablation studies <ref>, large γ_attn tends to generate saturated images while large γ_reg results in mixed/vanished contents. We found that using different learning rates for each context token is useful for multi-concept generation, especially when a single concept tends to dominate with a constant learning rate. For example, given the main prompt , we set the γ_attn for the to 3e-2, while γ_attn is set to 1.5e-2 for other tokens. We have observed that doubling the γ_attn for a text token to be emphasized is sufficient to achieve balanced multi-concept image generation for most cases. §.§ Text-guided image inpainting Additionally, we conducted a performance comparison between our proposed method and two alternative approaches: (a) Stable Inpaint[<https://huggingface.co/runwayml/stable-diffusion-inpainting>], which fine-tunes the weights of Stable Diffusion through inpainting training, and (b) Stable Repaint[<https://github.com/huggingface/diffusers/tree/main/examples/community##stable-diffusion-repaint>], which leverages the work of Lugmayr et al. <cit.> on the latent space of Stable Diffusion for the inpainting task. In the case of Stable Repaint, the mask is downsized and transferred into the latent space. We applied the Bayesian Context Update (BCU) technique to both methods, resulting in improved results compared to their respective baselines. Masked BCU. To further enhance the performance for the inpainting task, we introduce the concept of masked Bayesian Context Update (masked BCU). Specifically, let M ∈ℝ^P_l^2 × P_l^2 represent a diagonal matrix where the main diagonal values are derived from the downsampled inpainting mask for the l-th cross-attention layer, with an output spatial size of P_l^2. In Equation (<ref>), we modify the attention term (<ref>) by incorporating the downsampled mask, effectively covering the query matrix as follows: _n+1 = _n + γ( _2 (β^T ) - (α + ( (1/2(^T ) ) ) ))_K^T. As evident in Equation (<ref>), the attention term updates the context vectors, aligning _i towards _j, j=1, …, P_l^2, while considering the alignment strength between each _j and _i. However, in the inpainting task, we have prior knowledge that the context vectors should be most aligned with the semantically relevant masked regions. Therefore, we mask out unrelated background spatial representations, allowing for the context vectors to be updated with a specific focus on the masked regions. This approach facilitates the incorporation of semantic information encoded by _i specifically into the spatial mask regions. In our proposed method, we set different values for γ_attn and γ_reg for each sample, selected from the set [1e-2, 1.5e-2, 2e-2, 2.5e-2], to account for variations in the input samples. §.§ Image editing via compositional generation We present empirical evidence demonstrating the effectiveness of our energy-based framework for compositional synthetic and real-image editing. The Bayesian Context Update (BCU) technique can be readily applied to both the main context vector (_1 in Section 3.2, s=1) and editorial context vectors (_s>1). Each BCU operation influences the attention maps used in Compositional Averaging of Cross-Attention Output (CACAO), enhancing the conveyance of semantic information associated with each context. Note that α_s in  (<ref>) represents the degree of influence of the s-th concept in the composition. In practice, we fix α_1=1 for the main context, while α_s>1 is tuned within the range of (0.5, 1.0). Let γ_attn, s and γ_reg, s denote the step sizes for BCU of the s-th context vector. If the editing process involves changing the identity of the original image (e.g., transforming a "cat" into a "dog"), we set both γ_attn, 1 and γ_reg, 1 to zero. Otherwise, if the editing maintains the original identity, we choose values for γ_attn, 1 and γ_reg, 1 from the range of (5e-4, 1e-3), similar to γ_attn, (s>1) and γ_reg, (s>1). All hyperparameters, including α_s and γ_s, are fixed during the quantitative evaluation process (more details in Section <ref> and Table <ref>). To ensure consistent results, we maintained a fixed random seed for both real and synthetic image editing. For real image editing, we employed null-text pivotal inversion <cit.> to obtain the initial noise vector. During the reverse diffusion process in Sections <ref> and <ref>, we kept γ fixed as a constant value. However, for compositional generation, we utilized step scheduling (Equation <ref>) for γ_s and α_s. After converting the initial noise vector for real images or using a fixed random seed for synthetic images, BCU and CACAO are applied after a threshold time τ_s > 0 for the s-th editorial context. This scheduling strategy helps to preserve the overall structure of generated images during the editing process. In our observations, a value of τ_s ∈ [10, 25] generally produces satisfactory results, considering a total number of reverse steps set to 50. However, one can increase or decrease τ_s for more aggressive or conservative editing, respectively. The exemplary real images presented in Figures 5 and 6 of the main paper were sampled from datasets such as FFHQ <cit.>, AFHQ <cit.>, and ImageNet <cit.>. For a detailed quantitative analysis, please refer to Section <ref>. § QUANTITATIVE COMPARISON In this section, we conducted a comparative analysis of the proposed framework against several state-of-the-art diffusion-based image editing methods <cit.>, following the experimental setup of <cit.>. To ensure a fair comparison, all methods utilize the pre-trained Stable Diffusion v1-4, employ the PNDM sampler with an equal number of sampling steps, and adopt the same classifier-free guidance scale. §.§ Baseline Methods In addition to the Plug-and-Play method discussed in the main paper, we include the following baselines for comprehensive quantitative comparison: SDEdit <cit.> + word swap. This method introduces the Gaussian noise of an intermediate timestep and progressively denoises images using a new textual prompt, where the source word (e.g., ) is replaced with the target word (e.g., ). Prompt-to-prompt (P2P) <cit.>. P2P edits generated images by leveraging explicit attention maps from a source image. The source attention maps M_t are used to inject, re-weight, or override the target maps based on the desired editing operation. These original maps act as hard constraints for the edited images. DDIM + word swap <cit.>. This method applies null-text inversion to real input images, achieving high-fidelity reconstruction. DDIM sampling is then performed using inverted noise vectors and an edited prompt generated by swapping the source word with the target. pix2pix-zero <cit.>. pix2pix-zero first derives a text embedding direction vector c_edit from the source to the target by using a large bank of diverse sentences generated from a state-of-the-art sentence generator, such as GPT-3 <cit.>. Inverted noise vectors are denoised with the edited text embedding, c + c_edit, and cross-attention guidance to preserve consensus. §.§ Dataset For our quantitative evaluations, we focus on three image-to-image translation tasks: (1) translating cats to dogs (cat → dog), (2) translating horses to zebras (horse → zebra), and (3) adding glasses to cat input images (cat → cat with glasses). Following the data collection protocol of <cit.>, we retrieve 250 relevant cat images and 213 horse images from the LAION 5B dataset <cit.> using CLIP embeddings of the source text description. We select images with a high CLIP similarity to the source word for each task. §.§ Metrics Motivated by <cit.>, we measure CLIP Accuracy and DINO-ViT structure distance. Specifically, (a) CLIP Acc represents whether the targeted semantic contents are well reflected in the generated images. It calculates the percentage of instances where the edited image has a higher similarity to the target text, as measured by CLIP, than to the original source text <cit.>. On the other hand, (b) structure distance <cit.> measures whether the overall structure of the input image is well preserved. It is defined as the difference in self-similarity of the keys extracted from the attention module at the deepest DINO-ViT <cit.> layer. §.§ Details The main context vector _main is encoded given a main prompt automatically generated by BLIP <cit.>. In addition, the editorial context vectors _src and _tgt are encoded given the text descriptions of the source and target concept, i.e. source and target prompt. For example, for a cat → dog task (cat → cat w/ glasses), the source prompt is (), and the target prompt is (). Then we apply BCU and CACAO based on the obtained context vectors. Please refer to Table <ref> for the hyperparameter configurations. §.§ Results Table <ref> shows that the proposed energy-based framework gets a high CLIP-Acc while having low Structure Dist. It implies that the proposed framework can perform the best edit while still retaining the structure of the original input image. This is a remarkable result considering that the proposed framework is not specially designed for the real-image editing task. Moreover, the proposed framework does not rely on the large bank of prompts and editing vector c_edit <cit.> which can be easily incorporated into our method. While DDIM + word swap records remarkably high CLIP-Acc in horse → zebra task, Figure <ref> and <ref> show that such improvements are based on unintended changes in the overall structure. Table <ref> summarizes the hyperparameter settings for each task. Examples of results are presented in Figure <ref> and <ref>. § ABLATION STUDY AND MORE RESULTS Attention and regularization terms. To access the degree of performance improvement attained by the proposed BCU, we conducted an ablation study for the attention and the regularization terms by regulating γ_attn and γ_reg for the text-guided image inpainting (Figure <ref>) and the multi-concept image generation (Figure <ref>). From the Figure <ref>, we can observe that the desired content is generated when proper range of γ_attn and γ_reg are given. Specifically, once γ_reg is set to a valid value, the BCU consistently generate a with various γ_attn, otherwise it generates background or imperfect objects. This result emphasizes the role of the introduced prior energy (). Furthermore, the γ_attn also affects to the context alignment of the generated sample (for instance γ_attn=0.025 and γ_reg=0.02), which highlights the importance of the introduced conditional energy function (; ). The same evidences could be found in the first row in Figure <ref> which are the multi-concept image generation examples. Synergy between BCU and CACAO. While both BCU and CACAO are designed from the common energy-based perspective, each operation is originated from different energy functions (; ) and (; {_s}_s=1^M), respectively. This fact suggests the synergistic energy minimization by combining the BCU and CACAO, which could further improve the text-conditional image generation. To investigate this further, we conducted an ablation study using a real image editing application. Specifically, we compared the editing performance when solely utilizing CACAO and when combining BCU with CACAO. The second row in Figure <ref> is the result of the ablation study that shows fully-compatibility of the BCU and CACAO. Importantly, the incorporation of the BCU improves the quality of the generated images. While the CACAO alone effectively captures the context of the given editing concept, the addition of BCU enhances the fine-grained details in the generated outputs. Importance of concept negation. Remark that a negative α_s in (<ref>) denotes the negation of given editing prompt. We empirically observed that the concept negation may significantly contribute to the performance of compositional generation. Specifically, for the image-to-image translation task in Table <ref>, we apply both positive and negative guidance with the target (e.g. ) and source (e.g. ) concepts, respectively, following the degree of guidance denoted in Table <ref>. The third row in Figure <ref> shows the impacts of source concept negation in the image-to-image translation task. While the positive guidance alone may fail to remove the source-concept-related features, e.g. eyes of the , the negative guidance removes such conflicting existing attributes. This implies that the proposed framework enables useful arithmetic of multiple concepts for both real and synthetic image editing. Prior energy and α. While α/2(^T) in (<ref>) penalizes norm of each context vectors uniformly, the proposed prior energy function () adaptively regularizes the smooth maximum of _i. Intuitively, adaptive penalization prevents the excessive suppression of context vectors, potentially resulting in images that are more semantically aligned with a given context. To demonstrate the effectiveness of adaptive penalization in the prior energy function, we conducted a multi-concept image generation task with varying α in (<ref>) from 0 to 1, while fixing other hyperparameters. Figure <ref> illustrates the gradual disappearance of salient contextual elements in the generated images depending on the change of α. Specifically, the crown is the first to diminish, followed by subsequent context elements, with the lion being the last to vanish with α=1. This result highlights the validity of the adaptive penalization for the context vectors which stems from the prior energy function.
http://arxiv.org/abs/2306.06897v1
20230612070359
Fisher information as general metrics of quantum synchronization
[ "Yuan Shen", "Hong Yi Soh", "Leong-Chuan Kwek", "Weijun Fan" ]
quant-ph
[ "quant-ph" ]
School of Electrical and Electronic Engineering, Nanyang Technological University, Block S2.1, 50 Nanyang Avenue, Singapore 639798 National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616 [email protected] School of Electrical and Electronic Engineering, Nanyang Technological University, Block S2.1, 50 Nanyang Avenue, Singapore 639798 National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616 Centre for Quantum Technologies, National University of Singapore 117543, Singapore MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit, UMI 3654, Singapore [email protected] School of Electrical and Electronic Engineering, Nanyang Technological University, Block S2.1, 50 Nanyang Avenue, Singapore 639798 #1#1 Quantum synchronization has emerged as a crucial phenomenon in quantum nonlinear dynamics with potential applications in quantum information processing. Multiple measures for quantifying quantum synchronization exist. However, there is currently no widely agreed metric that is universally adopted. In this paper, we propose using classical and quantum Fisher information (FI) as alternative metrics to detect and measure quantum synchronization. We establish the connection between FI and quantum synchronization, demonstrating that both classical and quantum FI can be deployed as more general indicators of quantum phase synchronization, in some regimes where all other existing measures fail to provide reliable results. We show advantages in FI-based measures, especially in 2-to-1 synchronization. Furthermore, we analyze the impact of noise on the synchronization measures, revealing the robustness and susceptibility of each method in the presence of dissipation and decoherence. Our results open up new avenues for understanding and exploiting quantum synchronization. Fisher information as general metrics of quantum synchronization Weijun Fan July 31, 2023 ================================================================ § INTRODUCTION Synchronization is an emergent dynamic process that explains numerous phenomena such as, for instance, the flashing of fireflies (Photinus carolinus) in tandem <cit.>, the clicking of pacemakers <cit.>, and the unusual sideward swaying of the Millenium bridge in London <cit.>. A key feature of synchronization is the existence of self-sustaining oscillators coupled to each other or to a driven oscillator. Synchronization has yielded many interesting mechanisms for complex systems: limit cycles, amplitude death, oscillation death, and so forth. Synchronization is also intimately connected to chaos theory, where one speaks of the synchronization of chaotic oscillators. A path less well trampled and studied revolves the synchronization oscillators with co-rotating and counter-rotating orbits in phase space <cit.>. Such phenomenon has also been known as mixed synchronization in classical literature. In recent years, there has been extensive work on quantum versions of classical synchronization <cit.>. Instead of the classical phase space, one investigates the Wigner function and probes into the presence of an Arnold-like tongue. For more than one oscillator, measures have been devised to detect the presence of quantum synchronization <cit.>. As in the classical case, we know less about quantum mixed synchronization. In the classification of measures for continuous variable quantum system, the authors have briefly mentioned mixed synchronization for two oscillators with opposite momenta and its deep connection to Einstein-Podolsky-Rosen pairs<cit.>. Moreover, in the quantum regime, it is known that a limit-cycle oscillator with a squeezing Hamiltonian can undergo a bifurcation where the Wigner function splits into two symmetrical peaks <cit.>. In Ref. <cit.>, the authors have investigated two quantum oscillators that display peak synchronization at two different phases, with a phase of π apart. In such cases, the system can be regarded as in-phase synchronization with the drive, but the phase locking happens at two distinct phases. We refer to this phenomenon as 2-to-1 synchronization, and it is similar to the case described as mixed synchronization in Weiss et al.  <cit.>. Fisher information has been used as a measure of the ability to estimate an unknown parameter or as a measure of the state of disorder of a system <cit.>. The quantum Fisher information (QFI) serves as a crucial measure in quantum parameter estimation, providing insights into the precision with which a quantum system can estimate an unknown parameter. In this paper, we look at various measures for quantum synchronization of an externally driven oscillator and explore the possibility of introducing Fisher information to define quantum synchronization, especially for the general case of n-to-1 quantum synchronization. This paper is organized as follows: in Section <ref>, we describe the driven quantum Stuart-Landau oscillator and we discuss various possible measures of synchronization. In Section <ref>, we study the behavior of the various measures of synchronization and discuss 2-to-1 synchronization in Section <ref>, the case where a squeezing term is added to the oscillator. There are different possible noises in such system, we investigate the effects of noise in Section <ref>. In section <ref>, we compare the correlations between the different measures. Finally, in section <ref>, we investigate the asymmetric case of 2-to-1 synchronization and make some concluding remarks in section <ref>. § OSCILLATOR MODEL AND SYNCHRONIZATION MEASURES We study the quantum van der Pol oscillator (also known as the quantum Stuart-Landau oscillator <cit.>) subjected to both single photon drive and two photon squeezing drive. The master equation in the rotating frame of the drive gives (with ħ = 1): ρ̇= -i[Ĥ,ρ] + γ_1 𝒟[a^†]ρ +γ_2𝒟[a^2]ρ +γ_3𝒟[a]ρ Ĥ = Δ a^† a + iE(a-a^†) + i η(a^† 2 e^2iφ - a^2 e^-2iφ) , where 𝒟[L]ρ = Lρ L^† - 1/2(L^† L ρ + ρ L^† L), γ represents the rate of decay, with γ_1, γ_2 and γ_3 corresponding to negative damping, nonlinear damping and linear damping respectively. Δ = ω_0-ω_d is the amount of detuning between the frequency of the drive, ω_d, and the frequency of the oscillator, ω_0. E is the amplitude of the harmonic drive, with a and a^† being the annihilation and creation operators. η is the squeezing parameter, with φ representing the phase of squeezing. In this paper, we focus on the measures of quantum phase synchronization in an externally driven oscillator. As a measure of phase synchronization, the phase coherence is frequently used in the literature and, defined as <cit.> S_pcoh = [aρ]/√([a^† aρ]), where |S| measures the degree of phase coherence with a range of 0≤ |S|≤ 1. Another appropriate simple measure is based on the relative phase distribution <cit.>: S_peak = 2π [P(Φ)]-1, where the phase distribution is defined by P(Φ)=(1/2π)⟨Φ|ρ|Φ⟩ with |Φ⟩=∑_n=0^∞e^inΦ|n⟩. S_peak represents the maximum value of P(Φ) compared to a uniform distribution. This measure is valuable for detecting synchronization because it is exclusively nonzero when P(Φ) deviates from a flat distribution. It is well known that the phase operator is not well-defined in quantum theory. However, most quantum harmonic oscillators are populated up to some finite levels, and we can resort to the Pegg-Barnett phase operator. In Ref. <cit.>, the mean resultant length (MRL), which incorporates the Pegg-Barnett operator, has been proposed as a measure of synchronization. It arises from the study of circular statistics <cit.> and is initially developed for 1-to-1 synchronization. However, it can be generalized to measure n-to-1 synchronization. The n-th order mean resultant length (MRL^(n)) of a circular distribution is given by MRL^(n)=√(⟨sin nϕ⟩^2 + ⟨cos n ϕ⟩^2) = |⟨ e^i nϕ⟩|. This measure is capable of capturing n-to-1 synchronization, which exhibits multiple peaks in the phase distribution P(Φ) and fixed-points in the quasi-probability phase-space distribution, e.g. Wigner function. Fisher information proves to be an important tool for determining classical synchronization in a system of Kuramoto oscillators <cit.>. It has been mooted as a good measure for phase drift in clock synchronization, both classical and quantum <cit.>. It is also a useful measure in classical signal processing <cit.>, being intimately related to the Cramer-Rao bound. Motivated by these works, we propose the quantum Fisher information (QFI) as a measure for quantum phase synchronization: QFI=ℱ_Q[ρ,A] = 2 ∑_k,l(λ_k-λ_l)^2/(λ_k+λ_l) |⟨ k|A|l⟩ |^2 , where λ_k,l and |k,l⟩ are the eigenvalues and eigenvectors of the steady-state ρ = ρ_ss. We use A = a^† a to measure the phase uncertainty in the steady-state. Phase synchronization is closely related to the phase distribution P(Φ), which is a classical probability distribution. Therefore, it make sense to directly inspect this classical distribution to obtain information about synchronization. We propose another measure of phase synchronization using classical Fisher information (CFI). This new measure is defined by the classical Fisher information of the phase distribution P(Φ): CFI = E[(∂/∂Φlog P(Φ))^2], It is important to note that this CFI is different from the conventional Fisher information, which is directly calculated from the density matrix as: F(X̂|θ)=∑_x 1/p(x|θ)(∂ p(x|θ)/∂θ)^2, where p(x|θ) is the probability of observing outcome x when measuring observable X̂ <cit.>. Classical and quantum Fisher information and S_peak reads 0 for unsynchronized states, but is unbounded for highly synchronized states, whereas phase coherence and MRL^(n) is bounded between 0 and 1. Two advantages of FI-based measures over phase coherence can be observed: Firstly, Fisher information appears to be more sensitive to highly synchronized states, while exhibiting less sensitivity at the other extreme. However, in most cases, our primary interest lies in the highly synchronized states. Secondly, FI-based measures are more general metrics of synchronization. Measures such as phase coherence face limitations in detecting synchronization of squeezed states or, more generally, Wigner functions with multiple peaks— n-to-1 synchronization, see Fig. <ref> as an example. In contrast, FI-based measures are capable of detecting synchronization in such instances. As a measure of synchronization, we find that FI-based measures are not only comparable to existing measures for normal cases of 1-to-1 synchronization, it is also more appropriate for the measurement of 2-to-1 synchronization. Measuring QFI in experiments can be challenging due to its reliance on the full quantum state of the system. However, there are various strategies that have been developed to estimate QFI experimentally, such as randomized measurement <cit.>. Recently there has been some work <cit.> to relate quantum synchronization to quantum geometric phase <cit.>. In this work, they showed that the geometric phase for the quantum Stuart-Landau oscillator under driven pump exhibits an Arnold tongue-like structure, somewhat similar to the Arnold tongue in quantum synchronization as measured by the shifted phase distribution of the Q function. Also, for two oscillators, it is sometimes useful to measure the quantum mutual information <cit.>. § 1-TO-1 SYNCHRONIZATION We first study the scenario of a coherently driven oscillator without squeezing drive(by simply set η=0). When only the coherent driving is present, there will be only one preferred phase (namely 'fixed point') to synchronize to and the phase distribution P(Φ) has only one peak, as shown by the first row in Fig. <ref>, whose position indicates the relative phase between the oscillator and drive. With increasing amplitude of the driving, the quantum phase synchronization between the oscillator and drive improves, and so does the values of the synchronization measures. This indicates a monotonic behavior in the measure. We show that all measures qualitatively agree in Fig. <ref> where synchronization measures are plotted against coherent driving amplitude E. Therefore, these are all valid measures to capture 1-to-1 quantum phase synchronization, and their correlations are close to unity, as shown in the later section. Take note that in Fig. <ref> the unbounded and bounded measures are plotted separately. In Fig. <ref>, the synchronization measures are compared across different nonlinear damping ratios γ_2/γ_1, where this ratio directly controls the radius of the limit cycle and mean photon number in the oscillator. Conventionally, the oscillator is regarded in 'semi-classical' regime when γ_2/γ_1≈ 1, and 'quantum' regime when γ_2/γ_1≫ 1. We can see that these measures remain valid for different regimes. A driven oscillator with a smaller radius (i.e. larger γ_2/γ_1) is more prone to lose synchronization by phase diffusion and quantum noise <cit.>. Comparing two columns of Fig. <ref>, the values of a synchronization measure are higher in the classical regime, as expected. Note that in Fig. <ref> right column, the value of CFI surpass QFI at certain driving amplitude E. As we have explained previously, the CFI we proposed in this paper is not the direct classical counter-part of QFI. Therefore, this is not a violation of the property that QFI should be the supremum of the CFI over all observables. More insights can be developed in deep quantum regime (γ_2 →∞), where the analytical solutions to all these measures can be obtained. By using the 3× 3 density matrix ansatz proposed in <cit.>, the analytical equation for MRL^(1), QFI and phase coherence S_pcoh are obtained as (with Δ=0,γ_1=1,γ_3=0): lim_γ_2 →∞MRL^(1)=2E/9+8E^2, lim_γ_2 →∞QFI = 4 |2E/9+8E^2|^2, lim_γ_2 →∞S_pcoh = 2E/√((8E^2+9)(4E^2+3)). Subsequently, the phase distribution P(Φ) can be obtained: lim_γ_2 →∞ P(Φ)=1/2π[1-4E/9+8E^2cos(Φ)] After deriving the phase distribution P(Φ), the peak of phase distribution S_peak and CFI can be easily obtained as lim_γ_2 →∞S_peak = 4E/9+8E^2 , lim_γ_2 →∞CFI = 4A_0 + A_1 E^2 +A_2 E^4 + A_3 E^6 +A_4 E^8/λ (9+8E^2)(λ-9-8E^2)^2 , where A_0 = 729(λ-9), A_1 = 108(17λ -201), A_2 = 544(3λ - 52), A_3 = 256(2λ - 67), A_4 = -4096, λ = √((9+4E+8E^2)(9-4E+8E^2)). These solutions are valid when driving amplitude E<<1 (see Appendix A), beyond which the density matrix ansatz breaks down. § SQUEEZING ENHANCES 2-TO-1 SYNCHRONIZATION In this section, we show squeezing drive can create and enhance quantum 2-to-1 synchronization, i.e. synchronization with two distinct fixed points in phase space, and the phase distribution P(Φ) has two distinct peaks. However, as mentioned previously, some synchronization measures are not suitable for measuring such type of synchronization. As shown in Fig. <ref> left column, increasing squeezing sharpens the two peaks in the phase distribution and thus improves mixed synchronization. Here, we need to consider the following question: Does E=0. i.e. no drive, makes sense for synchronization? We can always regard the squeezing term as a drive. When squeezing is present without coherent drive, the synchronization can be regarded as between the oscillator and the squeezing drive. This is also considered in Ref. <cit.> where they did for frequency entrainment (the frequencies of the oscillator and external drive converge). Note that phase coherence S_pcoh and MRL^(1) is zero when only squeezing is present, which is expected, as these measures reflect the first off-diagonal elements in the density matrix. On the other hand, S_peak and MRL^(2) scale almost linearly with squeezing. 2-to-1 synchronization can be created out of 1-to-1 synchronization. This is shown in the right column of Fig. <ref>, where in addition to squeezing, a coherent drive with amplitude E=0.5 is present. This coherent drive creates a single peak when squeezing is off or small. When the squeezing is tuned up, the single peak splits into two under pitchfork bifurcation, so does the corresponding Wigner functions <cit.>. In this scenario, the two measures (phase coherence S_pcoh and MRL^(1)) which are only capable of measuring 1-to-1 synchronization decrease and appears to change almost linearly with increasing squeezing parameter. MRL^(2) is a measure dedicated to 2-to-1 synchronization, therefore it is unsurprising that it only provides partial information when single peak is present. This explains why MRL^(2) drops to zero at small η and increases linearly afterwards. The measurement of S_peak lacks the ability to differentiate between two types of synchronization. Consequently, only the classical and quantum Fisher information measures exhibit a monotonic relationship with respect to squeezing. § EFFECT OF NOISE In this section, we investigate and compare the effect of different noise across these measures. We consider two types of noise, namely single photon dissipation and white noise. The single photon dissipation process is implemented by the Lindblad dissipator proportional to γ_3 in the master equation (<ref>). In Fig. <ref>, all six measures are captured in the surface plots with respect to the single photon dissipation γ_3 and coherent driving amplitude E. It is known that single photon dissipation can be beneficial for 1-to-1 synchronization in coherently driven oscillators <cit.>, which is reflected in Fig. <ref> among all measures consistently. Surprisingly, this noise-induced synchronization boost is absent in 2-to-1 synchronization, as shown in Fig. <ref>, in which the squeezing η is increasing instead of driving amplitude E. Again, the phase coherence and MRL^(1) remain 0 for the same reason explained in the previous section. To introduce white noise into the density matrix, we define noise parameter p∈ [0,1], thus the noisy steady-state density matrix is defined as: ρ_noisy = (1-p) ρ_ss + p I/N_dim , where ρ_ss is the noise-less steady-state density matrix and I is the identity matrix with dimension N_dim. After introducing white noise, it is expected for all measures to degrade with increasing p. Interestingly, phase coherence turns out to be most sensitive to white noise, as shown in Fig. <ref>, where there is a bigger drop in the measure as a function of noise p compared to other measures. § CORRELATIONS BETWEEN MEASURES In this section, a correlation analysis was performed to investigate to what extent the different measures of quantum synchronization carry independent and non-redundant information. We calculate the Pearson correlation between the values of different measures, defined as 𝒞 = (X,Y)/σ_X σ_Y, with (X,Y) being the covariance between two synchronization measures and σ the standard deviation. In the case of 1-to-1 synchronization, i.e. single peak, high Pearson correlations are observed across all the measures, as shown in Fig.9. Whereas in the case of 2-to-1 synchronization, it is obvious that phase coherence S_pcoh, S_peak and MRL^(1) are ill-suited measures, as they are negatively related to the other three proper measures. From both plots we can tell the connections between these measures: CFI, QFI and MRL^(2) are highly correlated in their response to the driving. On the other hand, phase coherence S_pcoh and MRL^(1) exhibit a strong connection, as they are both related to the first off-diagonal coherences. § ASYMMETRICAL SYNCHRONIZATION So far we have discussed cases when the two peaks in phase distribution are symmetrical (i.e. the peaks have identical amplitude). To complete the whole picture, in this section we discuss the situation when the two peaks are distorted and asymmetrical. This asymmetrical phase distribution can be observed when both coherent drive and squeezing are present with a difference of phase, as shown in Fig. <ref>. Interestingly, both FI-based measures are shown to be insensitive to the change of symmetry, by varying the phase of squeezing φ in Fig.<ref>. Meanwhile, all the other measures have great dependence on the phase of squeezing φ. This is another convenient trait of FI-based measures of being tolerant to phase mismatch. As the phase of squeezing is usually determined by the specific experiment setups, such as the properties of cavity in cQED platform <cit.> and nonlinear crystals in optical platform <cit.>. § CONCLUDING REMARKS In conclusion, this research provides a comprehensive analysis of quantum phase synchronization measures. Our work proposes a novel approach to measure the degree of synchronization by deploying classical and quantum Fisher information. Significantly, both measures demonstrate success in characterizing both the 1-to-1 and 2-to-1 synchronization regimes, where other existing methods fail to yield reliable results in one or another. Our comparative study of the classical and quantum Fisher information measures with existing measures highlights the advantages and limitations of each method. Our study offers valuable guidance for future investigations and practical implementations. Our analysis of the impact of noise on the synchronization measures reveals the robustness and susceptibility of each method in the presence of decoherence. Furthermore, the correlations between these measures provide insight into the similarities and differences between different measures of quantum synchronization. Our findings contribute significantly to the characterization of quantum phase synchronization, particularly in the 2-to-1 synchronization regime. These results pave the way for further research in the field, such as the development of more efficient and robust quantum communication and computing protocols. Future work could explore other synchronization regimes, investigate the impact of various types of noise, and assess potential applications of our proposed measures in real-world quantum systems. § ACKNOWLEDGMENTS Numerical simulations are performed using the QuTiP numerical toolbox <cit.>. YS and WJF would like to acknowledge the support from NRF-CRP19-2017-01, National Research Foundation, Singapore. LCK are grateful to the National Research Foundation, Singapore and the Ministry of Education, Singapore for financial support. § ANALYTICAL SOLUTIONS USING DENSITY MATRIX ANSATZ In deep quantum regime (γ_2→∞), the 3×3 density matrix ansatz in noiseless limit (γ_3=0) is given in the Fock basis <cit.>: ρ = [ ρ_00 ρ_01 0; ρ_10 ρ_11 0; 0 0 ρ_22 ] , with ρ_00 = γ_2(12E^2+18))/12E^2+9+3γ_2(15+8E^2), ρ_11 = γ_2(12E^2+9))/12E^2+9+3γ_2(15+8E^2), ρ_22 = 12E^2+9/12E^2+9+3γ_2(15+8E^2), ρ_01 = ρ_10^* =6i γ_2 E/12E^2+9+3γ_2(15+8E^2). This amounts to restricting the number of excitations to 2, and neglecting all coherences involving the state |2⟩. The higher order coherences are dropped on the grounds that they can be seen to be small in exact numerical simulations, and dropping them makes analytical calculations much easier and more insightful. In Fig. <ref> we compare the analytical solutions to the numerical simulation, revealing that the validity of the solutions lies in the range when coherent driving is small (E≪ 1). With larger driving E, the oscillator will be excited to higher levels beyond the assumption of the 3×3 ansatz. unsrt
http://arxiv.org/abs/2306.09123v1
20230615133110
Kobayashi-Ochiai's finiteness theorem for orbifold pairs of general type
[ "Finn Bartsch", "Ariyan Javanpeykar" ]
math.AG
[ "math.AG", "math.CV" ]
Classification of NMS-flows with unique twisted saddle orbit on orientable 4-manifolds [ ====================================================================================== empty Kobayashi–Ochiai proved that the set of dominant maps from a fixed variety to a fixed variety of general type is finite. We prove the natural extension of their finiteness theorem to Campana's orbifold pairs. § INTRODUCTION In <cit.> Kobayashi and Ochiai proved a higher-dimensional generalization of the finiteness theorem of De Franchis for compact Riemann surfaces. Namely, for X and Y smooth projective varieties over ℂ with X of general type, the set of dominant rational maps Y X is finite. In this paper we prove a generalization of the classical finiteness theorem of Kobayashi–Ochiai for dominant rational maps in the setting of Campana's orbifold maps (Theorem <ref>). The notion of orbifold pairs (also referred to as C-pairs <cit.>) was introduced in <cit.> and has already been shown to be fruitful in, for example, the resolution of Viehweg's hyperbolicity conjecture <cit.>. Let k be an algebraically closed field. A variety (over k) is an integral finite type separated scheme over k. A ℚ-orbifold (over k) (X, Δ) is a variety X together with a ℚ-Weil divisor Δ on X such that all coefficients of Δ are in [0,1]. If Δ = ∑_i ν_i Δ_i is the decomposition of Δ into prime divisors, we say that m(Δ_i) := (1-ν_i)^-1 is the multiplicity of Δ_i in Δ. If all multiplicities of a ℚ-orbifold are in ℤ∪{∞}, we say that (X,Δ) is a ℤ-orbifold or simply an orbifold. A ℚ-orbifold (X, Δ) is normal if the underlying variety X is normal. Moreover, a ℚ-orbifold (X, Δ) is smooth (over k) if the underlying variety X is smooth and the support of the orbifold divisor Δ is a divisor with strict normal crossings. This means that every component of Δ is smooth and that étale locally around any point of X, the divisor Δ is given by an equation of the form x_1⋯ x_n=0 for some n ≤ X. Let (X, Δ_X) be a normal ℚ-orbifold and (Y, Δ_Y) be a ℚ-orbifold such that Y is locally factorial. In this case, we define a morphism of ℚ-orbifolds f (X, Δ_X) → (Y, Δ_Y) to be a morphism of varieties f X → Y satisfying f(X) ⊈Δ_Y such that, for every prime divisor E ⊆Δ_Y and every prime divisor D ⊆ f^*E, we have t m(D) ≥ m(E), where t ∈ℚ denotes the coefficient of D in f^*E; the local factoriality of Y ensures that E is a Cartier divisor, so that f^∗ E is well-defined. Note that, equivalently, we can require that t - 1 + ν_D ≥ t ν_E, where ν_D and ν_E are the coefficients of D in Δ_X and of E in Δ_Y, respectively. If X is a normal variety, we identify X with the orbifold (X, 0). If X and Y are varieties such that X is normal and Y is locally factorial, every morphism of varieties X → Y is an orbifold morphism (X,0) → (Y,0). A ℚ-orbifold (X, Δ) is proper (resp. projective) if the underlying variety X is proper (resp. projective) over k. A smooth proper ℚ-orbifold (X,Δ) is of general type if K_X+Δ is a big ℚ-divisor, where K_X denotes the canonical divisor of X. If Δ=0, we recover the usual notion of a smooth proper variety of general type. If the multiplicities of Δ are all infinite, then (X,Δ) is of general type if and only if the smooth quasi-projective variety X∖Δ is of log-general type. Finally, if X is a smooth proper variety of nonnegative Kodaira dimension, D is a strict normal crossings divisor, and m ≥ 2, then the orbifold (X, (1-1/m)D) is of general type if and only if X ∖ D is of log-general type. If (Y,Δ) is a smooth proper orbifold pair of general type and X is a normal variety, then the set of separably dominant orbifold morphisms X→ (Y,Δ) is finite. We prove a more general result in which we consider rational maps X (Y,Δ); see Theorem <ref> for a precise statement. Theorem <ref> (or, actually, its more precise version Theorem <ref>) generalizes Kobayashi–Ochiai's finiteness theorem for dominant rational maps to a projective variety of general type in characteristic zero (take Δ to be trivial and k of characteristic zero) <cit.>. It also implies Tsushima's extension of Kobayashi–Ochiai's theorem to varieties of log-general type (take the multiplicities of Δ to be infinity and k of characteristic zero) <cit.>. Moreover, we also obtain the finiteness theorem of Martin-Deschamps and Menegaux for separably dominant morphisms to a proper variety of general type (take Δ to be trivial and k of arbitrary characteristic) <cit.>, as well as Iwanari–Moriwaki's extension of Tsushima's result to characteristic p <cit.>. Finally, we obtain a new proof of Campana's extension of De Franchis's theorem for one-dimensional smooth proper orbifold pairs of general type; see <cit.>. The theorem of Kobayashi–Ochiai can be made effective in the sense that one can give effective upper bounds for the number of dominant maps from a fixed variety to a fixed variety of general type; see <cit.>. It seems reasonable to expect that one can obtain similar effective statements in the orbifold setting. One part of the Green–Griffiths–Lang conjecture predicts that every complex projective hyperbolic variety is of general type. In particular, the theorem of Kobayashi–Ochiai suggests that a similar finiteness statement for dominant maps should hold for projective hyperbolic varieties. Such a finiteness result for projective hyperbolic varieties was in fact already conjectured by Lang in the early seventies (see <cit.>) and proven by Noguchi <cit.> (see also <cit.>) in the early nineties. We stress that, conjecturally, a complex projective variety is of general type if and only if it is “pseudo-hyperbolic”, i.e., there is a proper closed subset Δ⊊ X such that every entire curve ℂ→ X^ lands in Δ. The analogous finiteness statement for dominant maps to a pseudo-hyperbolic projective variety is currently not known. In particular, its extension to Campana's orbifold pairs is not known either. As a straightforward application of Theorem <ref>, we prove the finiteness of the set of surjective endomorphisms of an orbifold pair of general type; see Corollary <ref> for a precise statement. We were first led to investigate the orbifold extension of the theorem of Kobayashi–Ochiai in our joint work with Rousseau on rational points over number fields. We refer the reader to <cit.> for arithmetic applications of our orbifold extension of Kobayashi–Ochiai's finiteness theorem. Our proof of Theorem <ref> follows the general strategy of Kobayashi-Ochiai (and Tsushima). However, these proofs crucially rely on properties of differential forms on (log-)general type varieties (see, for example, <cit.> for a key step in Tsushima's proof relying on properties of tensor powers of the sheaf of differential forms). The main difficulty in proving Theorem <ref> is that differentials for an orbifold pair (X,Δ) are not well-behaved. One may even say that there is no meaningful way to define a sheaf Ω^1_(X,Δ) of orbifold differentials on X. On the other hand, in his seminal work on orbifold pairs <cit.>, Campana suggests to instead use locally free sheaves which mimick sheaves of symmetric differentials. These sheaves are abusively denoted by S^n Ω^p_(X,Δ) despite the lack of existence of Ω^p_(X,Δ); see Section <ref> for a precise definition. The aforementioned key step in Tsushima's proof is then replaced by an argument involving symmetric differentials on (X,Δ); see the proof of Proposition <ref> for details. §.§ Conventions We work over an algebraically closed field k. A variety is a integral separated scheme of finite type over k. If X and Y are varieties, we write X × Y for X ×_ k Y. A point of a variety is a schematic point and need not be closed. If ℒ is a line bundle, D is a ℚ-divisor, and n is a natural number such that nD is a ℤ-divisor, we abuse notation and write (ℒ(D))^⊗ n instead of ℒ^⊗ n(nD). The second-named author thanks Erwan Rousseau for many helpful discussions on Campana's theory of orbifolds. § ORBIFOLD NEAR-MAPS In this paper, we work with the more general notion of an orbifold near-map (Definition <ref>). An open subscheme U ⊆ X of a variety X is big if its complement is of codimension at least two. A rational map X Y of varieties is a near-map if there is a big open U ⊆ X such that U Y is a morphism. Note that a rational map X Y is a near-map if and only if it is defined at all codimension one points of X. For example, for every normal variety X and any proper variety Y, every rational map X Y is a near-map. If X and Y are varieties with X locally factorial, and f X Y is a near-map, we can pull back a line bundle ℒ on Y to a line bundle ℒ̃ on X. Indeed, while the pullback bundle f^*ℒ is a priori only defined on a big open of X, as X is locally factorial, it extends uniquely to a line bundle on all of X by <cit.>. Since locally factorial schemes are normal, global sections of ℒ also pull back to global sections of ℒ̃. Let (X, Δ_X) be a normal orbifold and (Y, Δ_Y) be an orbifold such that Y is locally factorial. Then an orbifold near-map f (X, Δ_X) (Y, Δ_Y) is a near-map f X Y satisfying f(X) ⊈Δ_Y such that, for every prime divisor E ⊆Δ_Y and every prime divisor D ⊆ f^*E, we have t m(D) ≥ m(E), where t ∈ℚ denotes the coefficient of D in f^*E; this pullback is well-defined as E is Cartier. As before, this is equivalent to requiring t - 1 + ν_D ≥ t ν_E, where ν_D and ν_E are the coefficients of D in Δ_X and of E in Δ_Y, respectively. Caution is advised: the composition of orbifold morphisms need not be an orbifold map. Indeed, although the condition on the multiplicities of divisor pullbacks is stable under composition, the image of the composition might be completely contained in the orbifold divisor of the target. For example, consider the morphism ℙ^1 → (ℙ^1, (1/2)·∞) given by z ↦ z^2 and the inclusion of the point ∞ into ℙ^1. While both morphisms are orbifold, their composition is the inclusion of the point ∞ into (ℙ^1, (1/2)·∞) which is not an orbifold morphism. There is however one situation in which we can compose orbifold morphisms. Namely, if the composition of two orbifold morphisms is dominant, then the composition is again an orbifold morphism. We will need a similar statement for orbifold near-maps (see Corollary <ref>). Now, the following lemma gives a criterion for when the composition of orbifold near-maps is still orbifold. Note that this is not immediate from the definitions, as the composition of two near-maps, even when it exists as a rational map, need not be defined in codimension 1, so that it is not immediately clear why it satisfies the orbifold condition. Let (Y, Δ_Y) be an orbifold such that Y is locally factorial. Let X and Z be locally factorial varieties. Let f X (Y, Δ_Y) be an orbifold near-map and let g Z X be a near-map. If the composition f ∘ g exists and extends to a near-map h Z Y of varieties whose image is not contained in Δ_Y, then h is an orbifold near-map h Z (Y, Δ_Y). Let D ⊆Δ_Y be an irreducible component and let m ∈ℤ_≥ 2∪{∞} be its multiplicity. Let ℒ be the line bundle on Y with a global section s ∈ℒ(Y) cutting out D. We pull back ℒ along f and h, and by Remark <ref>, we obtain line bundles f^*ℒ and h^*ℒ on X and Z, respectively. Pulling back the section s as well, we obtain global sections f^*s and h^*s of f^*ℒ and h^*ℒ, respectively. Since h is generically the composition of f and g, we know that h^*s and g^*f^*s determine the same element of the function field of Z. As the localization maps 𝒪_Z,z→ k(Z) are injective, it follows that (h^*s)_z = g^♯_η((f^*s)_g(z)) for every z ∈ Z. If m=∞, to verify the orbifold condition for h over D, we have to show that h^∗ s is nowhere vanishing on Z. Suppose h^∗ s vanishes at a codimension 1 point η∈ Z. Then f^∗ s vanishes at g(η) in X. This contradicts the orbifold condition for f X (Y,Δ_Y) over D (as m=∞). If m ∈ℤ_≥ 2, let η∈ Z be a point of codimension 1 at which h^*s vanishes, i.e., η is the generic point of an irreducible component of f^∗ D. To verify the orbifold condition for h, we have to show that h^*s vanishes to order at least m there. In other words, we have to show that (h^*s)_η∈𝔪^m_Z, η(h^*ℒ)_η. We know that (h^*s)_η∈𝔪_Z, η(h^*ℒ)_η, so it follows that (f^*s)_g(η)∈𝔪_X, g(η)(f^*ℒ)_g( η). Since the vanishing locus of a nonzero section of a line bundle is pure of codimension 1, there exists a point ξ∈ X of codimension 1 specializing to g(η) such that f^*s vanishes at ξ. The assumption that f X (Y, Δ_Y) is orbifold now tells us that f^*s vanishes to order at least m at ξ, so that (f^*s)_ξ∈𝔪_X,ξ^m(f^*ℒ)_ξ. As X is locally factorial, the ring 𝒪_X, g(η) is a UFD, so that we have 𝒪_X,g(η)∩𝔪^m_X,ξ⊆𝔪^m_X, g(η). Indeed, for R a local UFD with maximal ideal 𝔪, n ≥ 1 an integer and 𝔭⊂ R a prime ideal of height one, the ideal 𝔭^n R_𝔭∩ R equals 𝔭^n, and is thus contained in 𝔪^n. The above implies that (f^*s)_g(η)∈𝔪^m_X, g(η)(f^*ℒ)_ g(η). As g^♯_η is a local homomorphism, it follows that (h^*s)_η = g^♯_η((f^*s)_g(η)) ∈𝔪^m_Z,η(h^*ℒ)_η, as required. We will use Lemma <ref> in the following form. Let (Y, Δ_Y) be a proper orbifold such that Y is locally factorial. Let X be a locally factorial variety. Let f X (Y, Δ_Y) be a dominant orbifold near-map and let Z ⊂ X be a locally factorial closed subvariety. If the restriction f|_Z exists as a rational map and is still dominant, then it defines an orbifold near-map f|_Z Z (Y, Δ_Y). In Section <ref> and in the proof of Theorem <ref> it will be convenient to use the following notion of products for orbifold pairs. If (X, Δ_X) and (Y, Δ_Y) are two orbifolds, then we define the product orbifold by (X, Δ_X) × (Y, Δ_Y) := (X × Y, Δ_X × Y + X ×Δ_Y). If X and Y are locally factorial, the product of orbifolds defined above satisfies the universal property of a product. More specifically, for any orbifold (T, Δ_T) and for any two orbifold morphisms ϕ_X (T, Δ_T) → (X, Δ_X), ϕ_Y (T, Δ_T) → (Y, Δ_Y), there is a unique orbifold morphism ϕ (T, Δ_T) → (X, Δ_X) × (Y, Δ_Y) such that ϕ_X = π_X ∘ ϕ and ϕ_Y = π_Y ∘ ϕ. Indeed, it is clear that there is a morphism ϕ (T, Δ_T) → X × Y, and we just have to check that this is indeed an orbifold morphism after equipping X × Y with its orbifold structure. First note that the set of closed points t ∈ T satisfying ϕ_X(t) ∉Δ_X is a non-empty, hence dense, open subset of T. Of course, the same holds for the condition ϕ_Y(t) ∉suppΔ_Y, so there is a closed point t ∈ T satisfying both conditions. Thus, the image of ϕ is not contained in (Δ_X × Y + X ×Δ_Y). Now let E ⊆ (Δ_X × Y + X ×Δ_Y) be a prime divisor. Without loss of generality, we may assume that E = E_X × Y for some prime divisor E_X ⊆ X. Now let D ⊆ϕ_X^* E_X be a prime divisor, and let r be its coefficient in ϕ_X^* E_X. Since we know that ϕ_X is an orbifold morphism, we have r m(D) ≥ m(E_X). Now note that ϕ^* E = ϕ_X^* E_X, so that r is also the coefficient of D in ϕ^* E. Furthermore, we have m(E_X) = m(E). Thus, r m(D) ≥ m(E), and ϕ is an orbifold morphism, as desired. § FAMILIES OF MAPS In this section we consider families of maps, and prove that certain conditions on the maps are either open or closed. More precisely, we show in Lemma <ref> that a morphism landing in a closed subscheme is a closed condition, in Lemma <ref> that a rational function being dominant is an open condition, and in Lemma <ref> that the pullback of a fixed differential form having no poles outside some fixed divisor is a closed condition. Let S be a scheme. Let X→ S be a flat morphism whose geometric fibres are reduced. Let Y and T be S-schemes, and let F:X×_S T→ Y be an S-morphism. Then, for every closed subscheme Z⊂ Y, the set of t in T such that F_t:X×_S {t}→ Y factors over Z is closed in T. Let T_1⊂ T be the set of t in T such that F_t factors over Z. We consider T_1 = ⊔_t∈ T_1κ(t) as an S-scheme. To show that T_1 is closed in T, it suffices to show that T_1 = T_1. We endow T_1⊂ T with the reduced closed subscheme structure. The natural morphism T_1⊂T_1 is dominant. Since X→ S is flat, the basechange X×_S T_1→ X×_S T_1 is (also) dominant. Since X×_S T_1→ Y factors set-theoretically through Z and X×_S T_1→ X×_S T_1 is dominant, we see that the restriction of F to X×_S T_1 (also) factors set-theoretically through Z. However, for any t∈T_1, the scheme X×_S κ(t) is geometrically reduced over κ(t). In particular, F_t factors (scheme-theoretically) through Z, as required. A rational map X Y of varieties over k is separably dominant if it is dominant and k(Y) ⊂ k(X) is separable. Let f X → Y be a morphism of smooth varieties. Then the following are equivalent: * The morphism f is separably dominant. * There is a closed point x ∈ X such that df_x _x X →_f(x) Y is surjective. * There is a closed point x ∈ X such that f is smooth at x. Assume <ref> holds. Let Z ⊆ X be the locus of points where the rank of df_x is less than Y. Since K(Y)⊂ K(X) is a separable field extension, the arguments used to prove <cit.> and <cit.> show that the dimension of f(Z) is less than Y. Thus, since f is dominant, this implies that Z ≠ X. In particular, there is a closed point x ∈ X such that the rank of df_x is equal to Y. Since Y is smooth, this shows that <ref> holds. Assume <ref> holds. Then <cit.> shows that <ref> holds. Now assume that <ref> holds. There is an open neighborhood U ⊆ X of x such that f|_U is smooth. Smooth maps are flat, and flat maps of varieties are open. Hence f(U) is a nonempty open of Y. Thus, f is dominant. In particular, f maps the generic point of X to the generic point of Y. The smoothness of f|_U also implies that f is smooth at the generic point of X. Since generically smooth morphisms are separable, we see that <ref> holds. This concludes the proof. Let X and Y be varieties of the same dimension. Assume that X is smooth. Let f X → Y be a morphism. Then the following are equivalent: * The morphism f is separably dominant. * There is a closed point x ∈ X such that df_x _x X →_f(x) Y is an isomorphism. Assume <ref> holds. Let U ⊆ Y be the locus of smooth points of Y. Then U is a dense open of Y, and the restriction of f f^-1(U) → U is still separably dominant. Thus, we may assume that Y is smooth. In particular, by Lemma <ref>, there is a closed point x in X such that df_x _x X →_f(x) Y is surjective. Since X and Y are smooth of the same dimension, it follows that df_x is an isomorphism. Assume <ref> holds. Then, the point f(x) is a regular point of Y. We can thus replace Y by an open neighborhood of f(x) and replace X by the preimage of that. The assumption on the dimensions continues to hold, so we may assume that Y is smooth. Consequently, by Lemma <ref>, the morphism f is separably dominant. If X, Y and T are varieties, we say that a rational map f X × T Y is a relative rational map (over T) if the maximal open subset U ⊆ X × T on which f is defined has nonempty intersection with every closed fiber X_t := X ×{t}. In other words, it is a family of rational maps f_t X Y parametrized by the variety T. Let X and Y be varieties of the same dimension and let T be any variety. Let F X × T Y be a relative rational map. Then the locus of t ∈ T such that F_t X Y is separably dominant is open in T. Replacing X by an alteration if necessary, we may assume that X is smooth <cit.>. Let U ⊆ X × T be the maximal open subset on which F is defined. The map F then induces a morphism of T-schemes G U → Y × T. We claim that if (x,t) ∈ U is any closed point, the differential dG_(x,t) is an isomorphism if and only if the differential of the rational map F_t X Y is an isomorphism at x ∈ X. To see this, note that the tangent spaces of X × T and Y × T are the products of the tangent spaces of the factors. Furthermore, the component of dG_(x,t) which maps _t T →_F(x,t) X is the zero map, and the component which maps _t T →_t T is just the identity. Lastly, the component of dG_(x,t) mapping _x X →_F_t(x) is just dF_t. This implies the claim. Let t ∈ T be a closed point. By Lemma <ref>, F_t is separably dominant if and only if there is a closed point x ∈ X such that the differential dF_t,x is an isomorphism. By the previous claim, this happens if and only if there is a closed point x ∈ X such that dG_(x,t) is an isomorphism. Let V ⊆ U be the set of all points at which the differential of G is an isomorphism. The set V is open in U, hence open in X × T. The map X × T → T is flat, hence open. Thus, the projection of V to T is an open subset of T. By the previous paragraph, this is exactly the set of t ∈ T for which F_t is separably dominant, so we are done. The locus of (separably) dominant maps is not necessarily closed in T (this seems to have been overlooked in <cit.>). Consider, for example, the map ℙ^1 ×ℙ^1 ℙ^1 given by (x,t)↦ xt. Its indeterminacy locus consists of the two points (0,∞) and (∞,0), so that this is indeed a relative rational map. If we fix any value t ∈ℙ^1 ∖{0,∞}, the resulting rational map ℙ^1 ℙ^1 is an isomorphism, and thus separably dominant. For t ∈{0, ∞}, the resulting map is constant. Thus, the locus where the map is separably dominant is ℙ^1 ∖{0,∞}⊆ℙ^1; this is open but not closed. The following statement is a purely algebraic result which we will use in the proof of Lemma <ref> below. Recall that if M is an R-module, and f ∈ R is an element, the element f is called M-regular if the morphism M → M, m ↦ fm is injective. In the special case that M = S is an R-algebra, this is equivalent to f being a nonzerodivisor in S. If S is additionally assumed to be reduced and noetherian, this in turn is equivalent to asking that (f) ⊆ S has codimension ≥ 1 (where the empty set is considered to have codimension ∞). Let R be a noetherian ring, let S be a noetherian R-algebra, let f ∈ S. Let M be a finitely generated S-module. Assume that M is flat over R and assume that for every maximal ideal 𝔪⊆ S, the element f is M/(𝔪∩ R)M-regular. Then f is M-regular and M/fM is flat over R. See <cit.>. The following lemma and proof are essentially due to Tsushima <cit.>. Before starting the proof, we briefly discuss extending relative rational maps. Let G X × T Y be a relative rational map with X, T normal varieties, and Y a proper variety. Then, by properness of Y and normality of X × T, the rational map G X × T Y extends to a morphism U → Y on a maximal open set U ⊆ X × T with complement of codimension at least 2. However, for any given closed point t ∈ T, it might still happen that U_t := U ∩ X_t ⊆ X has a complement of codimension 1. While the restriction of G to U_t will extend to a rational map X Y defined in codimension 1, this extension will in general not be compatible with G X × T Y. Let X and Y be smooth projective varieties of the same dimension n, and let T be a variety. Let G X × T Y be a relative rational map over T. Let D_X be an effective divisor on X and let D_Y be a divisor on Y. Let m ≥ 0. Let ω∈Γ(Y, ω_Y^⊗ m(D_Y)). Assume that, for every closed point t ∈ T, the rational map G_t X Y is separably dominant. Then, the set T_ω of t ∈ T such that G_t^*ω lies in Γ(X, ω_X^⊗ m(D_X)) is closed in T. The case ω = 0 is clear, so suppose ω≠ 0. By replacing T with an alteration if necessary, we may assume that T is smooth. Consider the pullback form G^*ω, and note that it defines a rational section of the vector bundle (Ω^n_X × T)^⊗ m. For any closed point t ∈ T, we pullback G^*ω along the inclusion ι_t X := X×{t}⊆ X × T to get the form ι_t^*G^*ω = G_t^*ω. Let E and F be the divisors of zeroes and poles of G^*ω, respectively. For t in T, we define E_t := E ∩ X_t and F_t := F ∩ X_t. Since G_t is separably dominant and ω≠ 0, we have that, for every t in T, E_t and F_t are (possibly trivial) effective divisors in X_t. Note that whenever G_t is defined in codimension one, we have that E_t (resp. F_t) is the divisor of zeroes (resp. poles) of G_t^*ω. On the other hand, if G_t is not defined at all points of codimension 1, it may happen that E_t and F_t are strictly bigger than the divisor of zeroes and poles, respectively. We now prove the result by induction on (T). The case (T)=0 is clear. Consider the set S of all t in T such that (E_t ∩ F_t) = n-1. By semicontinuity of fiber dimension, S is closed in T (since (E_t ∩ F_t) > n-1 cannot occur). The condition t ∈ S implies that G_t cannot be defined in codimension 1. Otherwise the form G_t^*ω would have to have both a pole and a zero along the codimension 1 subset E_t ∩ F_t, which is absurd. Since G is defined at all points of codimension 1, we see (S)<(T). By the inductive hypothesis, it follows that S_ω=S ∩ T_ω is closed. We now show that F→ T is flat using Lemma <ref>. First, note that it is locally cut out by the vanishing of a single equation given by the denominator of G^*ω. Furthermore, the morphism X × T → T is flat and, for every closed point t in T, the scheme-theoretic fiber F_t of F → T is a divisor of X_t. Thus, we conclude that F → T is flat by Lemma <ref>. In particular, there is a morphism T →Hilb(X) representing the family (F_t)_t ∈ T, where Hilb(X) is the Hilbert scheme of X over k. Now, as there are only finitely many effective divisors with the property of being ≤ D_X, the set of such divisors form a (finite) closed subscheme of Hilb(X). It follows that F_t ≤ D_X is a closed condition on t. For t in T, the condition F_t≤ D_X implies that G_t^* ω∈Γ(X,ω_X^⊗ m(D_X)). Moreover, outside the set S (defined above), the condition G_t^*ω∈Γ(X,ω_X^⊗ m(D_X)) is equivalent to F_t ≤ D_X. Thus, a point t ∈ T lies in T_ω if and only if we have t ∈ S ∩ T_ω or F_t ≤ D_X. As S∩ T_ω is closed in T and the set of t in T with F_t≤ D_X is closed in T, this concludes the proof. § SYMMETRIC DIFFERENTIALS ON ORBIFOLDS In this section we collect some statements regarding the sheaf of symmetric differentials on an orbifold. We start by recalling their definition, first given by Campana in <cit.>. Let (X, Δ) be a smooth orbifold. Let n, p ≥ 0 be natural numbers. The sheaf of symmetric differentials, written S^n Ω^p_(X,Δ), is the locally free subsheaf of ^n Ω^p_X(logΔ) which is étale-locally generated by the following elements: x^k/m⊗_i=1^n dx_J_i/x_J_i Here, the following notation was used: * x_1,...,x_(X) are a set of local coordinates which exhibit Δ in normal crossing form. * The J_i are subsets of {1,...,(X)} of size p. * dx_J_i := ⋀_j ∈ J_i dx_j and x_J_i := ∏_j ∈ J_i x_j * k is a tuple of (X) integers, where the j-th entry counts the number of occurences of j in the J_i. * m is a tuple of (X) integers, where the j-th entry is the multiplicity of the coordinate x_j in Δ. * x^k/m := ∏_j=1^(X) x_j^k_j/m_j For smooth proper varieties X without any orbifold structure, the sheaves S^n Ω^p_(X,0) defined this way coincide with the usual symmetric powers of the module of differentials ^n Ω^p_X. More generally, if (X, Δ) is an orbifold where all multiplicities in Δ are equal to 1 or ∞, the sheaves S^n Ω^p_(X,Δ) defined above coincide with the symmetric powers of the module of log differentials ^n Ω^p_X(logΔ). However, in general, the sheaves S^n Ω^p_(X,Δ) are not the symmetric powers of any coherent sheaf (so that calling them symmetric differentials is a significant abuse of language). The main use of S^n Ω^p for us comes from the fact that these sheaves behave nicely when they are pulled back by orbifold morphisms. If f (X, Δ_X) → (Y, Δ_Y) is a morphism of smooth orbifolds and n, p ≥ 0, then pullback of differential forms induces a morphism f^*S^n Ω^p_(Y, Δ_Y)→ S^nΩ^p_(X,Δ_X). Campana shows this when k=ℂ using computations in the analytic topology, see <cit.>. His arguments adapt to positive characteristic, as we show now. We have a morphism of sheaves f^* ^n Ω^p_Y(logΔ_Y) →^n Ω^p_X( (Δ_X + f^* Δ_Y)). As f^*S^n Ω^p_(Y, Δ_Y) (resp. S^nΩ^p_(X,Δ_X)) is a subsheaf of the source (resp. the target) of this morphism, we may argue étale-locally around a fixed point η. As the sheaves involved are locally free, we may and do assume that η∈ X is of codimension 1 (except for in the trivial situation in which X is zero-dimensional). Locally around η, the sheaf f^* S^n Ω^p_(Y,Δ_Y) is generated by the pullbacks of the local generators of S^n Ω^p_(Y,Δ_Y) around f(η) (see Definition <ref>). Let ω∈ S^n Ω^p_(Y, Δ_Y) be such a generator. Let Y→ Y be a connected étale neighborhood of f(η) such that * Y is an étale open of 𝔸^d with d = Y, * Δ_Y is in normal crossing form (i.e., Δ_Y is given by the pullback of x_1·…· x_ℓ =0 in 𝔸^d for some ℓ≥ 0), * f(η) specializes to the origin of 𝔸^d, and * ω has the form described in Definition <ref>. There is a connected étale neighborhood X→ X of η such that f|_X factors over Y and such that Δ_X is in normal crossings form. Since the induced morphism f (X, X×_X Δ_X) → (Y, Y×_Y Δ_Y) is (still) orbifold, we may replace (X, Δ_X) by (X, X×_X Δ_X) and (Y, Δ_Y) by (Y, Y×_Y Δ_Y). As Y is an étale open of 𝔸^d, we obtain a map X →𝔸^d given by d maps f_1,...,f_d X →𝔸^1. For i=1,…,d, we let m_i denote the multiplicity of the (pullback of the) prime divisor {y_i=0} in Δ_Y. Since f(X) is not contained in Δ_Y, we know that whenever m_i > 1, the function f_i is not identically zero. Viewing f_i as an element of the DVR 𝒪_X, η, we decompose it as f_i = t^ν_i g_i with t a uniformizer and g_i(η) ≠ 0. Since f is an orbifold morphism, for any i with ν_i ≠ 0, we have ν_i ≥m_i/m_η, where m_η is the multiplicity of the divisor η in Δ_X. Let J ⊆{1,...,d} be a p-element subset and consider the rational p-form dy_J/y_J on Y. If none of the functions f_i with i ∈ J vanish along η, the pullback f^*(dy_J/y_J) has no pole at η. If such an i ∈ J exists, then f^*(dy_J/y_J) has a pole of order at most 1. Since we can always write f^*(dy_J/y_J) = (dt/t) ∧ u + v, where u is a (p-1)-form with no pole at η and v a p-form with no pole at η, the pullback of ω is given by f^*ω = ∏_i=1^d (f_i)^k_i/m_i⊗_α=1^n ( (dt/t∧ u_α) + v_α). Here, as before, u_α and v_α are forms with no pole at η. We can write the tensor product of sums as a sum of tensor products. When doing this, the order at η of each summand occuring in such a rewriting is at least (∑_i=1^d k_i/m_iν_i) - k_t, where k_t counts the number of ((dt/t) ∧ u_α)-factors occuring in that summand (as opposed to v_α-factors). Note that k_t ≤∑ k_i, where the sum runs over those i for which ν_i ≥ 0. Using our estimate ν_i ≥m_i/m_η from before, we obtain that the order at η of each summand is at least (∑_i=1 ν_i ≠ 0^d k_i/m_η) - k_t ≥ -k_t + k_t/m_η, so the pole at η is at most of the order we allow for elements of S^n Ω^p_(X, Δ_X). Hence f^*ω∈ S^n Ω^p_(X, Δ_X), which concludes the proof. If X is a smooth proper variety of dimension n, the sheaf S^1 Ω^n_(X,0) is just the dualizing sheaf ω_X. Thus, one might guess that for a smooth orbifold (X, Δ), the sheaf S^1 Ω^n_(X,Δ) should correspond to a line bundle related to the ℚ-divisor K_X + Δ. Of course, naively formulated like this, this guess does not really make sense, since K_X+Δ is not a ℤ-divisor and hence does not correspond to any line bundle. However, as we show now, the intuition can be saved. (Recall our convention that, for ℒ a line bundle, D a ℚ-divisor, and n a natural number such that nD is a ℤ-divisor, we write (ℒ(D))^⊗ n instead of ℒ^⊗ n(nD).) Let (X, Δ) be a smooth proper orbifold of dimension n and let N be a natural number such that NΔ is a ℤ-divisor. Then S^N Ω^n_(X, Δ)≅ω_X(Δ)^⊗ N. For the sheaf of log-differentials, we have Ω^n_X(logΔ) = ω_X(Δ) (see <cit.>). It follows that ^N Ω^n_X(logΔ) = ^N ω_X(Δ). Since symmetric powers of line bundles agree with tensor powers, it follows that ^N ω_X(Δ) = ω_X(Δ)^⊗ N. Thus, S^N Ω^n_(X, Δ) is by construction a locally free subsheaf of ω_X(Δ)^⊗ N. More precisely, we see that locally around a point p ∈ X, it is the subsheaf generated by the single element x_1^N/m_1...x_n^N/m_n⊗_l=1^N dx_1 ∧ dx_2 ... ∧ dx_n/x_1x_2...x_n where x_1,...,x_n are a set of normal crossing coordinates for Δ, and m_i denotes the multiplicity of the coordinate x_i in the orbifold divisor. The subsheaf generated by this element is equal to ω_X(Δ)^⊗ N in some neighborhood of p. The claim follows since p was arbitrary. Let f (X, Δ_X) (Y, Δ_Y) be a near-map of smooth proper orbifolds with n:= Y and let N be a natural number such that N Δ_Y is a ℤ-divisor. Then there is an induced pullback morphism f^* ω_Y(Δ_Y)^⊗ N→ S^N Ω_(X,Δ_X)^n of locally free sheaves on X. By Lemma <ref>, we have ω_Y(Δ_Y)^⊗ N= S^N Ω^n_(Y, Δ_Y). By Lemma <ref>, we get a morphism f^* S^N Ω^n_(Y, Δ_Y)→ S^N Ω^n_(U, Δ_X ∩ U) of sheaves on U, where U ⊆ X denotes the domain of definition of f. By Remark <ref>, the line bundle f^* S^N Ω^n_(Y, Δ_Y) on U extends to a line bundle on X. Furthermore, as the morphism of locally free sheaves extends as well by Hartogs, this concludes the proof. If X and T are smooth varieties, and π_X X × T → X and π_T X × T → T denote the canonical projections, we get a direct sum composition for the Kähler differentials: Ω^1_X × T≅π_X^* Ω^1_X ⊕π_T^* Ω^1_T Passing to exterior powers, and noting that taking exterior powers commutes with taking pullbacks, we retain such a direct sum decomposition, although it gets slightly more involved: Ω^m_X × T≅⊕_i=0^m π_X^* Ω^i_X ⊗π_T^* Ω^m-i_T Lastly, if A and B are modules over any commutative ring, we have the following direct sum decomposition for the symmetric powers: ^n(A ⊕ B) ≅⊕_i=0^n (^i A ⊗^n-i B) By combining the two previous lines, we obtain that ^n π_X^* Ω^m_X is a direct summand of ^n Ω^m_X × T. Hence we get an idempotent endomorphism of ^n Ω^m_X × T which projects an element into that summand. Furthermore, if t ∈ T is any closed point and ι_t X = X ×{t}⊆ X × T is the inclusion, then the pullback map ι_t^* ^n Ω^m_X × T→^n Ω^m_X factors over that projection. We now prove the analogous result for orbifolds. Let (X, Δ_X) and (T,Δ_T) be smooth orbifolds. Let π_X and π_T denote the canonical projection of X × T onto X and T, respectively. Then for all natural numbers N and m, the sheaf π_X^* S^N Ω^m_(X,Δ_X) is a direct summand of S^N Ω^m_(X, Δ_X) × (T,Δ_T). Furthermore, if t ∈ T is a closed point and ι_t X = X ×{t}⊆ X × T is the inclusion, then the pullback map ι_t^* S^N Ω^m_(X,Δ_X) × (T,Δ_T)→ S^N Ω^m_(X,Δ_X) factors over the projection to ι_t^* π_X^* S^N Ω^m_(X,Δ_X). We first deal with the case that Δ_X and Δ_T are ℤ-divisors, i.e. that all multiplicities are either 1 or ∞. In this case, we have S^N Ω^m_(X,Δ_X) = ^N Ω^m_X(logΔ_X). The latter is a genuine symmetric power of an exterior power of Ω^1_X(logΔ_X). Notice that the decomposition Ω^1_X × T(log (Δ_X × T + X ×Δ_T)) = π_X^* Ω^1_X(logΔ_X) ⊕π_T^* Ω^1_T(logΔ_T) is still valid. Thus, the discussion of the previous paragraph applies, proving the result in this case. In general, Δ_X and Δ_T are not ℤ-divisors. By definition, the sheaf S^N Ω^n_(X,Δ_X) is a subsheaf of ^N Ω^m_X(logΔ_X), and, similarly, the sheaf S^N Ω^m_(X,Δ_X)× (T,Δ_T) is a subsheaf of ^N Ω^m_X× T(logΔ_X× T + X×Δ_T). By the previous paragraph we know that the morphism π_X^* ^N Ω^m_X(logΔ_X) →^N Ω^m_X× T(logΔ_X × T + X ×Δ_T) is injective and has a retract. Since the projection map π_X is orbifold, it follows from Lemma <ref> that the above injection sends the subsheaf π_X^*S^NΩ^m_(X,Δ_X) to the subsheaf S^NΩ^m_(X,Δ_X)× (T,Δ_T). To prove the claim, it thus suffices to show that the retraction also respects these subsheaves. This can be checked locally, and it suffices to consider the generators. This can be done very explicitly. Indeed, let (x,t) ∈ X × T be any closed point, let dx_1,...,dx_n be local coordinates for X around x which exhibit Δ_X in normal crossings form, and let dt_1,...,dt_r be local coordinates for T around t which exhibit Δ_T in normal crossings form. Then dx_1,...,dx_n, dt_1,..., dt_r are local coordinates for X × T exhibiting its orbifold divisor in normal crossings form. Let ω be a local generator of S^N Ω^m_(X, Δ_X) × (T,Δ_T) around (x,t). If ω contains any factors containing a dt_i, the pullback ι_t^* ω will be identically zero. Thus, it remains to consider the case where the only differentials appearing in ω are products of dx_i terms. Pulling back such a generator of S^NΩ^m_(X,Δ_X)× (T,Δ_T) along ι_t yields a (formally identical) generator of S^N Ω^m_(X,Δ_X). This proves the desired claim. Finally, to prove that the pullback along ι_t factors over this direct summand, note that π_X ∘ι_t = 𝕀_X is the identity, so that in fact ι_t^* π_X^* S^N Ω^m_(X,Δ_X) = S^N Ω^m_(X,Δ_X). § KOBAYASHI–OCHIAI'S THEOREM FOR ORBIFOLD PAIRS In this section, we prove the finiteness theorem for dominant maps into a smooth orbifold of general type (Y, Δ_Y). The first step of the proof is to show that given a dominant morphism f (X, Δ_X) → (Y, Δ_Y), we can recover f from its induced map on global sections of the canonical bundles ω_Y(Δ_Y)^⊗ N(Y) →ω_X(Δ_X)^⊗ N(X) for sufficiently large N, where N only depends on (X, Δ_X) and (Y, Δ_Y) but not on f. This allows us to shift the focus from studying dominant morphisms to studying linear maps ω_Y(Δ_Y)^⊗ N(Y) →ω_X(Δ_X)^⊗ N(X) satisfying certain conditions. To state the next lemma, we introduce some terminology. We call a line bundle very big if the rational map to projective space induced by its global sections is birational onto its image. Note that every big line bundle has a tensor power which is very big. Of course, a very ample line bundle is very big. Also, if V is a vector space, the projective space ℙ(V) parametrizes subspaces of codimension 1. Let X and Y be projective varieties. Assume that X is locally factorial. Let ℒ_X and ℒ_Y be line bundles on X and Y respectively. Assume that ℒ_X is very big and that ℒ_Y is very ample. Consider the following set: S = { (f, ϕ) | f X Y dominant and ϕ f^*ℒ_Y →ℒ_X injective} If (f,ϕ) and (g,ψ) have the same image under the composed map of sets S →(ℒ_Y(Y), ℒ_X(X))∖{0}→{rational maps from ℙ(ℒ_X(X)) to ℙ(ℒ_Y(Y)) }, then f=g. Before starting the proof, we note that the set S is well-defined by Remark <ref>. Let (f, ϕ) and (g, ψ) be elements of S which induce the same rational map γℙ(ℒ_X(X)) ℙ(ℒ_Y(Y)). By our assumptions on the line bundles, the space ℙ(ℒ_X(X)) contains a birational copy X of X and ℙ(ℒ_Y(Y)) contains Y. The following square commutes whenever the compositions are defined: X [r, densely dashed] [d, densely dashed] Y [d] ℙ(ℒ_X(X)) [r, densely dashed, "γ"] ℙ(ℒ_Y(Y)) Here, the upper horizontal arrow can be either f or g. Note that the indeterminacy locus of γ is a linear subspace, and that X is not contained in any proper linear subspace of ℙ(ℒ(X)). Hence γ is defined on some open of X and the commutativity of the diagram above implies that γ sends X to Y. So we get a rational map X Y. The composition X X Y is equal to both f and g whenever it is defined, showing that f=g on a dense open subset. As Y is separated, this implies that f=g everywhere. We now have all the prerequisite results needed for our proof of the announced theorem. We follow the general proof strategy of <cit.>. Let (X, Δ_X) and (Y, Δ_Y) be smooth proper orbifolds with (Y, Δ_Y) of general type. If X= Y, then there are only finitely many separably dominant near-maps (X, Δ_X) (Y, Δ_Y). If there are no separably dominant near-maps from (X,Δ_X) to (Y,Δ_Y), then we are done. Otherwise, let f (X, Δ_X) (Y, Δ_Y) be a separably dominant near-map. By Corollary <ref>, for N ∈ℕ sufficiently divisible, we get an induced morphism of line bundles f^* (ω_Y(Δ_Y))^⊗ N→ω_X(Δ_X)^⊗ N. Since f is separably dominant, the morphism of line bundles f^* (ω_Y(Δ_Y))^⊗ N→ω_X(Δ_X)^⊗ N is non-zero, hence injective. This implies that ω_X(Δ_X)^⊗ N is a big line bundle, so that the orbifold (X, Δ_X) is also of general type. Increasing N if necessary, we can thus assume that all of the following hold: * ω_X(Δ_X)^⊗ N and ω_Y(Δ_Y)^⊗ N are well-defined line bundles. * The line bundle ω_X(Δ_X)^⊗ N is very big. * There is an effective divisor C ⊆ Y such that (ω_Y(Δ_Y)^⊗ N)(-C) is very ample. We fix the integer N and the effective divisor C ⊆ Y from the last bullet point. We define V_X := Γ(X, ω_X(Δ_X)^⊗ N) and V_Y := Γ(Y, ω_Y(Δ_Y)^⊗ N(-C)). Note that we obtain a closed immersion ι_Y Y →ℙ(V_Y) and a rational map ι_X X ℙ(V_X) which is birational onto its scheme-theoretic image X. For every dominant near-map f, we get an induced vector space morphism f^* V_Y → V_X. By Lemma <ref>, we can recover f from f^* and even from ℙ(f^*). Thus, we are led to studying linear maps V_Y → V_X. Let H := (V_Y, V_X)^∨, with ^∨ denoting the dual. Composition of functions is a canonical bilinear map V_X^∨×(V_Y, V_X) → V_Y^∨ and after identifying (V_Y,V_X) with its double dual, we get a bilinear morphism V_X^∨× H^∨→ V_Y^∨. It induces a relative (over ℙ(H)) rational map F ℙ(V_X) ×ℙ(H) ℙ(V_Y). For every closed point h ∈ℙ(H), we denote by F_h the rational map ℙ(V_X) = ℙ(V_X) ×{h}ℙ(V_Y). To prove the proposition, we are first going to construct a “small” locally closed subset H_3 of ℙ(H) such that the set of separably dominant near-maps (still) injects into H_3 via f↦ℙ(f^∗). Let H_1 ⊆ℙ(H) be the subset for which F_h maps X⊆ℙ(V_X) to Y ⊆ℙ(V_Y). To see that this is a meaningful condition, note that the indeterminacy locus of F_h is a linear subspace and that X⊆ℙ(V_X) is contained in no proper linear subspace. Let η_X be the generic point of the scheme X. Since H_1 is the set of h in ℙ(H) such that the morphism {η_X}×ℙ(H)→ℙ(V_Y) factors over Y, it follows from Lemma <ref> that it is closed in ℙ(H). Note that we obtain a relative rational map X × H_1 Y. Let H_2⊂ H_1 be the subset of elements of H_1 for which the induced rational map X Y is separably dominant. By Lemma <ref>, the set H_2 is open in H_1. Let H_3⊂ H_2 be the subset of rational maps g X Y such that, for every global section ω of ω_Y(Δ_Y)^⊗ N, the pullback (g∘ι_X)^*ω is a global section of ω_X(Δ_X)^⊗ N. By applying Lemma <ref> to every single ω and taking the intersection over all closed sets obtained this way, we see that H_3 is closed in H_2. Hence H_3 is locally closed in ℙ(H), and we give it the reduced scheme structure. If f (X, Δ_X) (Y, Δ_Y) is a separably dominant orbifold near-map, then the induced map ℙ(f^*) lies in H_3. As we mentioned before, by Lemma <ref>, different separably dominant orbifold near-maps induce different elements of H_3. Therefore, to prove the proposition, it suffices to show that H_3 is finite. To do so, let H_4 be an irreducible component of H_3, so that H_4 is a quasi-projective variety. Since H_3 is quasi-projective, it has only finitely many irreducible components. Therefore, to conclude the proof, it suffices to show that H_4 is finite. Let H_4 be the closure of H_4 in ℙ(H), and note that H_4 is a projective variety. Since H_1 is closed in ℙ(H), we see that H_4 is contained in H_1. In particular, we can interpret every closed point of H_4 as a (possibly non-dominant) rational map X Y. Let H_4 be a smooth projective variety and let H_4→H_4 be an alteration such that the preimage of H_4∖ H_4 in H_4 is a strict normal crossings divisor D_H (this exists by <cit.>). We let G X ×H_4 Y be the relative rational map induced by the above map X × H_1 Y. By Lemma <ref>, the sheaf S^N Ω^n_(X, Δ_X) × (H_4, D_H) has π_X^* S^N Ω^n_(X,Δ_X) as a direct summand. We denote by π S^N Ω^n_(X, Δ_X) × (H_4, D_H)→π_X^* S^N Ω^n_(X,Δ_X) the projection. We now define the morphism ΨH_4→(V_Y,V_X) = H^∨ by Ψ(h) = [ ω↦ι_h^*π(G^*ω) ], where ι_h X→ X×H_4 is the inclusion map x↦ (x,h). We now show that Ψ is a well-defined morphism of varieties. To do so, fix a closed point h ∈H_4 and ω∈ V_Y. Then ω∈Γ(Y, ω_Y(Δ_Y)^⊗ N). If h does not lie over H_4, then the form G^*ω might have a pole along X ×{h}, so that the pullback of G^∗ω to X is not well-defined. But we do have some control over the poles of G^*ω. Indeed, it can only have poles along Δ_X×H_4 with orders bounded by the coefficients of N Δ_X or poles along X × D_H. The latter poles are logarithmic by Corollary <ref>. This means that G^*ω is a global section of S^N Ω^n_(X, Δ_X) × (H_4, D_H). In particular, π(G^*ω) is a global section of π_X^* S^N Ω^n_(X,Δ_X). Since global sections of π_X^* S^N Ω^n_(X,Δ_X) only have poles along subsets of Δ_X×H_4, the element ι_h^*π(G^*ω) is always well-defined, i.e., Ψ:H_4→ H^∨ is well-defined. The restriction of Ψ to elements of H_4 lying over H_4 is simpler to describe. Indeed, if h lies over H_4, then we can restrict G^∗ω to X ×{h}. By definition of H_3, after identifying X ×{h} with X, this will give us an element of V_X=Γ(X, ω_X(Δ_X)^⊗ N). By the second part of Lemma <ref>, this element coincides with ι_h^*G^*ω. Let h_1 and h_2 be elements of H_4 lying over H_4 such that Ψ(h_1) = Ψ(h_2). Since Ψ(h_1):V_Y→ V_X is the injective map ω↦ (G∘ι_h_1)^∗ω and Ψ(h_2):V_Y→ V_X is the injective map ω↦ (G∘ι_h_2)^∗ω, it follows from Lemma <ref> that the dominant near-maps G∘ι_h_1 and G∘ι_h_2 are equal. This obviously implies that h_1 and h_2 lie over the same element of H_4 (via the alteration H_4→H_4). On the other hand, since H_4 is a projective variety and H^∨ is affine, the morphism Ψ is constant. Since Ψ separates elements lying over distinct points of H_4 (see previous paragraph), we conclude that H_4 is a singleton. This concludes the proof. As we show now, we may drop the properness and smoothness assumptions on (X,Δ_X). Let (X, Δ_X) and (Y, Δ_Y) be orbifolds with (Y, Δ_Y) a smooth proper orbifold of general type. If X = Y, then there are only finitely many separably dominant near-maps (X, Δ_X) (Y, Δ_Y). First, let X' be the locus of smooth points of X ∖Δ_X. Then, the set of separably dominant near-maps (X,Δ_X) (Y,Δ_Y) injects into the set of separably dominant near-maps X' (Y,Δ_Y). Now, let X be an snc compactification of X' and let D=X∖ X. Then, the set of separably dominant near-maps X' (Y,Δ_Y) equals the set of separably dominant near-maps (X,D) (Y,Δ_Y). The latter is finite by Proposition <ref>. Note that Theorem <ref> follows from the following stronger finiteness statement. Let (X, Δ_X) and (Y, Δ_Y) be orbifolds with (Y, Δ_Y) a smooth proper orbifold of general type. Then there are only finitely many separably dominant near-maps (X, Δ_X) (Y, Δ_Y). We may assume that the base field k is uncountable. As before, we may replace (X, Δ_X) by the smooth locus of X ∖Δ_X, so we may assume that X is smooth and Δ_X is empty. We argue by contradiction. Assume that (f_i X (Y, Δ_Y))_i ∈ℕ is an infinite sequence of pairwise distinct separably dominant near-maps. Let n := (Y). We will construct an n-dimensional subvariety Z ⊂ X which still admits infinitely many pairwise distinct separably dominant near-maps to (Y, Δ_Y), in contradiction to Corollary <ref>. Consider the n-fold direct sum ( X)^⊕ n of the tangent bundle of X, viewed as a variety over X. For every i ∈ℕ, consider the following subset of ( X)^⊕ n: U_i := { (x,v_1,...,v_n) ∈ ( X)^⊕ n |  f_i is defined at x and {(df_i)_x(v_j)}_j=1,...,n is a basis of _f_i(x) Y } Clearly, U_i is an open subset of ( X)^⊕ n. Moreover, by the implication (a) (b) of Lemma <ref>, the open subset U_i is nonempty. Now, for each pair of natural numbers i,j, consider the following subset of ( X)^⊕ n: V_ij := { (x,v_1,...,v_n) ∈ ( X)^⊕ n |  f_i and f_j are defined at x and f_i(x) ≠ f_j(x) } Since we assumed the near-maps f_i to be pairwise distinct, every V_ij is a nonempty open. Since k is uncountable, there exists a point (x,v_1,...,v_n) which lies in every U_i and every V_ij. Let Z ⊂ X be an n-dimensional smooth closed subvariety of X such that x ∈ Z and _x Z = span(v_1,...,v_n). Then, the maps f_i|_Z Z Y are pairwise distinct. Moreover, they are separably dominant by the implication (b) (a) of Lemma <ref>. Also, by Corollary <ref>, each f_i|_Z Z (Y,Δ_Y) is an orbifold near-map. Since this contradicts Corollary <ref>, this concludes the proof. Theorem <ref> also holds if we allow (X, Δ_X) and (Y, Δ_Y) to be ℚ-orbifolds (but still requiring (Y, Δ_Y) to be smooth, proper, and of general type). Indeed, as before, we immediately reduce to the case that X is a smooth variety. Writing Δ_Y = ∑ (1-1/m_i) D_i, we can define Δ_Y = ∑ (1-1/m_i) D. Then every morphism X → (Y, Δ_Y) is also a morphism X → (Y, Δ_Y); hence we are reduced to the case of ℤ-orbifolds. We conclude with the following application to endomorphisms of orbifolds of general type which generalizes finiteness results of Matsumura and Iitaka (see, for example, <cit.>) to the setting of orbifold pairs. If (X,Δ) is a smooth projective orbifold pair of general type, then the following statements hold. * Every separably dominant near-map (X,Δ) (X,Δ) is birational. * Every separably dominant morphism (X,Δ)→ (X,Δ) is an automorphism. * The group of birational near-maps (X,Δ) (X,Δ) is finite. To prove the first statement, let f (X, Δ) (X, Δ) be a separably dominant near-map. Let f^n be the n-fold composition of f. Since every f^n is a separably dominant near-map, by Proposition <ref>, there are distinct positive integers m,n such that f^m = f^n. Hence the degree of f must be 1, so it is birational. To prove the second statement, we first note that any surjective endomorphism f:X→ X is finite. Indeed, the induced morphism f^∗(X)⊗ℚ→(X)⊗ℚ is injective, and thus an isomorphism of finite-dimensional ℚ-vector spaces. Now, we argue by contradiction to show that f is finite. If f is not finite, then there is an integral curve C on X with f(C) a point. Let L be an ample divisor. Since f^∗ is an isomorphism, there is a divisor D such that f^∗ D ≅ L. By the projection formula, we have (C,L) = (C,f^∗ D) = (f_∗ C, D) =0. This contradicts the ampleness of L. Thus, the morphism f is finite. Now, to prove the second statement, note that any dominant morphism X→ X is surjective, hence finite. Thus, by (i), every separably dominant morphism (X,Δ_X)→ (X,Δ_X) is birational and finite. It follows from Zariski's Main Theorem and the normality of X that f is an automorphism. The third statement obviously follows from Proposition <ref>. This concludes the proof. alpha
http://arxiv.org/abs/2306.02255v1
20230604040836
Exploring Model Complexity in Machine Learned Potentials for Simulated Properties
[ "Andrew Rohskopf", "James Goff", "Dionysios Sema", "Kiarash Gordiz", "Ngoc Cuong Nguyen", "Asegun Henry", "Aidan P. Thompson", "Mitchell A. Wood" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.comp-ph" ]
Article Title]Exploring Model Complexity in Machine Learned Potentials for Simulated Properties [1]A. [email protected] 1]J. Goff 2]D. Sema 2]K. Gordiz 3]N.C. Nguyen 2]A. Henry 1]A. P. Thompson 1]M. A. Wood *[1]Center for Computing Research, Sandia National Laboratories, Albuquerque, 87185, NM, USA [2]Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, 02139, MA, USA [3]Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, 02139, MA, USA Machine learning (ML) enables the development of interatomic potentials that promise the accuracy of first principles methods while retaining the low cost and parallel efficiency of empirical potentials. While ML potentials traditionally use atom-centered descriptors as inputs, different models such as linear regression and neural networks can map these descriptors to atomic energies and forces. This begs the question: what is the improvement in accuracy due to model complexity irrespective of choice of descriptors? We curate three datasets to investigate this question in terms of ab initio energy and force errors: (1) solid and liquid silicon, (2) gallium nitride, and (3) the superionic conductor LGPS. We further investigate how these errors affect simulated properties with these models and verify if the improvement in fitting errors corresponds to measurable improvement in property prediction. Since linear and nonlinear regression models have different advantages and disadvantages, the results presented herein help researchers choose models for their particular application. By assessing different models, we observe correlations between fitting quantity (e.g. atomic force) error and simulated property error with respect to ab initio values. Such observations can be repeated by other researchers to determine the level of accuracy, and hence model complexity, needed for their particular systems of interest. [ [ ===== § INTRODUCTION Numerous macroscopic physical and chemical properties are predicted solely by the underlying interaction and motion of atoms. Molecular dynamics (MD) simulations are an indispensable tool for studying this motion, and have enabled computational prediction and discovery in fields as diverse as materials science <cit.>, energy storage <cit.>, catalysis <cit.>, and molecular biology <cit.>. High fidelity atomic forces for integrating Newton's equations of motion can be obtained from quantum mechanical calculations such as density functional theory (DFT), but unfavorable computational scaling of ab initio methods prevents simulations of relevant length and time scales for many applications. Ultimately, the accuracy of an MD simulation comes down to how these atomic forces are generated. To overcome this limitation, much progress has been made in the past decade by machine learning the quantum mechanical potential energy surface (PES) <cit.>. These ML surrogates of the PES possess a favorable linear scaling with number of atoms like traditional empirical potentials, and have been shown to be more accurate in a number of scenarios <cit.>. To achieve physically realistic and energy conserving simulations, ML potentials traditionally involve the use of an invariant atom-centered basis expansion that transform atomic environments into inputs suitable to ML models <cit.>. The mapping of these inputs to atomic energies and forces can take a variety of mathematical forms. Common ML models span a range of functional complexity including linear regression, neural networks (NNs), or Gaussian processes. Today a wide variety of descriptors (features, inputs, etc.) exist for atomistic machine learning, such as bispectrum components in the spectral neighbor analysis potential (SNAP) <cit.>, complete bases such as atomic cluster expansion (ACE) descriptors <cit.>, and traditional radial and angular bases such as in Behler-Parrinello atom-centered symmetry functions <cit.>. Despite this wide development of descriptors for quantifying atomic environments, the effect of different ML models utilizing these descriptors as inputs remains relatively unexplored. In this manuscript we quantify the gain in accuracy that can be expected when increasing model complexity, irrespective of choice of descriptors. Here model complexity can be defined by both nonlinearity and number of fitting coefficients. For example, a linear regression model with many fitting coefficients is more complex compared to a linear regression model with fewer fitting coefficients. Likewise, a neural network model may be seen as more complex than linear regression models with a similar number of fitting coefficients. Treating the models and descriptors of ML potentials separably gives insight where computational expense can be optimized in favor of accuracy. For completeness, we also consider three different descriptor sets that are available within LAMMPS; bispectrum components <cit.>, atomic cluster expansion <cit.>, and proper orthogonal descriptors <cit.>. To compare the performance of different models irrespective of descriptor choice, we fit linear (which includes quadratic kernel tricks) and nonlinear (e.g. neural network) ML models with identical descriptors as input. The gain in accuracy from using nonlinear models is quantified not only for validation errors in terms of fitting quantities (energies and forces), but also how errors in fitting quantities affect simulated properties. Here we focus on the distinction between fitting quantities, which can include validation/test errors in atomistic quantities such as energies and forces, and simulated properties which are calculated from using atomistic quantities like forces as inputs to calculations. This distinction between categories of prediction accuracy is greatly needed in the field as the volume of published models has been rapidly increasing, and their translation to a larger MD user base warrants this detail of reporting. To this end, we curate three datasets for comparing ML potential accuracy and its effect on select property predictions. Additionally, exposing the accuracy limits that are determined by model form and/or descriptor basis uncovers physically interpretable insight into the nature of chemical bonds governed contained in the ground state PES to which these models are fit. First we benchmark the accuracy of linear, quadratic, and NN SNAP potentials on a silicon (Si) dataset containing solid and liquid phase ab initio molecular dynamics (AIMD) configurations, and quantify the benefit a non-linear mapping from a single set of descriptors offers. Next, we perform the same model comparisons with a more chemically complex gallium nitride (GaN) dataset containing AIMD trajectories from 300 K to 2300 K; here we observe if the accuracy advantage offered by NNs translates to improvements in phonon properties when using multiple descriptor sets (SNAP, ACE). Finally, we curate a dataset for superionic diffusion in the solid electrolyte Li10Ge(PS6)2 (LGPS), to determine if complex model forms and descriptor sets offer an advantage over simple models for the macroscopically observable property of Li ion diffusion. We also benchmark our linear and NN models on another superionic conductor lithium phosphorus sulfide (LiPS) studied by graph neural networks (GNN) in literature <cit.>. While many studies have shown model comparisons in ab initio energy/force errors, it remains relatively unexplored how these errors correlate with the desired quantity of interest that the potential will be used for. Decoupling the effects of model form complexity from descriptor set is key in this understanding. Fundamentally, we are attempting to resolve the accuracy gap between ab initio and classical MD predictions, but for material properties that are sampled from a thermodynamic sampling of states. To this end, we seek to elucidate the required accuracy in ab initio energies/forces for accurately simulating material properties. Since the definition of accurate property prediction depends on individual research needs, it is important to show a range of model fitting errors and their associated simulated property errors. Such knowledge will enable researchers to choose models that are accurate enough for their purposes, since more complicated but accurate models may introduce additional unnecessary cost to simulate and/or train. It is worth noting that these trade-offs are only exposed in ML interatomic potentials, where traditional empirical potentials (Tersoff, EAM, COMB, ReaxFF) have a fixed accuracy with respect to computational cost. Decisions regarding increased cost of using complex NN models for the gain in accuracy, compared to simpler and more performant linear models are chosen by the user. Linear models also come with a range of other advantages, such as fast training times and convenient measures of Bayesian uncertainties <cit.>, so understanding the sacrifice in accuracy will help researchers weigh these pros and cons. To facilitate researchers in using our results to choose and develop their own models, we utilize the open source FitSNAP software as a ML interface to LAMMPS, with capabilities to fit models of varying complexity while keeping descriptor settings constant. FitSNAP allows training of various model/descriptor combinations, e.g. SNAP or ACE descriptors with both linear or neural network models. Our distinction between the ML potential descriptors and the model is illustrated in Figure <ref>. Note that graph neural networks of atomic interactions will replace the mapping of { R_i1, …, R_iN}→ B_i with additional interaction layers to construct an analogous descriptor. The modular separation between descriptors and models in Figure <ref> allows us to take advantage of performant descriptor and descriptor gradient calculations in LAMMPS, which are then input to automatic differentiation frameworks such as PyTorch <cit.> and JAX <cit.> for optimization. Combining the LAMMPS implementation of descriptors with ML frameworks such as PyTorch allows performant NN force training via a modified iterated backpropagation algorithm, explained in more detail in the Methods section. This modular separation between descriptors and models also allows researchers to explore which descriptor/model combination best suit their needs. While this separation of descriptor and model offers distinct advantages, it is important to note that this separation is entirely abstract and conceptual. Indeed, NNs may be viewed as linear models if considering the final layer of the network, which can be understood as a transformed set of descriptors that combine linearly at the output. In this view, the question becomes how much does a NN improve the original set of descriptors. Nonetheless we seek to answer how much fitting accuracy is improved by feeding descriptors into different models, and quantify the improvement in simulated property prediction therefrom. We begin by considering analytical differences between some ML potential models. § ML POTENTIAL MODEL FORMS In general, atom-centered ML potentials seek a relation between energy of an atom and its environment. The total energy is then a sum over all atomic energies, written as E = ∑_i E_i ( B_i) where B_i is the feature vector for atom i, otherwise known as the descriptors which quantify the atomic environment. Linear ML potentials are the simplest models by assuming a linear relationship between descriptors and the atomic energy, so that E_i = β_0 + ∑_k^K β_k B_ik = β_0 + β· B_i where β is a vector of β_k model coefficients corresponding to descriptor k out of K descriptors. For multi-element systems, we may use a different set of coefficients for each element type as in weighted density descriptors <cit.>, or a unique set of coefficients for combinations of elements as in ACE <cit.> or chemSNAP <cit.>. Nonlinear terms, hence more complexity, may be introduced in linear models via kernel tricks; one popular example here involves quadratic ML potentials such as quadratic SNAP <cit.>. Here, one may think of the energy in Eq. <ref> as a first order Taylor expansion in descriptor space. A second order expansion may be written as E_i = β_0 + β· B_i + 1/2 ( B_i)^T ·α· B_i where α is a symmetric K × K matrix of model coefficients. The quadratic potential of Eq. <ref> is readily optimized in the same linear regression manner as Eq. <ref>, with the cost of significantly more fitting coefficients. This increased number of coefficients, however, offers more flexibility and improved fitting errors <cit.>. Even with their improvements in accuracy, quadratic models can retain computational costs close to linear models since we may use the same descriptors for the first and second order terms as seen in Eq. <ref>. Of course quadratic models are more expensive to train due to their greatly increased number of coefficients, but this training cost can still be cheaper than more complicated nonlinear models such as neural networks. Atom-centered neural network potentials approximate atomic energies with multilayer perceptrons that are functions of the descriptors, written in terms of matrix products of layer weights as E_i = W_L σ(...σ( W_2 σ( W_1 B_i))) where W_l denotes the weights for layer l, and σ is the activation function used at each layer. The last node of the multilayer perceptron predicts a scalar energy E_i, and is therefore not transformed with the activation function σ. One may expect that NN models such as Equation <ref> offer more complexity/flexibility than the linear and quadratic models in Equations <ref> and <ref> since there is no limit on the degree of nonlinearity in NN models, and because of the universal approximation theorems <cit.>. Whether or not the improved accuracy offered by this extra flexibility matters in representing the true PES or for modelling material properties, however, is an unexplored question that we investigate in this study. We may better appreciate how these model types are related and how they offer different degrees of flexibility for modelling the PES by considering a general view of interatomic forces for atom-centered ML potentials. The force on atom i arises from the negative gradient of the total potential energy in Equation <ref> with respect to the position of atom i. For atom-centered ML potentials, these forces are generally written as F_i = - ∑_j ∑_k β_jk∂ B_jk/∂ R_i where β_jk represents the derivative ∂ E_j / ∂ B_jk for each neighboring atom j. All the aforementioned models possess different β that offer varying degree of flexibility when modelling interatomic forces, shown mathematically below. β_jk = β_k (Linear) β_k + ∑_l α_kl B_jl (Quadratic) ∂/∂ B_jk{ W_L σ(...σ( W_2 σ( W_1 B_j))) } (NN) Equation <ref> shows that linear ML potentials assume neighboring force contributions F_ij are proportional to a the descriptor derivatives by the coefficient β_k. Meanwhile, β_jk for quadratic models is a linear combination of all other descriptors. Finally, NNs provide the highest possible order of nonlinearity since the derivatives of a multilayer perceptron with respect to the inputs is yet another multilayer perceptron. In principle Equation <ref> shows that NNs offer the most flexibility in modelling interatomic forces, by not assuming any functional form of the descriptors. It remains to be seen, however, how much this extra complexity affects force accuracy, and whether this improvement in force accuracy improves property prediction. To begin investigating this question we first apply these models to silicon, one of the simplest systems benchmarked by ML potentials. § SOLID AND LIQUID SILICON In the literature, a Si dataset was used to demonstrate state-of-the-art accuracy with the recently developed ACE descriptors <cit.>, and another dataset was developed to demonstrate the ability of GAP to model a wide variety of phase spaces and properties <cit.>. To this end, silicon has become a common benchmark system for simple comparisons. Here, we curate a simple dataset for the purpose of comparing different models on phonon properties and liquid structure. Our dataset includes AIMD configurations from 300 K to the melting point, along with 2000 K liquid AIMD simulations, with more details in the SI. Before simulating properties with our potentials, however, we seek to benchmark their accuracy on the energies and forces within the dataset. This will show to what extent more accurate fits give better ab initio property agreement. In this regard, we seek a fair method of comparing different model accuracies on fitting quantities irrespective of training hyperparameters, which requires a closer look at the loss function. Hyperparameters common to all ML potential models and optimization methods reside in the loss function. When training to energies and forces we use the same L2 norm loss function for all models, given by L = 1/M∑_m^M 1/N_m[ w_m^E ( Ê_m - E_m )^2 + w_m^F/3∑_a,i^3N_m( F̂_a,i,m - F_a,i,m)^2 ] where M is the number of configurations, N_m is the number of atoms in configuration m, Ê_m is the model energy of a configuration, and E_m is the target ab initio energy. The right-most term is a L2 norm in force components where F̂_a,i,m is the model force component on atom i in configuration m with Cartesian direction a. Likewise for the target ab initio force components F_a,i,m. For linear and NN models we solve this L2 norm via singular value decomposition (SVD) and gradient descent, respectively, with more details in the SI. Regularization penalty functions were not used, but are available within the FitSNAP framework. Quadratic models are also readily solved via SVD since they can be written as a linear combination of descriptors as in Equation <ref>. Linear models can trained with gradient descent, but SVD offers fast training times that have been utilized to create successful potentials in the past <cit.>, so we wish to retain this advantage in a model comparison. The common hyperparameters between linear and nonlinear models are therefore loss function energy weights w_m^E and force weights w_m^F. For linear models, effort is often dedicated to optimizing these energy/force weight hyperparameters by looping over SVD fits combined with multi-objective optimization on the energy/force weights <cit.>. Gradient descent training for NNs is more costly, however, so we do not individually optimize the weight hyperparameters here. Instead, we perform fits with a variety of force/energy weight ratios w^F/w^E, where the same ratio is applied to all configurations in the training set. These w^F/w^E ratios are chosen in a range of 10^-4 to 10^4, with more details on the exact values in the SI. By comparing the best energy and force errors on a validation set using a variety of w^F/w^E ratios, we achieve a fair comparison of model accuracy irrespective of loss function hyperparameters. For an initial comparison we fit linear, quadratic, and NN models on our silicon data set using only SNAP descriptors, to observe the effect of model complexity alone. Descriptor complexity is also observed by using SNAP descriptor settings j_max = 3 (31 descriptors) and j_max = 4 (56 descriptors). We refer the reader to literature for understanding how the j_max parameter influences the number of SNAP descriptors <cit.>. For all w^F/w^E ratios, we report the average validation error on five random 10% validation sets, along with a standard deviation thereof. Scanning a variety of w^F/w^E ratios we find that all models saturate at different energy and force accuracy, as seen in Figure <ref>a, with notable trade offs between these two fitted properties. It is important to note the force error saturation experienced by all models as shown in Figure <ref>. We proposed this saturation level should be used to fairly compare how well different models can match the PES shape or its derivatives. Furthermore, properties that rely on sampling a MD trajectory at equilibrium are hypothesized to be highly sensitive to force accuracy that determines atomic motions in some equilibrium state. Other immediately observable and important trends involve the improvement in force accuracy due to increasing model complexity, irrespective of descriptor choice. For example, SNAP descriptors with j_max = 3 saturate near  150 meV/, as seen with the blue triangles in Figure <ref>. Simply feeding these same descriptors into quadratic or small NNs (32 × 32 hidden node architecture denoted by NN1 in Figure <ref>) decreases the force saturation level down to  100 meV/, as seen with the red and grey triangles. A similar improvement is seen with more detailed SNAP descriptors using j_max = 4, where the linear models (blue circles) saturate at  125 meV/ while quadratic and small NN models (red and grey circles, respectively) saturate at  80 meV/. Further complicating the model with larger NNs, however, does not offer much improvement in forces. This is shown with the black diamond and square in Figure <ref>a, representing NNs with architectures of 250 × 250 and 500 × 2000 node architectures, respectively. Using these much larger models only yielded a  5 meV/ improvement over the smaller NNs. This agrees with previous experiments in literature, which found deeper/wider NNs have little effect on energy accuracy <cit.>. Nonetheless we observe a clear improvement in force accuracy due to quadratic and NN models over linear models, using the same descriptors. This begs the question: does this improvement in forces translate into accurate dynamics and material properties that require sampling a trajectory? Although it has been noted in literature that force errors in general are not enough to say a potential is reliable for stable MD or property prediction <cit.>, we find for our models here that force error correlates well with liquid structure and phonon property error. The easiest dynamical quantity to reproduce was the 2000 K liquid RDF in Figure <ref>b, where we achieved low RDF MAE (< 0.01 RDF units) with models possessing up to ∼ 120 meV/ force error. More liquid structure data such as angular distribution functions for specific w^F/w^E ratios are shown in the SI. Solid state phonon frequencies, on the other hand, exhibited a pronounced correlation with force error as shown in Figure <ref>c. This is not surprising due to the fact that vibrational frequencies depend on second order derivatives of the PES, so models that best match forces (first derivative) should better capture how force changes with atomic displacement. Indeed, the only potentials capable of reaching 1% frequency error are those with the lowest force errors; quadratic and NN SNAP with j_max = 4. Likewise for thermal conductivity error in Figure <ref>d, the same quadratic and NN SNAP j_max = 4 potentials accurately reproduced thermal conductivity within 10%. These quadratic and NN SNAP j_max = 4 potentials achieved the lowest force errors of 80 meV/ and were the only potentials of simultaneously reproducing all AIMD quantities (liquid structure, phonon frequencies, and thermal conductivity) to within reasonable agreement (0.01 RDF MAE, 1% frequency error, and less than 10% thermal conductivity error). This shows the practical advantage of using atom-centered descriptors such as SNAP with more complex models, and that the 50 meV/ improvement in forces can yield a noticeable improvement in solid/liquid quantities of interest. It is important to note that this does not come with a significant increase in computational cost when simulating MD, as most of the cost is associated with the descriptor calculation. More details on computational cost are reported in the SI, and be aware that software factors such as PyTorch interfaces to allow NN potentials in LAMMPS are constantly changing. Aside from the slightly increased computational cost of MD, researchers should also acknowledge the increased cost of training for NN models. The SVD procedure for training linear SNAP on a data set of  2000 configurations like we use here completes on the order of minutes. Meanwhile, the 1000 epochs of gradient descent training used for our NNs can take 12 hours on a single CPU core with our silicon data set. Linear solvers therefore possess a notable advantage of fast training times, allowing one optimize other hyperparameters like those associated with SNAP descriptors to match more important quantities, such as stability metrics, defect/surface energies, and others <cit.>. Noting these advantages in training cost, one may be attracted to the prospect of using quadratic models, especially since they saturate near the same force errors as compared to NNs. Both quadratic and small NN models with different SNAP descriptors j_max = 3 and j_max = 4 converge at similar force errors near 100 meV/ and 80 meV/, respectively. This might suggest that quadratic and NN models arrive at the same approximation of the ab initio PES, which is important to know because researchers should not waste time fitting both models if they are indeed the same solution. To test whether quadratic and NN models arrive at the same solution, we compare the derivatives of the PES with respect to model descriptors. These first derivatives are the β_jk given in Equation <ref>, and plotted as a function of their respective bispectrum components in Figure <ref>. By comparing model first derivatives with respect to descriptors, we are comparing the shape of the PES up to first order in the descriptors. Some bispectrum components share similar first derivatives between quadratic and NN SNAP models, e.g. Figure <ref>a shows that the clouds of first derivatives of the first bispectrum components significantly overlap for all data points when evaluated by quadratic and NN models trained on the silicon set. The derivative with respect to the second bispectrum component shown in Figure <ref>b, however, shows less overlap. Overall, we find a correlation in the average value of first derivatives with respect to descriptors, shown by the parity plot for quadratic and NN β_jk shown in Figure <ref>c. Quadratic and NN models therefore arrive at similar approximations of the PES up to first order in the descriptors. Up to second order, however, the quadratic and NN models are different, shown by the parity plot in average second derivatives α_j,kl = ∂^2 E_j / ∂ B_jk∂ B_jl. Although quadratic and NN models arrive at different approximations of the PES up to second order, these differences do not significantly affect force, phonon property, or liquid structure accuracy. This may be surprising since quantities like vibrational frequencies directly depend on spatial force derivatives; at some level, properties become insensitive to differences in PES approximations, and we seemed to have reached this point with quadratic and NN SNAP. Though we acknowledge there is a deeper study to be had of which material properties would be sensitive to these differences, we highlight the unique solutions provided by these two model forms. It therefore might be advantageous for researchers to fit both models and retain all fits even though they have similar errors, because differences up to second or higher orders may be responsible for other phenomena not studied here, such as improvements in stability or extrapolation to much different phase spaces. Exploring the differences in accuracy for other systems will also show whether the agreement between quadratic and NN models is consistent. Since silicon is a relatively simple single element system, we can introduce training complexity by considering a system with two element types. To that end, we see if similar model trends arise by considering our gallium nitride AIMD data set. § GALLIUM NITRIDE THERMAL TRANSPORT GaN is an important material in power electronics where large heat fluxes limit performance in devices <cit.>. Phonon transport in GaN is therefore well studied and there are reports in literature of accurate potentials for GaN <cit.>. Here we quantify the improvement of quadratic and NN models over linear models like we did for Si. We perform this comparison using descriptors that treat multi-element systems to see if NNs offer noticeable gain in accuracy with different types of descriptors. Specifically, we now add ACE descriptors to the set of comparisons due to their explicit treatment of multi-element pairs <cit.>. In the ACE formalism each pair of atoms has its own set of descriptors and model coefficients based on element type. For example, GaN has 4 sets of coefficients encompassing Ga-Ga, Ga-N, N-Ga, and N-N interactions. Meanwhile, traditional "weighted density" SNAP reduces all element types in the environment onto a single descriptor value, with separate model coefficients reserved only for distinct elements as the central atom instead of element-element pairs. There is also an explicit multi-element version of SNAP, but we have not introduced this formalism with NNs within FitSNAP <cit.>. Due to explicit multi-element treatment it is therefore expected that ACE can achieve lower errors compared to weighted density SNAP; this has been noted recently when comparing ACE with SNAP and many other potentials <cit.>. When feeding ACE and SNAP descriptors into NNs, however, we use the same multi-element treatment of atom-centered NN models. Here we assign a unique NN to each central atom based on its type, as common in literature for ANI <cit.> and Behler-Parrinello NNs <cit.>. The force/energy tradeoff for GaN using SNAP descriptors with various models, and ACE descriptors with linear and NN models, is shown in Figure <ref>. Linear ACE saturates at a lower force error than linear SNAP. This is not surprising, especially since the ACE settings we used result in 120 descriptors for each element type pair, while SNAP j_max = 3 and j_max = 4 involve only 31 and 56 descriptors for each element, respectively. Our ACE basis could therefore be thought of as more comprehensive than our SNAP basis used here. One may therefore expect less of an improvement in errors when considering a nonlinear ACE model, since the descriptors already offer a comprehensive description of the environment. Indeed, NN ACE only exhibits an improvement in forces from  80 meV/ to  60 meV/ when keeping descriptors constant. This improvement is also limited by our implementation of multi-element NNs; linear ACE possesses unique coefficients for each element type pair but our NNs are unique for each element type. It would be more commensurate to use different NNs for each element type pair, which should offer more flexibility. Nonetheless we see less of an improvement compared to SNAP descriptors, which experience almost a two-fold reduction in force errors as seen in Figure <ref>. With SNAP descriptors, we see the same saturation of quadratic and NN models at similar force errors, as we saw in Si. This improvement in forces has a measurable effect on vibrational frequencies and thermal transport, as shown in Figure <ref>. In Figure <ref> we see a clear positive correlation between force error and phonon frequency error averaged over symmetry directions in the Brillouin zone, with more details in the SI. In Figure <ref> we observe a weaker correlation between force error and thermal conductivity error. This is expected since thermal conductivity is a higher order property and therefore some potentials may obtain small errors for physically improper reasons, such as cancellation of errors due to improper phonon contributions. There is therefore less of a correlation compared to frequencies, as we also saw for Si. Nonetheless, quadratic and NN models with j_max = 4 offer enough flexibility to simultaneously achieve low frequency errors of 1% and thermal conductivity errors < 10%. The two-fold improvement in forces when using SNAP descriptors with quadratic and NN models is therefore important for describing phonon transport in GaN. We note that past efforts in modelling GaN involved the use of Taylor expansion potentials whose fitting coefficients are the PES spatial derivatives, which allowed nearly exact reproduction of ab initio phonon dispersion <cit.>. The potentials made here may not be as computationally efficient as low order Taylor expansions but retain the ability to be used in scenarios beyond the solid phase that Taylor expansions are limited to, such as melting or diffusion. § LITHIUM ION DIFFUSIVITY Si and GaN are still relatively simple systems from a molecular modelling standpoint and therefore served as good candidates for observing improvements in accuracy, without confounding variables such as complicated chemical forces or other difficult training data. These systems alone, however, may not be as commensurate with other efforts of the community to study systems with more element types and more complicated dynamical properties. We therefore seek evermore difficult multi-element systems and dynamical properties to benchmark our linear and nonlinear models on. For this purpose we turn to Li ion conductors, which were recently used in literature to exhibit profound ability to match AIMD properties with state-of-the-art graph neural network potentials <cit.>. Li ion superionic conductors are solids that exhibit Li diffusivity on par with liquid electrolytes <cit.>. These materials are of recent interest in the energy storage community because they may contribute to the commercialization of all-solid-state batteries <cit.>. It is therefore of interest for researchers to study the atomistic mechanisms allowing for superionic diffusion, which may allow engineering of materials with favorable diffusion or synthesis properties <cit.>. To achieve this level of atomistic insight, interatomic potentials for Li superionic conductors are required. ML potentials for Li ion systems to date use deep neural networks that resulted in excellent agreement with AIMD diffusion <cit.>. These systems remain somewhat challenging for the ML potential community, with only a few NN potentials available. It may therefore be believed that NN models are required to model such systems with 3-4 element types and unique diffusion mechanisms. This model complexity of existing Li ion potentials therefore suggests that energies and forces cannot be represented by low order functional forms of atom-centered descriptors. To test whether NNs offer advantages for modelling Li superionic conductors as we saw with Si and GaN, we curate an AIMD data set for LGPS. This dataset contains AIMD trajectories from 300 K to the melting point of around 2000 K, with more details in the SI. We trained a variety of potentials to 5,000 AIMD configurations sampled across these temperatures. From our experience with Si and GaN in Figures <ref> and <ref>, we compare force saturation errors since this is a fair comparison for how well the models can fit the PES. Force errors associated with our best fits are tabulated in Table <ref>. Validation will be performed by simulating ion diffusion at 600 K. Here we use linear SNAP with j_max = 4 and NN SNAP with j_max = 3 and j_max = 4. To benchmark more linear models on Li conductors, we include another recently developed potential based on proper orthogonal decomposition <cit.>. POD descriptors use a form of proper orthogonal decomposition traditionally applied in continuum mechanics, formulated for discrete atomistic applications. Our POD potential here has a total of 3314 coefficients composed from 11 2-body descriptors and 80 3-body descriptors. These POD descriptors treat multi-element interactions such that each atom pair or triplet has its own coefficients, but use the same basis functions, so that the computational cost is independent of the number of elements. The number of fitting coefficients for our linear SNAP model is number of descriptors times number of elements resulting in 56 × 4 = 224 fitting coefficients. It is therefore not surprising that linear POD alone obtains much better force errors than linear SNAP. As has been seen before with Si and GaN, simply adding more descriptors to a model results in a notable increase in accuracy. Adding SNAP descriptors to POD for LGPS also reduced force error from 63.3 meV/ to 48.1 meV/. Note there are diminishing returns on accuracy with increased basis size, as was mentioned in Wood et al. <cit.>. To observe the effect of model complexity on simulating ion diffusion, we first focus on SNAP descriptors input to linear and NN models. MD simulation results for our LGPS data set and a literature LiPS data set are shown in Figure <ref>. From <ref> we see a clear improvement in MD simulations of ion diffusion using NN SNAP compared to linear SNAP. It is important to note that these comparisons involved the same SNAP descriptors for j_max = 4; we did not optimize SNAP hyperparameters for linear models as was successfully done for other complicated multi-element systems <cit.>. If we chose this route for linear SNAP, it may be possible to include metrics like RDF or MSD as objective functions, and use multi-objective optimization of SNAP hyperparameters to find linear fits that perform as well as NNs. Such a procedure, however, was not required for NN SNAP. Overall for LGPS, NN SNAP exhibits better agreement with AIMD Li ion diffusivity compared to linear SNAP, as well as a nearly two-fold improvement in force errors. For LiPS, we use the data set available in literature which consists of 520 K AIMD trajectories <cit.>. We were unable properly stabilize LiPS using linear SNAP with this data set; for LGPS we required high temperature AIMD data up to 1600 K for stabilizing linear SNAP. This is seen by the disagreement of RDFs in Figure <ref>c. NN SNAP, on the other hand, does not require high temperature configurations to stabilize this system; the existing LiPS data set in literature was therefore sufficient to accurately model the structure of Li ions. In Figure <ref>d we see the improvement offered by using NN SNAP with 56 descriptors (j_max = 4) compared to 31 descriptors (j_max = 3). The best NN SNAP potential exhibits much closer agreement to the AIMD diffusion curve compared to linear SNAP, although not as well as the state-of-the-art graph based NequIP model. This is not suprising considering the extremely impressive force MAE of 4.7 meV/ achieved by NequIP for LiPS <cit.>. Our best NN SNAP model obtained a force MAE of  80 meV/ with computational cost of ∼ 1 × 10^-4 s per timestep per atom per CPU core. Figure <ref> might suggest that nonlinear models are simply better or necessary for modelling mass transfer in complex multi-element systems, but this may not be the case. A linear basis with sufficiently detailed descriptors can also achieve the same level of AIMD agreement. We demonstrate this for LGPS using the recently developed POD descriptors with a linear model. LGPS Li ion diffusion simulations using POD <cit.> are shown in Figure <ref>. As seen in Figure <ref>, linear POD exhibits excellent agreement with AIMD diffusion like NN SNAP. Our results here show, for the first time, the ability of linear models to accurately model diffusion in Li superionic conductors. Linear models possess pros and cons compared to deep learning methods, so this is a worthy addition to the molecular modelling toolkit used by researchers. If more accuracy is ever desired, our results with SNAP descriptors show that simply feeding these features into a NN can result in a two-fold improvement in force accuracy. The resulting impact on simulated properties is positive, evidenced by our investigation of Si, GaN, LGPS, and LiPS. § DISCUSSION We observed improvement in simulated properties calculated with nonlinear models that use the same features as linear models. This improvement is attributed to the enhanced force accuracy offered by more flexible nonlinear models compared to linear models. A fair comparison to quantify this force improvement was achieved by varying loss function force/energy weight hyperparameters, where it was found that all models saturate at a different agreement in forces. This observation of a force saturation level may be used by researchers to fairly determine which models best match the shape of the PES, irrespective of loss function energy/force weight hyperparameters. Our observation on the correlation between force accuracy and simulated property accuracy can also aid researchers in determining a force threshold required for their particular application; this is important if researchers want to choose the simplest or cheapest model for the task at hand. The correlation between force accuracy and simulated property accuracy is best observed with properties that are less prone to cancellation of errors, such as vibrational frequencies. Higher order properties that are measured by a thermodynamic sampling of states such as thermal conductivity, mass diffusivity, and liquid structure, on the other hand, are known to exhibit cancellation of errors where potentials can achieve poor force/energy errors but still produce accurate property values <cit.>. It is also important to note that force/energy accuracy is not always enough for obtaining usable potentials that perform stable MD simulations and property prediction <cit.>. For our data sets and models, however, we showed that the improvement in force errors when using NNs does not sacrifice MD stability. We observe here that our potentials with most accurate forces exhibit the best property agreement. This was achieved with Si, GaN, and LGPS by simply feeding descriptors traditionally used with linear models, such as SNAP and ACE, into quadratic or NN models. Despite the improved accuracy from using NNs with a given set of atom-centered descriptors, it is important to note that linear models with a sufficient basis and larger number of fitting coefficients can outperform NNs with a less descriptive basis or fewer features. We showed this using the more complicated Li ion systems. For LGPS and LiPS, an original set of 56 SNAP descriptors input to a linear model was not sufficient to accurately model diffusion. With the smaller diversity in training data for the LiPS set <cit.>, we even had difficulty stabilizing the linear model for MD simulations. Using this original set of 56 SNAP descriptors with a NN model greatly improved stability along with the simulated mass diffusivity. This does not mean NNs are required to obtain such accuracies, however, as we illustrated with the recently developed POD descriptors <cit.>. Our POD potential involves a linear model that uses a total of 91 descriptors and 3314 fitting coefficients; this resulted in better force errors than NN-SNAP with 56 descriptors input to a 64 × 32 NN architecture, and similar agreements in mass diffusivity. With the systems we studied here, we therefore cannot claim that particular materials or properties always require nonlinear model forms for reliable simulations. For simpler/smaller sets of descriptors, however, nonlinear model forms are required to achieve the desired level of ab initio property agreement showed in this study (1% error in phonon frequencies, 0.01 MAE in RDF, and < 10 % error in thermal conductivity and mass diffusivity). Nonetheless, the results herein show that a linear model with a sufficient basis and number of fitting coefficients can achieve similar results compared to NNs. It remains to be seen in future work if this will always be the case for systems that are more complicated than the four element LGPS system studied here, such as rare earth metals <cit.>, metal organic frameworks <cit.>, or many-element alloys <cit.>. Aside from more complicated chemical systems, NNs might also possess an advantage in more complicated simulated properties. Indeed, our study is limited to properties at equilibrium such as liquid structure, phonon frequencies, and transport properties. All of these properties require small extrapolations from an equilibrium structure. It is therefore paramount that researchers heavily weight the equilibrium structure when making potentials for equilibrium properties. For example, calculating phonon frequencies on a structure with non-zero forces will result in negative frequencies. This can be alleviated by assigning larger weights in the loss function of Equation <ref> for the equilibrium configurations, or oversampling the equilibrium configurations when training. For non-equilibrium phenomena, however, one may need to sufficiently model multiple equilibrium states as well as transitions between these states. In such non-equilibrium scenarios, active learning approaches may also be necessary to gather appropriate data, instead of fitting purely to AIMD simulations at equilibrium like we did here. This is currently a topic of work applied to NN and linear models <cit.>. § CONCLUSION AND OUTLOOK Overall we sought to answer the original question of how much gain in accuracy can be expected by feeding descriptors into different ML models, where we saw up to a 50% improvement in force accuracy when using nonlinear models such as NNs compared to linear models. For the Si, GaN, and Li ion systems studied here, this resulted in significant improvement in simulated property errors with respect to AIMD simulations. We can therefore scale up ab initio accuracy to significantly larger length and time scales by simply taking existing and widely used descriptors and feeding them into nonlinear models. Despite this improvement when using nonlinear models with SNAP and ACE descriptors, we also showed that linear models with a sufficient basis can achieve property simulations on par with NNs. We showed this using linear POD with more fitting coefficients than our NN models. This is an important result because linear and NN regression both have unique advantages and disadvantages, and molecular modelling researchers may use both of these capabilities with the publicly available tools developed herein. A fruitful topic for future work is to investigate whether simpler descriptors (e.g. 56 SNAP descriptors) input to complicated models (e.g. NN) possess advantages/disadvantages compared to complex descriptors (e.g. thousands of POD or ACE descriptors) input to simple models (e.g. linear). There may be other advantages and disadvantages of linear and NN models not studied in detail here, such as MD stability/usability or extrapolation ability in non-equilibrium events such as chemical reactions. The ability to use both linear and nonlinear models expands the community toolbox for such future studies, and for creating potentials that describe a variety of systems/scenarios. To this end, we created FitSNAP as an open-source software possessing all the abilities shown in this manuscript; fitting linear and NN models with different descriptors such as SNAP or ACE, then immediately deploying the model for high performance MD in LAMMPS <cit.>. Our linear POD potential is also available as a LAMMPS package where users can perform linear regression with training data and immediately use the potential after. To conclude, we presented a variety of potentials available for researchers in the open-source LAMMPS/FitSNAP ecosystem and benchmarked expected improvements in fitting and simulation accuracy from using such models for simulating transport properties. § AUTHOR CONTRIBUTIONS A.R. developed software to support ML models in FitSNAP and LAMMPS, along with curating data sets and property simulations. J.G. implemented ACE descriptors in FitSNAP. D.S. further aided in training and development. C.N. trained POD potentials and aided with the LAMMPS implementation in the ML-POD package. K.G. and A.H. helped curate LGPS training data and mass diffusion calculations. A.P. and M.W. led the development of FitSNAP and LAMMPS to support new features seen in this study. § METHODS All models with SNAP and ACE descriptors were trained using the FitSNAP software, which we provide open-sourced with full documentation on linear and NN training procedures. The POD potential was trained using least squares on a linear system of equations, as implemented in the LAMMPS ML-POD package. NN models were trained using a modified form of iterated backpropagation <cit.>. The iterated backpropagation algorithm we implemented in FitSNAP is as follows. * Calculate all descriptors and their spatial derivatives in LAMMPS. * Perform a forward pass that builds a computational graph in an automatic differentation framework; we use PyTorch <cit.>. * Perform a backward pass to obtain the derivatives of the NN output with respect to inputs i.e. the array of values β_jk defined in Eq. <ref>. * Using the array β_jk, calculate atomic forces in LAMMPS with Eq. <ref> * Perform a second forward pass in PyTorch to calculate loss function defined in Eq. <ref> * Perform a second backward pass to obtain loss function derivatives with respect to model coefficients, which are used in gradient descent minimization. A key aspect of this iterated backpropagation method is that it eliminates the need to store gradients of model outputs with respect to model coefficients for the entire batch. Instead, we only need to store loss function derivatives with respect to model coefficients. Previous implementations of force training were strongly limited by available physical memory <cit.>. Relaxing this constraint makes it possible to explore more diverse combinations of model complexity and force training protocols, such as larger models and/or batch sizes. § ACKNOWLEDGEMENTS This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. A.R., J.G., A.T., and M.W. acknowledge funding support from the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration, and U.S. Department of Energy, Office of Fusion Energy Sciences (OFES) under Field Work Proposal Number 20-023149. Dr. Nguyen and Dionysios Sema acknowledge the United States Department of Energy under contract DE-NA0003965. Dr. Nguyen also acknowledges the Air Force Office of Scientific Research under Grant No. FA9550-22-1-0356 for supporting his work. K.G. and A.H. acknowledge support from the National Science Foundation (NSF) career award to A.H. (award no. 1554050) and the Office of Naval Research (ONR) under a MURI program (grant no. N00014-18-1-2429). This article has been authored by an employee of National Technology and Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan https://www.energy.gov/downloads/doe-public-access-plan. § DECLARATION OF COMPETING INTERESTS The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § DATA AVAILABILITY Training data, FitSNAP input scripts, and LAMMPS input scripts for mass diffusion simulations are available on our FitSNAP GitHub page.
http://arxiv.org/abs/2306.03595v1
20230606113512
Transversals via regularity
[ "Yangyang Cheng", "Katherine Staden" ]
math.CO
[ "math.CO" ]
0pt 0pt 40pt 10pt 0pt 10pt 8.8in 6.6in C>c<L>l<[enumerate]topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex[itemize]topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1explaintheoTheorem[section] prop[theo]Propositionlemma[theo]Lemmacor[theo]Corollaryconj[theo]Conjectureclaim[theo]Claimprob[theo]Problemques[theo]Questiondefinitiondefn[theo]Definitionrem[theo]Remarkeg[theo]Examplealg[theo]Algorithmset[theo]Setting Transversals via regularity Yangyang ChengMathematical Institute, University of Oxford, Oxford, UK. Email: [email protected] Cheng was supported by a PhD studentship of ERC Advanced Grant 883810. Katherine StadenSchool of Mathematics and Statistics, The Open University, Walton Hall, Milton Keynes, UK. Email: [email protected] Staden was supported by EPSRC Fellowship EP/V025953/1. July 31, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================== Given graphs G_1,…,G_s all on the same vertex set and a graph H with e(H) ≤ s, a copy of H is transversal or rainbow if it contains at most one edge from each G_c. We study the case when H is spanning and explore how the regularity blow-up method, that has been so successful in the uncoloured setting, can be used to find transversals. We provide the analogues of the tools required to apply this method in the transversal setting. Our main result is a blow-up lemma for transversals that applies to separable bounded degree graphs H. Our proofs use weak regularity in the 3-uniform hypergraph whose edges are those xyc where xy is an edge in the graph G_c. We apply our lemma to give a large class of spanning 3-uniform linear hypergraphs H such that any sufficiently large uniformly dense n-vertex 3-uniform hypergraph with minimum vertex degree Ω(n^2) contains H as a subhypergraph. This extends work of Lenz, Mubayi and Mycroft. § INTRODUCTION §.§ Transversal embedding The problem of deciding whether an n-vertex graph G contains a given subgraph H is a central topic in graph theory. Since this problem is NP-complete, much of the research on this topic has focused on finding sufficient conditions on G that guarantee the presence of H. Given a graph parameter π, we seek the best possible bound π_H,n such that if π(G) > π_H,n, then G contains a copy of H, whereas there are graphs G with π(G)≤π_H,n which are H-free. In this paper, we investigate the generalisation of this problem to graph collections, also known as a multilayer graph. Here, we are given graphs G_1,…,G_s on the same vertex set, where s ≥ e(H), and we seek a transversal copy of H, which is a copy of H containing at most one edge from each of the graphs G_1,…,G_s. We often think of each G_c having the colour c, so a transversal copy of H is also called a rainbow copy. Again, we seek a best possible condition on π which guarantees the existence of H. That is, we seek the minimum π_H,n^s such that if G_1,…,G_s are any graphs on the same vertex set of size n and π(G_c) > π_H,n^s for all 1 ≤ c ≤ s, then there is a transversal copy of H. Observe that we recover the original `uncoloured' problem when G_1=…=G_s, so the transversal embedding problem is indeed a generalisation. Ideally, we take s=e(H) graphs in the collection and are most interested in determining π_H,n^ col := π_H,n^e(H). There has been a lot of recent progress in this area. The central question about transversal embeddings, posed by Joos and Kim <cit.>, is whether the addition of colours changes the answer to the problem. That is, given some graph parameter π, when do we have π_H,n = π_H,n^ col? When equality does hold, we say that the embedding problem is colour-blind. We already have a non-example of colour-blindness in the case of the triangle and the size parameter. Mantel's theorem from 1907 states that e(G)>⌊ n^2/4 ⌋ guarantees that an n-vertex graph G contains a triangle, whereas the complete balanced bipartite graph shows that this is best possible. So e_K_3,n=⌊ n^2/4⌋. However, Aharoni, DeVos, de la Maza, Montejano and Šámal <cit.> proved that whenever graphs G_1,G_2,G_3 on the same set of n vertices satisfy e(G_c)>an^2 for all c=1,2,3, where a:=26-2√(7)/81≈ 0.2557>1/4, then there is a rainbow triangle, while there is an example which shows a cannot be decreased. That is, e_K_3,n^ col = ⌊ an^2 ⌋. The transversal embedding problem for larger cliques is still open. When one is interested in embedding a spanning graph, it makes sense to consider minimum degree conditions rather than size conditions, as, for example, a graph could be very dense but contain an isolated vertex, and therefore not contain a copy of any given spanning graph with minimum degree at least 1. Let M_n and C_n be the perfect matching and Hamilton cycle on n vertices respectively. Joos and Kim <cit.> proved that _C_n,n^ col=_C_n,n and _M_n,n^ col=_M_n,n (which both have the common value =⌈ n/2⌉-1); that is, Hamilton cycle embedding and matching embedding are both colour-blind with respect to minimum degree. An easier question is to ask for approximately best possible conditions. For minimum degree, this means we would like to find _H and _H^ col, where * _H is the minimum such that for all >0, any sufficiently large n-vertex graph G with (G) ≥ (+) n contains a copy of H; and * _H^ col is the minimum such that for all >0, any collection G_1,…,G_e(H) of sufficiently large n-vertex graphs with (G_c) ≥ (+) n for all 1 ≤ c ≤ e(H) contains a transversal copy of H. So the results of <cit.> imply that ^ col_C_n=^ col_M_n=1/2; this approximate version was earlier proved by Cheng, Wang and Zhao <cit.>. If _H=_H^ col, then we say that H is approximately colour-blind. Montgomery, Müyesser and Pehova <cit.> determined which F-factors are approximately colour-blind for any given small graph F, and showed that spanning trees with maximum degree o(n/log n) are approximately colour-blind. They observed that some spanning graphs are very far from being colour-blind: taking H to be the disjoint union of copies of K_2,3∪ C_4, it holds that _H ≤4/9 by a result of Kühn and Osthus <cit.>, whereas _H^ col≥1/2. Gupta, Hamann, Müyesser, Parczyk and Sgueglia <cit.> showed that powers of Hamilton cycles are approximately colour-blind. They also showed, improving results of Cheng, Han, Wang, Wang and Yang <cit.>, that Hamilton ℓ-cycles in k-uniform hypergraphs are approximately `d-colour-blind' for some range of ℓ, k and d, which means with respect to minimum d-degree, which we do not define here. The aim of <cit.> was to provide a fairly general approach for transversal embedding problems that used the corresponding uncoloured embedding result as a black box. Roughly speaking, their approach is designed to work when the graph H to be embedded is made up of small almost-disconnected blocks: they prove results for F-factors, which are made up of vertex-disjoint copies of F, and for trees which are made up of an ordered sequence of small subtrees each sharing one vertex with a previous tree. Similarly, <cit.> gave a widely applicable sufficient condition for approximate colour-blindness, which applies to graphs obtained by `cyclically gluing' copies of a smaller graph (e.g. a Hamilton cycle is obtained by `cyclically gluing' copies of an edge). Nevertheless, the H to which these two results apply are fairly specific, and in this paper we build on <cit.> to develop a method for H which are `more connected' than the above. Very recently, during the preparation of this paper, Chakraborti, Im, Kim and Liu <cit.> made important progress in this direction by proving a `transversal bandwidth theorem'. A graph has bandwidthb if there is an ordering v_1,…,v_n of its vertices such that |i-j| ≤ b whenever v_iv_j is an edge. Their result states that if a bounded degree graph H of sublinear bandwidth has chromatic number k, then ^ col_H≤ 1/k. This extends several of the above results, and is asymptotically best possible for many graphs H, as well as generalising the original bandwidth theorem of Böttcher, Schacht and Taraz <cit.> for single graphs. The proof uses similar ideas to the proof of the main result of our paper, a transversal blow-up lemma, which we introduce in the next section. §.§ The regularity-blow-up method for transversals The so-called `regularity-blow-up method' has been employed to prove many results concerning the embedding of a given spanning subgraph H in a large graph G. Such proofs typically run along the following lines. Apply Szemerédi's regularity lemma <cit.> to obtain a constant-size `reduced graph' R of G, which approximates the structure of G: vertices of R correspond to disjoint vertex clusters in G, and edges of R correspond to regular pairs of vertex clusters, which informally means that the edges between them are randomlike and therefore easy to embed into. There may be a small number of `exceptional' vertices which are not part of this structure. Next, find a suitable subgraph H' of R which is simpler than H, and often consists of many small connected components (whereas H could be connected). Next, embed small pieces of H which connect the components of H'. At this stage, some vertices may need to be moved from the structure into the exceptional set to make clusters balanced and to ensure regular pairs have sufficiently large minimum degree (they are superregular). Then incorporate the exceptional vertices: for each such vertex v find a component of H' where v has many suitable neighbours and a few vertices inside the H'-structure which can be used along with v to embed a small part of H. Finally apply the blow-up lemma of Komlós, Särkózy and Szemerédi <cit.> to embed most of H into H' allowing for the restrictions imposed by the initial embedding. The blow-up lemma states that, for the purposes of embedding a possibly spanning bounded degree graph, a regular pair behaves the same as a complete bipartite graph. The primary goal of this paper is to provide the tools needed to apply the regularity-blow-up method to obtain transversal embeddings, following the same steps as above. The basic idea is that one can think of a graph collection G=(G_c: c ∈) with common vertex set V as the 3-uniform hypergraph G^(3) with vertex and edge sets V(G^(3)) = V ∪ and E(G^(3)) = {xyc: xy ∈ G_c}. This natural idea has been noticed and used before, for example in <cit.>, where it was noted that a result of Aharoni, Georgakoupoulos and Sprüssel <cit.> on matchings in k-partite k-uniform hypergraphs can be used to find a transversal matching in a bipartite graph collection; and in <cit.>, where the weak regularity lemma was applied to G^(3) to find an almost spanning clique factor. In this paper, we take the idea further by using this perspective to provide the tools one would require to use the regularity-blow-up method for graph collections. To this end, we * define regularity, superregularity, reduced graphs and provide a regularity lemma for graph collections; * state some of the standard tools for regularity arguments, such as a `slicing lemma' which states that regularity is inherited in subpairs of regular pairs, degree inheritance in the reduced graph, and various embedding lemmas with target sets, candidate sets and prescribed colours; * prove a blow-up lemma for `separable graphs', which we define shortly. The first item is mainly a convenient reformulation of weak regularity for 3-uniform hypergraphs and does not require any new ideas; nonetheless the statements are useful to have. The third (which builds upon the tools developed in the second item) is our main contribution, and the proof uses the original blow-up lemma combined with colour absorption ideas from <cit.>. We envisage that our blow-up lemma will be useful in solving various transversal embedding problems, making it a viable alternative to the commonly used absorption technique. Furthermore, there are cases where the blow-up lemma is a more appropriate tool. For instance, when attempting to embed a given transversal 2-factor (a spanning 2-regular subgraph) or, more generally, a graph with low bandwidth, the absorption technique is not as effective. In such situations, the blow-up lemma can still be employed for successful embedding. For example, our blow-up lemma can be used to embed powers of Hamilton cycles, providing an alternative proof of a result in <cit.>. Abbasi <cit.>, proving a conjecture of El-Zahar <cit.>, used the original blow-up lemma to obtain the best possible minimum degree bound in a large graph containing a given 2-factor; our blow-up lemma could be useful in proving a transversal version of this result. Another notable advantage of the blow-up lemma is its ability to aid in characterising extremal constructions. This is helpful for determining exact bounds in these types of problems, by utilising the stability method commonly employed in non-transversal graph embeddings. This natural problem was recently raised in <cit.>. We discuss applications to transversal embedding problems further in Section <ref>. The secondary goal of the paper is to apply our transversal blow-up lemma to embeddings in uniformly dense 3-uniform hypergraphs. We introduce these `randomlike' hypergraphs, and our results in this direction, in Section <ref>. Indeed, our blow-up lemma can be formulated as a blow-up lemma for 3-uniform hypergraphs. Consequently, there is also potential to employ this lemma in generalising results such as those of Kühn and Osthus <cit.> who studied the problem of embedding loose Hamiltonian cycles under vertex degree conditions. All our results apply to `separable graphs' with bounded maximum degree. An n-vertex graph H is μ-separable if there is X ⊆ V(H) of size at most μ n such that H-X consists of components of size at most μ n. Separable graphs (with suitable small μ) include F-factors for fixed F, trees, 2-regular graphs, powers of a Hamilton cycle, and graphs of small bandwidth. §.§ Transversals in uniformly dense graph collections Our first main result concerns transversal embeddings of spanning graphs inside a quasirandom graph collection. A `quasirandom' condition is one possessed by a random graph of a similar density. Our condition is that for every linear subset of vertices and linear subset of colours, the total number of edges in these colours spanned by the subset is large. A graph collection satisfying this condition is said to be uniformly dense. We also require that the number of edges of each colour and the total degree of each vertex is Ω(n^2), where the graph collection has a common vertex set of size n. This condition cannot be completely removed, since if a colour is not present or if a vertex is isolated, we certainly cannot find a transversal copy of a spanning graph H. A graph collection satisfying both conditions is super uniformly dense, in analogy with regular and superregular. We are interested in the following question: for which (spanning) graphs H must any super uniformly dense graph collection contain a transversal copy of H? Given d,η>0, we say that a graph collection G=(G_c: c ∈) with vertex set V of size n is (d,η)-dense if for all A,B ⊆ V and ' ⊆, we have ∑_c ∈'e(G_c[A,B]) ≥ d|'||A||B| - η n^3, where e(G_c[A,B]) denotes the number of pairs (x,y) ∈ A × B for which {x,y} is an edge of G_c. We informally refer to a graph collection G on n vertices that is (d,η)-dense for parameters 0<1/n ≪η≪ d ≤ 1 as uniformly dense, and if additionally there is ≫η for which ∑_c ∈d_G_c(v) ≥|| n for all v ∈ V and e(G_c) ≥ n^2 for all c ∈, then G is super uniformly dense. Note that the condition is vacuous unless A,B and ' are of linear size (in the number n of vertices). We prove that a transversal copy of a separable graph can be found in a super uniformly dense graph collection. For all ,d,,>0, there are η,μ>0 and n_0 ∈N such that the following holds for all integers n ≥ n_0. Let G=(G_c: c ∈) be a (d,η)-dense graph collection on a common vertex set V of size n, where || ≥ n, and suppose that for all v ∈ V we have ∑_c ∈d_G_c(v) ≥||n, and for all c ∈ we have e(G_c) ≥ n^2. Then G contains a transversal copy of any given μ-separable graph H on n vertices with (H) ≤ and e(H) = ||. This result is a consequence of a more general transversal blow-up lemma, which we state in the next section. First, we discuss possible extensions to other colour patterns. One can ask whether a graph collection contains a copy of H with a given colour pattern, extending the monochromatic case (a copy in a single graph G_c) and rainbow case (a copy with one edge from each G_c). For example, the following was shown in <cit.>. Not only does the same minimum degree (2/3+o(1))3n which guarantees a triangle factor in a large graph with a vertex set of size 3n in fact guarantee a transversal copy when ||=3n, it also guarantees a triangle factor where the c-th triangle lies inside G_c (has colour c), when ||=n. Given d,η>0, a (single) graph with a vertex set V of size n is (d,η)-dense if for all A,B ⊆ V, we have e(G[A,B]) ≥ d|A||B|-η n^2. A collection of (d,η)-dense graphs on the same vertex set is a (d,η)-dense graph collection, but the converse does not hold. It is easy to see that, for any d>0, as long as η and 1/n are sufficiently small, such a G contains Ω(n^3) triangles: the condition applied to A=B=V implies that there are Ω(n) vertices v with d_G(v) ≥ dn/2; each one has Ω(n^2) edges in its neighbourhood. However, there are uniformly dense graph collections with d fairly large which do not contain any monochromatic triangles, as the following example shows. (The example is essentially equivalent to one in the setting of uniformly dense hypergraphs – introduced in Section <ref>– of Reiher, Rödl and Schacht <cit.>, which itself has its roots in work of Erdős and Hajnal <cit.>.) Form an auxiliary oriented 2-graph J by letting V, be disjoint, where, say, |V|=||=n is large, and first adding edges between every pair in V, and every pair in (V,). Independently for each edge, choose an orientation uniformly at random. Add the pair xy to G_c with x,y ∈ V and c ∈ precisely when xyc is a cyclic triangle. Then, with probability tending to 1 as n →∞, G is (1/4,o(1))-dense. However, for every triple x,y,z of vertices in V and every colour c ∈, there are at most two edges in G_c[{x,y,z}]. This raises the question of which colour patterns of which graphs H one can expect to find in a (super) uniformly dense graph collection. If d is sufficiently large, then any bounded degree H with any colour pattern can be found; for those H which are not present when d is an arbitrary positive constant, how large must d be to guarantee such a copy of H? We will explore a generalisation of this question in Section <ref>. §.§ A transversal blow-up lemma In this section, we state a simplified version of our blow-up lemma for bipartite graphs. We defer the complete statement to Theorem <ref> at the end of this section. Our blow-up lemma has the advantages that its proof, sketched at the beginning of Section <ref>, is conceptually straightforward, following from the original blow-up lemma and a colour absorption tool introduced in <cit.>; and the lemma should be powerful enough for all of the transversal embedding applications we have in mind. This is since the usual regularity-blow-up method is almost always successfully applied to `non-expanding' separable graphs, since one embeds them by using the blow-up lemma on a series of small pieces. The disadvantage is that there does not seem to be a reason why the separability condition should be necessary. [Simplified transversal blow-up lemma] Let 0 < 1/n ≪,μ≪ d,,1/≤ 1. Let be a set of at least n colours and let G = (G_c: c ∈) be a collection of bipartite graphs with the same vertex partition V_1,V_2, where n ≤ |V_1|≤|V_2|≤ n/, such that * for all V_i' ⊆ V_i for i=1,2 and ' ⊆ with |V_i'|≥|V_i| for i=1,2 and |'| ≥ ||, we have that ∑_c ∈'e_G_c(V_1',V_2') ≥ d|'||V_1'||V_2'|; * for i=1,2 and every v ∈ V_i we have ∑_c ∈d_G_c(v) ≥ d||n and for every c ∈, we have e(G_c) ≥ dn^2. Let H be a μ-separable bipartite graph with parts of size |V_1|,|V_2|, and || edges and maximum degree . Then G contains a transversal copy of H. The theorem requires that the number of vertices and edges of H are comparable, so it does not apply to very sparse H. If one adds edges to H to obtain a suitable denser graph H', and duplicates colours until there are e(H') colours, the copy of H' produced by the theorem will not necessarily contain a transversal copy of H. The general version of the theorem, Theorem <ref>, applies to a graph collection whose graphs have common parts V_1,…,V_r, and a graph R with V(R)=[r] such that the bipartite condition in the theorem holds between all ij ∈ E(R), and each ij has a dedicated set of colours _ij. §.§.§ Rainbow blow-up lemmas There are by now many blow-up lemmas for various settings which have been applied to many embedding problems. Of particular relevance to this paper are rainbow blow-up lemmas which apply to a single graph whose edges are coloured. We say that an edge-coloured graph G is k-bounded if no colour appears on more than k edges, and G is locally k-bounded if each vertex is incident to at most k edges of the same colour. Glock and Joos <cit.> proved a rainbow blow-up lemma for o(n)-bounded edge-colourings, which allows one to find a rainbow embedding of a given bounded degree graph H. Here, the number of colours is many times larger than e(H). Later, Ehard, Glock, and Joos <cit.> proved a similar lemma for locally O(1)-bounded colourings, which allows the number of colours to be (1+o(1))e(H). Therefore it does not seem possible to use such blow-up lemmas for transversal embedding problems where we require exactly e(H) colours. §.§ Applications to embedding in uniformly dense hypergraphs In this section we revisit the hypergraph perspective and discuss the consequences of our work to embeddings in 3-uniform hypergraphs. `Weak regularity' is the straightforward generalisation of Szemerédi regularity from (2-uniform hyper)graphs to k-uniform hypergraphs (hereafter referred to as k-graphs), and the `weak regularity lemma' was proved by Chung <cit.> in much the same way as the original <cit.>. However, this lemma is not powerful enough to prove many of the analogues of graph results which one would like. In particular, there is no general counting lemma which guarantees that the number of copies of a given small graph F in a regularity partition is similar to what one would expect if pairs were replaced by uniform random hyperedges, with the same density. Thus the `strong regularity lemma' was developed, which uses, as the name suggests, a more complicated and much stronger notion of regularity, and does have an associated counting lemma; this lemma is much more applicable than its weaker counterpart. It was shown by Conlon, Hàn, Person and Schacht <cit.> and independently Kohayakawa, Nagle, Rödl and Schacht <cit.> that weak regularity is in fact strong enough to give a counting lemma for linear hypergraphs, which have the property that |e ∩ f| ≤ 1 for all distinct hyperedges e,f. As a transversal copy of a (simple) graph inside a graph collection G is a linear subhypergraph of the associated 3-graph G^(3), weak regularity is an effective tool for transversal embedding problems. And conversely, the tools developed in this paper are useful for embedding 3-graphs. We generalise the definition we gave in Section <ref> for graphs. Let d,η>0 and let k ≥ 2 be an integer. A k-graph is (d,η)-dense if for all subsets U_1,…,U_k ⊆ V, we have e(U_1,…,U_k) ≥ d|U_1|… |U_k| - η n^k, where e(U_1,…,U_k) is the number of k-tuples (x_1,…,x_k) ∈ U_1×…× U_k for which {x_1,…,x_k} is a hyperedge. We informally refer to a k-graph G on n vertices that is (d,η)-dense for parameters 0<1/n ≪η≪ d ≤ 1 as uniformly dense. If there is ≫η for which additionally d_G(v) ≥ n^k-1 for all v ∈ V(G), then we say that G is super uniformly dense. This is a type of quasirandomness, since a random k-graph of density at least d is (d,o(1))-dense with high probability. Several papers studying uniform density use the term `quasirandom' instead. Following a suggestion of Erdős and Sós <cit.>, a systematic treatment of extremal problems in uniformly dense hypergraphs has been started by Rödl, Reiher and Schacht, in part due to the great difficulty of such problems in general hypergraphs. In <cit.>, they fully answered the zero Turán density question for uniformly dense 3-graphs, i.e. for which F is the following true? For all d>0, there exists η>0 such that every sufficiently large (d,η)-dense 3-graph contains F as a subhypergraph. They showed that these F are precisely those with the following property. There is an ordering v_1,…,v_r of V(F) and a colouring of the set of pairs of vertices contained in edges by red, blue and green, so that whenever v_iv_jv_k ∈ E(F) with i<j<k, we have that v_iv_j is red, v_iv_k is blue and v_jv_k is green. This set of F includes linear F and 3-partite F (Erdős <cit.> proved that a k-graph F has Turán density 0 if and only if F is k-partite), but is much richer than this: for example, the hypergraph obtained by removing one edge from the tight cycle on 5 vertices is such a hypergraph. The simple argument given in Section <ref> shows that for k=2 and >0, a sufficiently large uniformly dense 2-graph contains a copy (in fact many copies) of K_. In contrast, for k ≥ 3 there are very simple k-graphs F which require a fairly large density d to appear in any uniformly dense k-graph. Indeed, there is a (1/4,o(1))-dense 3-graph in which K^(3)-_4, the (unique) 3-graph with 4 vertices and 3 edges, does not appear. (The example is closely related to the one given in Section <ref>.) In this paper, we are interested in which spanning hypergraphs appear in any super uniformly dense hypergraph. Note that we cannot remove `super' since uniform density does not preclude the existence of isolated vertices. Let H = (H_ℓ: ℓ∈N) be a collection of k-graphs where |V(H_ℓ)|=: n_ℓ→∞. For which H is the following true? For all d,>0, there exist η>0 and ℓ_0 ∈N such that for all integers ℓ>ℓ_0, every (d,η)-dense k-graph G on n_ℓ vertices with d_G(v) ≥ n_ℓ^k-1 for all v ∈ V(G) contains H_ℓ as a subhypergraph. For k=2, the blow-up lemma of Komlós, Sárközy and Szemerédi <cit.> implies that every uniformly dense graph contains as a subgraph any given spanning graph H with bounded degree and sublinear bandwidth (as observed by Glock and Joos, see Theorem 9.3 in <cit.>). For general uniformities k, Lenz and Mubayi <cit.> proved that uniformly dense k-graphs contain an F-factor for any given fixed size linear F. In fact they found non-linear F for which uniform density is sufficient, and `almost-linear' F for which it is not. Ding, Han, Sun, Wang and Zhou <cit.> completed this work for k=3 by characterising those F for which uniform density guarantees an F-factor, and for general k they also obtained a characterisation for k-partite F. Lenz, Mycroft and Mubayi <cit.> showed that uniform density guarantees a loose cycle. To the best of our knowledge, there are no further results on embedding spanning F in uniformly dense hypergraphs of arbitrarily small positive density. Our next main result generalises the result of <cit.> for k=3 by providing a new family of spanning H which can be found in super uniformly dense 3-graphs. An expanded graph is obtained from a 2-graph by replacing every edge e=xy by some 3-edges xyc_1,…,xyc_t_e, where all the new vertices c_1,…,c_t_e are distinct, and the number t_e of new edges/vertices for the edge e depends on e. If every t_e is equal to t then we call this a t-expansion. For example, a loose 3-uniform cycle is an expanded cycle where each edge is replaced by one expanded edge, that is, a 1-expansion of a cycle. The 1-expansion of a (simple) graph H is linear and has |V(H)|+|E(H)| vertices. For all ,d,>0 there are η,μ>0 and n_0 ∈N such that the following holds for all integers n ≥ n_0. Let G be a (d,η)-dense 3-graph on n vertices with d_G(v) ≥ n^2 for all v ∈ V(G). Then G contains a copy of the 1-expansion of any given μ-separable graph H with (H) ≤ and |V(H)|+|E(H)|≤ n. Since a large 2-regular graph (a union of vertex-disjoint cycles) is o(1)-separable, the theorem implies that any super uniformly dense 3-graph G contains as a subhypergraph any 3-graph consisting of vertex-disjoint loose cycles. Next, we reformulate Theorem <ref> as a blow-up lemma for 3-graphs, which may be of independent interest. [Simplified weak hypergraph blow-up lemma] For all ,d,>0 there are ,μ>0 and n_0 ∈N such that the following holds for all integers n ≥ n_0. Let G be a 3-partite 3-graph with parts V_1,V_2,V_3 where every |V_i|=:n_i and n ≤ n_1 ≤ n_2 ≤ n/ and n_3 ≥ n such that G is a weakly (,d)-half-superregular triple, that is, (i) for all i ∈ [3] and V_i' ⊆ V_i with |V_i'| ≥|V_i|, we have e_G(V_1',V_2',V_3') ≥ d|V_1'||V_2'||V_3'|, (ii)d_G(v) ≥ dn^2 for all v ∈ V(G). Let H be a μ-separable bipartite 2-graph with parts of size n_1,n_2, with n_3 edges and maximum degree . Then G contains a copy of the 1-expansion of H, where the images of the new vertices lie in V_3. Weak superregularity and uniform density of 3-graphs and graph collections are closely connected. Observe that if G is (d+√(η),η)-dense, where η≫,, and V_1,V_2,V_3 is any partition with sizes as in the theorem, then the 3-partite subhypergraph G' of G induced on these parts satisfies (i). If d_G(v) ≥ n^2 for all v ∈ V, and in addition the vertex partition is uniformly random, then with probability tending to 1 as n →∞, G' is weakly (,min{d,/2})-half-superregular, say. Furthermore, if one takes any partition V(G)=V ∪ into parts which are not too small, then the graph collection G consisting of graphs G_c for c ∈ with V(G_c)=V and E(G_c)={xy: x,y ∈ V and xyc ∈ E(G)} is (d,η)-dense. Again, if d_G(v) ≥ n^2, a random such partition gives rise to a super uniformly dense graph collection. As an example, any 3-graph G satisfying the hypotheses of the theorem with n_1=n_2 and n_3=n_1+n_2 contains a copy of a loose Hamilton cycle. The full statement of a 3-graph version of our transversal blow-up lemma (Theorem <ref>) is stated in Theorem <ref>. A much more general hypergraph blow-up lemma was established by Keevash <cit.> which applies to k-graphs satisfying a strong regularity property, which is designed to be used with the strong regularity lemma; this blow-up lemma applies to embed any bounded degree k-graph H. Nevertheless, it would be interesting to determine which graphs can be embedded into a weakly superregular triple. We discuss this further in Section <ref>. There are many other notions of quasirandomness of hypergraphs, often related to the number of edges in/between subsets, that have been studied in the literature. We refer the reader to <cit.> for a detailed comparison of such notions; it turns out that equivalent notions of quasirandomness in graphs have inequivalent hypergraph analogues. The embedding of spanning structures has been studied in these various settings, for example tight Hamilton cycles in 3-graphs with a stronger quasirandomness condition than the one in this paper, in <cit.>, and graphs of sublinear bandwidth in `locally dense' 2-graphs, which is weaker than uniformly dense, in <cit.>, and in `inseparable' 2-graphs in <cit.>. While in this paper we ask which H are subhypergraphs of any quasirandom hypergraph G of positive density (where for us, `quasirandom' means `uniformly dense'), the results above mainly concern the generalisation of this question: given H, what minimum degree in a quasirandom hypergraph G is sufficient to imply the existence of H? There are various notions of degree in hypergraphs and the results mentioned in this paragraph consider several of them. Quasirandomness conditions that apply to sparse (hyper)graphs have also been well studied. Recently, Hàn, Han and Morris <cit.> extended the results of <cit.> to the sparse regime. §.§ The statement of our main result The full statement of our transversal blow-up lemma is as follows. Its proof will follow from Theorem <ref> which is a very similar statement written in the notation defined in Section <ref>. [Transversal blow-up lemma] For all ν,d,,,r>0 where r ≥ 2 is an integer, there exist ,μ,>0 and m_0 ∈N such that the following holds for all integers m ≥ m_0. Suppose that G=(G_c:c ∈) is a graph collection with the following properties. * There is a graph R with vertex set [r] and a partition = ⋃_e ∈ E(R)_e where |_e| ≥ m for all e ∈ E(R); * for all ij ∈ E(R) and c ∈_ij, G_c is bipartite with parts V_i,V_j, and V is a vertex set of size n with partition V=V_1 ∪…∪ V_r, where m ≤ |V_i| ≤ m/ for each i ∈ [r]; * for all ij ∈ E(R), * for all V_h' ⊆ V_h for h=i,j and '_ij⊆_ij with |V_h'|≥|V_h| for h=i,j and |'_ij| ≥ |_ij|, we have that ∑_c ∈'_ije_G_c(V_i',V_j') ≥ d|'_ij||V_i'||V_j'|; * for h=i,j and every v ∈ V_h we have ∑_c ∈_ijd_G_c(v) ≥ d|_ij|m and for every c ∈_ij, we have e(G_c) ≥ dm^2. Suppose that H is a graph with the following properties. * (H) ≤; * H is μ-separable; * H has vertex partition A_1 ∪…∪ A_r such that |A_i|=|V_i| for all i ∈ [r] and for every xy ∈ E(H) there is ij ∈ E(R) such that x ∈ A_i and y ∈ A_j; * e(H[A_i,A_j])=|_ij| for all ij ∈ E(R); * for each i ∈ [r], there is a set U_i ⊆ A_i with |U_i| ≤ m and for each x ∈ U_i, a set T_x ⊆ V_i with |T_x| ≥ν m. Then there is a transversal embedding of H inside G such that for every i ∈ [r], every x ∈ U_i is embedded inside T_x. §.§ Notation and organisation Notation. For reals x,a,b, we write x=a± b if we have a-b≤ x≤ a+b. For any two constants α,β∈ (0,1), we write α≪β if there exists a function α_0=α_0(β) such that the subsequent arguments hold for all 0<α≤α_0. When we write multiple constants in a hierarchy, we mean that they are chosen from right to left. For any positive integer a, let [a]:={1,2,…,a}. Given a set X and a positive integer b, write Xb to denote the set of b-element subsets of X. Let k ≥ 2 be an integer and let G=(V,E) be any k-graph, so each edge consists of k vertices. We use V(G):=V to denote its vertex set and E(G):=E to denote its edge set. Let v(G):=|V(G)| and e(G):=|E(G)| be their sizes. For any vertex v∈ V(G), let N_G(v) be the neighbourhood of v, that is, the set of (k-1)-tuples of v that are incident to v and let d_G(v):=|N_G(v)| be the degree of v. Sometimes we use d(v) for short if G is obvious from the text. We write (G) := min_v ∈ V(G)d_G(v) and (G) := max_v ∈ V(G)d_G(v) for the minimum and maximum degree of G. For each vertex v∈ V(G) and subset S ⊆V(G)k-1, let N_G(v,S):=N_G(v)∩ S and d_G(v,S):=|N_G(v,S)|. For any vertex set U⊆ V(G), let G[U] be the induced hypergraph of G on U, i.e. the hypergraph with vertex set U and all the edges of G with vertices only in U. Let G-U:=G[V(G) U]. For a subhypergraph H of G, let G H be the graph with vertex set V(G) and edge set E(G) E(H). For any (not necessarily disjoint) vertex sets U_1,…,U_k ⊆ V(G), we write G[U_1,…,U_k] for the k-graph with vertex set U_1 ∪…∪ U_k and edge set E_G(U_1,…,U_k), which is the set of edges of G with one vertex in U_i for each i ∈ [k]. Let e_G(U_1,…,U_k) be the number of k-tuples (x_1,…,x_k) ∈ U_1×…× U_k for which {x_1,…,x_k}∈ E_G(U_1,…,U_k) (note that when these sets are not disjoint, edges may be counted more than once). We say that a statement about a graph on n vertices holds with high probability if it holds with probability tending to 1 as n→∞. We use script letters (e.g. C) to denote sets of colours and bold letters (e.g. G) to denote graph collections. It will be convenient to represent a graph collection in three equivalent ways: * a collection G=(G_c: c ∈) of graphs on the same vertex set V. We call the colour set of G; * an edge-coloured graph G on vertex set V with edge set ⋃_c ∈G_c, where xy has a multiset of colours consisting of those c ∈ for which xy ∈ G_c. We say that G is the edge-coloured graph of G (but rarely use this representation); * a 3-graph G^(3) on vertex set V ∪ with edges xyc whenever xy ∈ G_c. We say that G^(3) is the 3-graph of G. Given two 2-graphs H and G, a graph homomorphism (from H to G) is a map ϕ: V(H) → V(G) such that ϕ(x)ϕ(y) ∈ E(G) whenever xy ∈ E(H). An injective graph homomorphism is an embedding (of H into G). Given a graph H and a graph collection G=(G_c: c ∈) on vertex set V, a transversal embedding (of H into G) is a pair τ:V(H) → V and :E(H) → of injective maps such that τ(x)τ(y) ∈ E(G_(xy)) whenever xy ∈ E(H). Organisation. In Section <ref>, we define regularity for graph collections, state a regularity lemma for graph collections, define the `template' graph collections to which our theory applies, and prove some of the basic tools one uses when applying the regularity method, such as a slicing lemma and a degree inheritance lemma. Section <ref> contains some embedding lemmas which are the main ingredients of the proof of our transversal blow-up lemma, Theorem <ref>, but which are also useful tools when applying the method. In Section <ref>, we prove Theorem <ref> (which readily implies Theorem <ref>) and in Section <ref>, we prove Theorems <ref> and <ref> on embeddings in uniformly dense graph collections and 3-graphs. Section <ref> contains some concluding remarks. §.§ Concentration of probability We will use the following version of Chernoff's bound: Let X be a binomially distributed random variable and 0<ε<3/2, then ℙ[|X-E(X)|≥εE(X)]≤ 2e^-ε^2/3E(X). We will frequently use the following easy corollary of Chernoff's bound for random partitions: Let α_1,…,α_k be k parameters and C is a constant where ε< α_i< 1(1≤ i ≤ k) and ∑_i=1^k α_i=1. Suppose that 1/n≪ε≪δ and ℱ={e_1,…,e_t} is a family of subsets of [n] where t≤ n^C and |e_i|=d_in ≥δ n for each 1≤ i≤ t. Then there exists a partition of [n] into k subsets V_1,…,V_k such that (i)|V_i|=(1±ε)α_i n; (ii)for every edge e_i∈ℱ and every subset V_j, we have |e_i∩ V_j|=(1±ε)d_i|V_j|. We construct k random sets V_1,…,V_k from [n] as follows: for each x∈ [n], we throw x independently into the set ∪_i V_i where x is thrown into V_i with probability α_i for each 1≤ i ≤ k. Thus we have E[|V_i|]=α_i n for each i. And similarly, E[|e_i∩ V_j|]=α_j d_i n for each i,j. Therefore, by Chernoff's bound, with probability at most 2ke^-ε^2/3α_i n+2n^Ce^-ε^2/3α_i d_i n=o(1), we do not have |V_j|=(1±ε)α_jn and |e_i∩ V_j|=(1±ε)α_jd_in for every i,j. And thus the lemma follows. A hypergeometric distribution X with parameters (N,K,m) is defined as follows: let V be an N-set and U be a fixed K-subset of V, take m sets A randomly and uniformly from V and let X=|A∩ U|. We have the following tail bound of hypergeometric distribution <cit.>. Let X be a hypergeometric distribution with parameters (N,K,m) and 0<μ <ε where ε=K/N, then ℙ(X≤ (ε-μ)m)≤ e^-2μ^2m. § THE REGULARITY-BLOW-UP METHOD FOR TRANSVERSALS §.§ Regularity Our notion of regularity for a graph collection G is essentially weak regularity for the 3-graph G^(3) of G. We now define several related notions of weak regularity for k-graphs, which we will use for k=2,3. We usually omit weak and weakly since we will not use any stronger type of regularity. Let G be a k-partite k-graph with classes V_1,…,V_k, which we also denote as (V_1,…,V_k)_G. We define the density of G to be d_G(V_1,…,V_k) := e(G)/|V_1|…|V_k|. Given >0, we say that (V_1,…,V_k)_G is * (weakly) -regular if for every subhypergraph (V_1',…,V_k')_G with V_i' ⊆ V_i and |V_i'| ≥|V_i| for all i ∈ [k] we have |d_G(V_1',…,V_k') - d_G(V_1,…,V_k)|< ; * (weakly) (,d)-regular if additionally d_G(V_1,…,V_k)≥ d; * (weakly) (,d)-superregular if it is (weakly) (,d)-regular and additionally d_G(x)≥ d(|V_1|…|V_k|)/|V_i| for all i ∈ [k] and x ∈ V_i; * (weakly) (,d)-half-superregular if for every subhypergraph (V_1',…,V_k')_G with V_i' ⊆ V_i and |V_i'| ≥|V_i| for all i ∈ [k] we have d_G(V_1',…,V_k')≥ d and d_G(x)≥ d(|V_1|…|V_k|)/|V_i| for all i∈[k] and x ∈ V_i. Our main results will use `half-superregular' given its close connection to uniform density. The 2-graph blow-up lemma <cit.> uses a similar notion, but nowadays `superregular' is generally used instead. This makes little material difference: by definition, an (,d)-superregular hypergraph is (,d-)-half-superregular, and as shown by Rödl and Ruciński in the course of their alternative proof of the blow-up lemma <cit.>, a half-superregular 2-graph contains a spanning superregular subgraph, with weaker parameters. This proof generalises easily to k-graphs, and therefore we postpone it to the appendix. Let 0<1/n≪≪' ≪ d,,1/k ≤ 1 where k ≥ 2 is an integer. Suppose that G is a k-partite k-graph with parts V_1,…,V_k where n≤ |V_i|≤ n/ for all i ∈ [k]. If G is (,d)-half-superregular, then G contains a spanning subhypergraph G' that is (',d^2/2)-superregular. Let G be a k-partite k-graph defined on V_1,…,V_k where n≤ |V_i|≤ n/. We say G is (d,η)-dense restricted to parts if for any subsets X_i⊆ V_i where |X_i|≥ |V_i| for all i∈ [k], we have e_G(X_1,…,X_k)≥ d|X_1|…|X_k|-η n^k. Let 0<1/n ≪η≪' ≪ d'≪ d, , ,1/k ≤ 1 where k ≥ 2 is an integer. Suppose that G is a k-partite k-graph defined on V_1,…,V_k where n≤ |V_i|≤ n/. If G is (d,η)-dense restricted to parts, and each vertex in V_i has degree at least (|V_1|…|V_k|)/|V_i| for any i∈[k], then * G is (',d_0)-half-superregular where d_0 := min{d/2,}, * G contains a spanning subhypergraph G' that is (',d')-superregular. Choose a new parameter and increase n if necessary so that 1/n ≪≪' and the conclusion of Lemma <ref> holds with n,,',d_0 := min{d/2,}, playing the roles of n,,',d,. By increasing n_0 and decreasing η if necessary, we may assume that 1/n ≪η≪. Let X_i⊆ V_i be any subsets such that |X_i|≥ |V_i| for i∈ [k]. By definition of (d,η)-dense, we have e_G(X_1,…,X_k)≥ d|X_1|…|X_k|-η n^k ≥ (d-)|X_1|…|X_k| since η≪. This implies G is (,d/2)-regular. Combined with the minimum degree condition, we see that G is also (,d_0)-half-superregular. Lemma <ref> and our choice of parameters implies that G contains a spanning (',d')-superregular subgraph G'. Regularity for a bipartite graph collection G is defined in terms of the 3-graph G^(3) of G. Suppose that G is a graph collection with colour set , where each G_c is bipartite with parts V_1,V_2. Let G^(3) be the 3-graph of G. We say that * G is (,d)-regular if G^(3) is (,d)-regular. That is, for all V_i' ⊆ V_i with |V_i'| ≥|V_i| for i=1,2 and ' ⊆ with |'| ≥||, we have |∑_c ∈'e_G_c(V_1',V_2')/|'||V_1'||V_2'|-∑_c ∈e_G_c(V_1,V_2)/|||V_1||V_2|| < . * G is (,d)-semi-superregular if it is (,d)-regular and d_G^(3)(v) = ∑_c ∈d_G_c(v) ≥ d|V_3-i||| for all i ∈ [2] and v ∈ V_i. * G is (,d)-superregular if G^(3) is (,d)-superregular. That is, it is (,d)-semi-superregular and d_G^(3)(c) = e(G_c) ≥ d|V_1||V_2| for all c ∈. * G is (,d)-half-superregular if G^(3) is (,d)-half-superregular. That is, for all V_i' ⊆ V_i with |V_i'| ≥|V_i| for i=1,2 and ' ⊆ with |'| ≥||, we have ∑_c ∈'e_G_c(V_1',V_2') ≥ d|'||V_1'||V_2'| and ∑_c ∈d_G_c(v) ≥ d|V_3-i||| for all i=1,2 and v ∈ V_i, and e(G_c) ≥ d|V_1||V_2| for all c ∈. Note that, if every G_c with c ∈ is the same, then G is (,d)-regular if and only if G_c is (,d)-regular; and G is (,d)-superregular if and only if G_c is (,d)-superregular. The superregularity of G does not imply a minimum degree condition for any graph G_c in the collection, and indeed they could all have isolated vertices. The following simple lemma shows that, in an (,d)-regular graph collection, most vertices –typical vertices – have large total degree (the sum of degrees over all colours) and typical colours have many edges. Let 0<≪ d ≤ 1, and let G be an (,d)-regular graph collection with colour set , where each G_c is bipartite with parts V_1,V_2. Then the following hold: (i) for every i ∈ [2] and all but at most |V_i| vertices v ∈ V_i we have ∑_c ∈d_G_c(v) ≥ (d-)|V_3-i|||; (ii) for all but at most || colours c ∈ we have e(G_c) ≥ (d-)|V_1||V_2|. For (i), let V_1' be the set of vertices in V_1 without this property and suppose for a contradiction that |V_1'|>|V_1|. Then (,d)-regularity implies that d_G^(3)(V_1',V_2,)>d-, but the definition of V_1' implies that e_G^(3)(V_1',V_2,)≤|V_1'|(d-)|V_2||| and hence d_G^(3)(V_1',V_2,) ≤ d-, a contradiction. By symmetry, the rest of (i) and (ii) are identical. The following lemma is a standard tool, written in our graph collection notation. Let 0<1/n ≪≪' ≪≪ d ≤ 1, and let G be a graph collection with colour set , where each G_c is bipartite with parts V_1,V_2 each of size at least n, and let V_i' ⊆ V_i for i ∈ [2] and ' ⊆. Let G'=(G_c[V_1',V_2']:c ∈'). (i) Suppose that G is (,d)-regular. Suppose |V_i'| ≥|V_i| for i ∈ [2] and |'| ≥||. Then G' is (/,d/2)-regular. (ii) Suppose that G is (,d)-superregular. Suppose |V_i'| ≥ (1-)|V_i| for i ∈ [2] and |'|>(1-)||. Then G' is (2,d/2)-superregular. (iii) Suppose that G is (,d)-superregular. Given n_1≥ |V_1|,n_2 ≥ |V_2| and h ≥ ||, if V_i'⊆ V_i is a uniform random subset of size n_i for each i=1,2 and '⊆ is a uniform random subset of size h, then with high probability, G' is (/,d^2/16)-superregular. For (i), firstly we have d_G^(3)(V_1^',V_2^',')≥ d-≥ d/2. Secondly, given any subset V_i^''⊆ V_i' with |V_i^''|≥|V_i^'|/≥ |V_i| where i=1,2 and any subset ^''⊆^' with |^''|≥ |^'|/≥ ||, by regularity we have |d_G^(3)(V_1^'',V_2^'',^'')-d_G^(3)(V_1^',V_2^',^')| ≤ |d_G^(3)(V_1^'',V_2^'',^'')-d_G^(3)(V_1,V_2,)| +|d_G^(3)(V_1,V_2,)-d_G^(3)(V_1^',V_2^',^')|≤ 2ε≤/. For (ii), note that G' is (/(1-),d/2)-regular and thus (2,d/2)-regular by (i). Let G_c':=G_c[V_1',V_2'] for each c ∈. For each i=1,2 and v∈ V_i', we also have ∑_c∈'d_G_c'(v)≥ d|V_3-i|||-2|V_3-i|||≥ d|V_3-i'||'|/2. Combining this with e(G_c')≥ d|V_1||V_2|-2|V_1||V_2|≥ d|V_1||V_2|/2 for each c∈', we get that G' is (2,d/2)-superregular. For (iii), similarly, we always have that G' is (/,d/2)-regular by (i). Let G_c':=G_c[V_1',V_2'] for all c ∈. We only need to show that with high probability, for each i=1,2 and v∈ V_i', we have ∑_c∈'d_G_c'(v)≥ d^2|V_3-i'||'|/16 and for each c∈', we have e(G_c')≥ d^2|V_1||V_2|/16. Suppose that ' has been chosen. Since G is (,d)-superregular, for each c∈', we have e(G_c)≥ d|V_1||V_2|. Let V_1^c:={v∈ V_1 : d_G_c(v)≥ d|V_2|/2}. Then we get d|V_1||V_2|≤ |V_1^c||V_2|+(|V_1|-|V_1^c|)d|V_2|/2 and thus |V_1^c|≥ d|V_1|/(2-d)≥ d|V_1|/2. A Chernoff bound implies that, with probability 1-e^-(n), we have |V_1'∩ V_1^c|≥ d n_1/4 and |V_2'∩ N_G_c(v)|≥ d n_2/4 for each v∈ V_1^c. Therefore, by a union bound, with high probability, we have e(G_c')≥ d^2n_1n_2/16 for each c∈'. Similarly, with high probability, for each i=1,2 and v∈ V_i', we have ∑_c∈'d_G_c'(v)≥ d^2n_3-ih/16. It follows that G' is (/,d^2/16)-superregular with high probability. §.§ The regularity lemma for graph collections We use the following version of the regularity lemma for graph collections, which is obtained by applying the degree version of the weak regularity lemma (Lemma <ref>) to the 3-graph G^(3) of G and cleaning up the clusters so that vertex clusters and colour clusters are separate. We postpone the derivation to the appendix. For all integers L_0 ≥ 1 and every ,>0, there is an n_0=n_0(,,L_0) such that for every d ∈ [0,1) and every graph collection G=(G_c: c ∈) on vertex set V of size n ≥ n_0 with n ≤ || ≤ n/, there exists a partition of V into V_0,V_1,…,V_L, of into _0,_1,…,_M and a spanning subgraph G'_c of G_c for each c ∈ such that the following properties hold: * L_0 ≤ L,M ≤ n_0 and |V_0|+|_0| ≤ n; * |V_1|=…=|V_L|=|_1|=… = |_M| =: m; * ∑_c ∈d_G'_c(v) > ∑_c ∈d_G_c(v)-(3d/^2+)n^2 for all v ∈ V and e(G'_c) > e(G_c)-(3d/^2+)n^2 for all c ∈; * if, for c ∈, the graph G'_c has an edge with both vertices in a single cluster V_i for some i ∈[L], then c ∈_0; * for all triples ({h,i},j) ∈[L]2× [M], we have that either G'_c[V_h,V_i]=∅ for all c ∈_j, or G'_hi,j := (G'_c[V_h,V_i]: c ∈_j) is (,d)-regular. The sets V_i are called vertex clusters and the sets _j are called colour clusters, while V_0 and _0 are the exceptional vertex and colours sets respectively. Given a graph collection G=(G_c: c ∈) on V and parameters >0, d ∈ [0,1) and L_0 ≥ 1, the reduced graph collectionR=R(,d,L_0), reduced 3-graphR^(3)=R(,d,L_0) and the reduced edge-coloured graphR=R(,d,L_0) of G are defined as follows. Apply Lemma <ref> to G with parameters ,d,L_0 to obtain G' and a partition V_0,…,V_L of V and _0,…,_M of where V_0, _0 are the exceptional sets and V_1,…,V_L are the vertex clusters and _1,…,_M are the colour clusters. Then R=(R_1,…,R_M) is a graph collection of M graphs each on the same vertex set [L], where, for ({h,i},j) ∈[L]2× [M], we have hi ∈ R_j whenever G'_hi,j is (,d)-regular. Also, R^(3) is the 3-graph of R and R is the reduced edge-coloured graph. The next lemma (related to Lemma 5.5 in <cit.>) states that clusters inherit a minimum degree bound in the reduced graph from G. Suppose p>0, L_0 ≥ 1 and 0 < 1/n ≪≤ d ≪,,p ≤ 1. Let G=(G_c: c ∈) be a graph collection on a vertex set V of size n with (G_c) ≥ (p+)n for all c ∈ and n ≤ || ≤ n/. Let R=R(,d,L_0) be the reduced graph collection of G on L vertices with M graphs. Then * for every i ∈ [L] there are at least (1-d^1/4)M colours j ∈ [M] for which d_R_j(i) ≥ (p+/2)L; * for every j ∈ [M] there are at least (1-d^1/4)L vertices i ∈ [L] for which d_R_j(i) ≥ (p+/2)L. To prove (i), note that for all v ∈ V V_0 we have ∑_c ∈d_G'_c-V_0(v) ≥∑_c ∈d_G_c(v)-(3d/^2+)n^2-||s ≥∑_c ∈d_G_c(v)-4dn^2/^2. Let D_v be the collection of colours c in _0 for which d_G'_c-V_0(v) ≥ d_G_c(v)-√(d)n. Then ∑_c ∈d_G'_c-V_0(v) ≤∑_c ∈d_G_c(v)-|D_v|√(d)n and therefore |D_v| ≤ 4dn^2/(^2√(d)n) ≤ d^1/3n/2 by d≪, so |D_v| ≥ || -d^1/3n/2- n ≥ ||-d^1/3n. We have mM ≤ || ≤ mM+ n and mL ≤ n ≤ mL+ n. Thus the number of clusters _j containing at least one colour of D_v is at least |D_v|/m ≥ M-d^1/3n/m ≥ M-d^1/3L/(1-) ≥ M-2d^1/3M/≥ (1-d^1/4)M. Now let i ∈ [L] and v ∈ V_i. For each cluster _j as above, choose an arbitrary colour c_j ∈_j ∩D_v. Then the number of clusters V_h containing some u ∈ N_G'_c_j(v) is at least d_G_c_j(v)-√(d)n/m≥(p+-√(d))n/m≥ (p+/2)L. But then Lemma <ref>(v) implies that i is adjacent to each such V_h in R_j. So for every i ∈ [L], d_R_j(i) ≥ (p+/2)L for at least (1-d^1/4)M colours j. The proof of (ii) is similar and we omit it. Let 0 ≪ 1/m ≪≪≪ d,D ≤ 1 and let F = (V,C,G) be an R-template with parameters (n,,d,,D). Let t ≤ m and let H be a subgraph of R(t) with maximum degree and let C'_e⊆ C_e with |C'_e| ≥ t+ m for every e ∈ E(R). Then there is a rainbow embedding of H in G where every edge corresponding to e ∈ E(R) receives a colour in C'_e. … §.§ Templates We define the notion of a `template', which is essentially a reduced graph in the transversal setting. We will use these as templates for embedding, in the same way that reduced graphs are used for embedding into a single graph. Let 0 < 1/m ≤≤ d,≤ 1 be parameters and let r ∈N. Suppose that * R is an r-vertex graph, with vertex set [r] unless otherwise specified, * V={V_1,…,V_r} is a set of r disjoint vertex sets with m ≤ |V_j| ≤ m/ for all j ∈ [r], whose union we denote by V, * =⋃_e ∈ E(R)_e is a colour set where |_e| ≥ m for all e ∈ E(R), * G is a graph collection with colour set where for each c ∈, the graph G_c is the union of bipartite graphs G_c^e where G_c^e has parts V_i,V_j, over all e=ij for which c ∈_e. For each e ∈ E(R), let G^e=(G_c^e: c ∈_e). We say that F=(V,,G) is an R-template with parameters (m,,d,) if for every e ∈ E(R), G^e is (,d)-regular. If we replace regular with semi-superregular, it is a semi-super R-template; if we replace regular with superregular, it is a super R-template; if we replace regular with half-superregular, it is a half-super R-template. If =⋃_e ∈ E(R)_e is a partition, we say that the template is rainbow. A transversal embedding of a graph H inside F is a copy of H with vertices in V such that for every edge e there is a distinct c ∈ such that e ∈ G_c. That is, there exist injections τ: V(H) → V and : E(H) → where τ(x)τ(y) ∈ G_(xy) for all xy ∈ E(H). Note that the partition =⋃_e ∈ E(R)_e is suppressed in the notation. Given a template F, we explicitly use the notation in the definition unless otherwise specified. Observe that for an R-template (V,,G) with parameters (m,,d,) and v(R)=r, rm ≤ |V| ≤ rm/. Given parameters ,',d','>0 with ≤ 1, ' ≥, d' ≤ d and ' ≤, any (m,,d,) template is also a ( m,',d',') template. We will often take subtemplates of templates, meaning that the new vertex clusters, colour clusters and graphs G_c are subsets/subgraphs of the originals. Some convenient notation for this is as follows: if F = (V,,G) is a template, ' ⊆ and V' = {V_1',…,V_r'} where V_j' ⊆ V_j for all j ∈ [r], we say that F' = (V',',G') is the subtemplate of F induced by V',' when each _e'=_e ∩', and (G')^e_c:=G_c[V_i',V_j'] is defined for each c ∈'_e. We also say that F' is obtained by deleting ' and V V'. The following straightforward lemma shows that removing a small fraction of colours and vertices from a template produces a subtemplate with slightly weaker parameters, which remains super if the original template was super. Let 0 <1/m ≪≪' ≪≪ d,,1/r,1/k ≤ 1 where r ≥ 2 is an integer. Let R be an r-vertex graph and let F=(V,,G) be an R-template with parameters (m,,d,). Let ' ⊆, V_i' ⊆ V_i for all i ∈ [r] and let F' = (V',',G') be the subtemplate of F induced by V','. (i) If every |'_e| ≥ |_e|/k and |V_i| ≤ |V_i'| ≤ k |V_i|, then F' is a template with parameters ( m,/,d/2,/k). (ii) If every |_e '_e| ≤ m and |V_i V_i'| ≤ m, then F' is a template with parameters (m/2,2,d/2,/2). Moreover, if F is super, then F' is super. (iii) Given |V_i| ≤ n_i ≤ k|V_i| for all i ∈ [r] and h_e ≥|_e|/k for all e ∈ E(R), if V_i' is a uniform random subset of V_i of size n_i and '_e is a uniform random subset of _e of size h_e and F is super, then with high probability, F' is a super template with parameters ( m,/,d^2/16,/k). (iv) Suppose F is half-super. For all c ∈, there is G_c' ⊆ G_c such that, defining G':=(G_c':c ∈), the template (V,,G') is super with parameters (m,',d^2/2,). For (i), we have m ≤ |V_i'| ≤ m/(/k) and |_e'| ≥ m/k. Also, by Lemma <ref>(i), for each e∈ E(R), (G')^e is (/,d/2)-regular. Thus F' is a template with parameters ( m,/,d/2,/k). For (ii), showing that F' is a template with the given parameters is similar to (i). Note that if F is super, then (G')^e is (2,d/2)-superregular for each e∈ E(R) by Lemma <ref>(ii) and thus F' is a super R-template with parameters (m/2,2,d/2,/2). For (iii), if F is super, then G^e is (,d)-superregular for each e∈ E(R). Thus by Lemma <ref>(iii), with high probability, (G')^e is (/,d^2/16)-superregular for each e∈ E(R). Therefore, by (i), F' is a super template with parameters ( m,/,d^2/16,/k) with high probability. Part (iv) follows immediately from Lemma <ref>. Edges which lie in many graphs G_c are particularly useful for embedding and thus we define the (simple, uncoloured) thick graph consisting of all such edges. Given >0 and an R-template F=(V,,G), let T^_F be the simple 2-graph with vertex set V such that xy ∈ E(T^_F) whenever xy ∈ G_c for at least |_ij| colours c ∈_ij where x ∈ V_i, y ∈ V_j and ij ∈ E(R). We call T^_F the -thick graph of F. The following proposition states that the thick graph of a bipartite semi-super template has a spanning half-superregular subgraph. This is very useful for embedding as one can use the usual blow-up lemma to embed a bounded degree graph into the thick graph, and then greedily assign colours. Let 0<1/m ≪≪≪ d,,1/r ≤ 1 where r ≥ 2 is an integer. Let R be a graph on r vertices and let F=(V,,G) be a semi-super R-template with parameters (m,,d,). Then for all ij ∈ E(R), T^_F[V_i,V_j] is (,d/2)-half-superregular. Let ij ∈ E(R) and let U_i⊆ V_i and U_j⊆ V_j be any two subsets such that |U_i|≥ |V_i| and |U_j|≥ |V_j|. Let ℓ_ij := e(T^_F[U_i,U_j]). By regularity we have |d_G^(3)(U_i,U_j,_ij)-d_G^(3)(V_i,V_j,_ij)|≤ and thus we get (d-)|U_i||U_j||_ij|≤ e_G^(3)(U_i,U_j,_ij) ≤ℓ_ij|_ij|+(|U_i||U_j|-ℓ_ij)|_ij|. Therefore, we have ℓ_ij/(|U_i||U_j|)≥ (d--)/(1-)≥ d/2. Similarly, for any x∈ V_i, let ℓ_x := d_T^_F(x,V_j). By semi-superregularity, we have ∑_c∈_ijd_G_c(x)≥ d|V_j||_ij|. Then we get d|V_j||_ij|≤ℓ_x|_ij|+(|V_j|-ℓ_x)|_ij| and thus ℓ_x/|V_j|≥ (d-)/(1-)≥ d/2. The half-superregularity then follows. § EMBEDDING LEMMAS In this section we state a series of embedding lemmas which we will combine to prove our transversal blow-up lemma, stated at the end of the section. These lemmas are also useful in their own right in applications. Each lemma is of the following type: we are given an R-template and a bounded degree graph H whose partition matches the template. For a small number of vertices y in H we are given large target setsT_y where y must be embedded. The output of the lemma is a transversal embedding of H, which consists of an embedding τ of vertices and an embedding of colours, so τ(x)τ(y) ∈ G_(xy) for all xy ∈ E(H), and such that τ(y) ∈ T_y whenever there is a target set. In Lemma <ref>, H is a small graph and we in fact embed only part of H, while finding large candidate sets for the remainder (and a chosen colour for the future edge τ(x)τ(y) where x was embedded and y will later be embedded inside its candidate set). In Lemma <ref>, H is again small, but now there are sets of prescribed colours which must be used in the embedding. This is possible since we assume that there are many more edges of H than the number of prescribed colours, and graph of each prescribed colour is dense. Theorem <ref> is the usual blow-up lemma, which one can think of as applying to a template where all coloured graphs are identical. Now the template is super, but H is allowed to be spanning. In Lemma <ref>, we assume the template is semi-super, and H may again be spanning, but consists of many small (but still linear) connected components, and there are many more colours than needed for a rainbow embedding. Theorem <ref> is our transversal blow-up lemma, in which the template is super, H is a spanning graph with a separable graph homomorphism into the template, and the number of colours is exactly as required. The first such lemma applies to embed a small graph H into a template. In fact we only embed some of H while finding large candidate sets for the rest of H. This means that we set aside some colours so that, for each unembedded vertex y and each of its embedded neighbours x, there is a distinct colour (xy), and a large vertex set so that if the image of y is chosen in this set, then we can extend our embedding to a transversal embedding that uses the specified colours. That is, for every z in this set we have τ(x)z ∈ G_(xy). The proof embeds vertices one by one, at each step fixing the colours that will be used to future neighbours, and is similar to the `partial embedding lemma', an uncoloured version, in <cit.>. Let 0 < 1/m ≪,≪ν' ≪ν,d,,1/,1/r ≤ 1 where r ≥ 2 is an integer. * Let R be a graph on vertex set [r] such that F = (V,,G) is an R-template with parameters (m,,d,). * Let H be a graph with (H) ≤ for which there is a graph homomorphism ϕ:V(H) → V(R) such that |ϕ^-1(j)| ≤ m for all j ∈ [r], and suppose there is a partition V(H) = X ∪ Y where E(H[Y])=∅. * For every w ∈ V(H), suppose there is a set T_w ⊆ V_ϕ(w) with |T_w| ≥ν m. Then there are injective maps τ: X → V and :E(H) → such that * τ(x) ∈ T_x for all x ∈ X; * (xy) ∈_ϕ(x)ϕ(y) for all xy ∈ E(H), and if xx' ∈ E(H[X]), then τ(x)τ(x') ∈ E(G_(xx')); * for all y ∈ Y there exists C_y ⊆ V_ϕ(y)τ(X) such that C_y ⊆⋂_x ∈ N_H(y) ∩ XN_G_(xy)(τ(x)) ∩ T_y and |C_y| ≥ν' m. Fix an ordering of V(H) where all vertices of X come before any vertex of Y, and for each x ∈ X, let N^<(x) be the set of neighbours of x in H which appear before x in the ordering, and N^>(x) the set of those which appear after. At step (x) (with x ∈ X), we will choose τ(x) and (xy) for all y ∈ N^>(x). Initially, define candidate sets C_w:=T_w for all w ∈ V(H) and C_xy:=_ϕ(x)ϕ(y) for all xy ∈ E(H). The following comprises step (x), which we perform for each x ∈ X in order. * For all y ∈ N^>(x), delete all vertices v ∈ C_x with ∑_c ∈ C_xy|N_G_c(v) ∩ C_y| < (d-)|C_xy||C_y|. * Choose τ(x) ∈ C_x. * For all u ∈ V(H), delete τ(x) from C_u. * For each y ∈ N^>(x) in order: * delete all colours c ∈ C_xy with |N_G_c(τ(x)) ∩ C_y| < d|C_y|/2, * choose (xy) ∈ C_xy, * for all uv ∈ E(H), delete (xy) from C_uv, * delete all vertices v ∈ C_y with v ∉ N_G_(xy)(τ(x)). We claim that, at the end of the process, |C_x|,|C_xy| ≥ν'm for all x ∈ V(H) and xy ∈ E(H). First, let us see why the claim implies the lemma. Since candidate sets are never empty and every edge in H is incident to X, τ: X → V and :E(H) → are defined, and by step (<ref>) and (<ref>), they are both injections. For (i), we choose τ(x) from C_x which is always a (non-empty) subset of T_x. For (ii), we choose (xy) from C_xy which is always a (non-empty) subset of _ϕ(x)ϕ(y). If xx' ∈ E(H) where x' appears after x in the ordering, then we choose τ(x') from C_x' and by step (x,x',<ref>) we have τ(x') ∈ N_G_(xx')(τ(x)). For (iii), given the claim and the fact that Y comes after X in the ordering, it suffices to show that, for all y ∈ Y and x ∈ N^<(y), we have C_y ⊆ N_G_(xy)(τ(x)). Noting that x ∈ X, this is a consequence of step (x,y,<ref>). It remains to prove the claim. We suppose that it is true up until step (x). The vertex candidate set C_x can only shrink at steps (x,<ref>), (<ref>) and at step (u,x,<ref>) for u ∈ N^<(x). Proposition <ref> and the fact that, currently, |C_x| ≥ν'm > |V_ϕ(x)|, imply that at step (x,<ref>), at most |C_x| vertices are deleted. At step (<ref>), at most |X| vertices are deleted. At step (u,x,<ref>), the colour (ux) was chosen from C_ux which has |N_G_(ux)(τ(u)) ∩ C_x| ≥ d|C_x|/2 due to step (u,x,<ref>). Thus C_x shrinks by a factor of at most d/2 at this step. Therefore |C_x| ≥ ((d/2)^-)|T_x|-|X| ≥ ((d/2)^-)ν - r)m ≥ν(d/3)^ m > ν'm. The colour candidate set C_xy, with x before y in the ordering, can only shrink at steps (x,y,<ref>) and (<ref>). At most e(H) colours are lost at step (<ref>). At step (x,y,<ref>), we chose τ(x) from C_x which, immediately after (x,<ref>), satisfies ∑_c ∈ C_xy|N_G_c(τ(x)) ∩ C_y| ≥ (d-)|C_xy||C_y|. Writing A ⊆ C_xy for the subset of colours which are not deleted at step (x,y,<ref>), we have (d-)|C_xy||C_y|≤ |A||C_y|+(d/2)(|C_xy|-|A|)|C_y| and thus |A| ≥ (d/2-)|C_xy|. Therefore |C_xy| ≥ (d/2-)|_ϕ(x)ϕ(y)|-e(H) ≥ (d/2-) m- r m ≥ d m/3 ≥ν'm. This completes the proof of the claim, and hence of the lemma. The next ingredient is a version of the above lemma where we are embedding a small graph H such that a small fraction of its vertices have target sets, but additionally we now have a very small set of colours which must be used in the embedding (together with any other colours). These prescribed colours will be used on an induced matching M in H. This matching, together with its neighbours, will be embedded greedily and then Lemma <ref> will apply to extend the embedding to the whole of H. Let 0 < 1/m ≪≪,≪_1,_2 ≪ν,d,,1/,1/r ≤ 1 where r ≥ 2 is an integer. * Let R be a graph on vertex set [r] and let (V,,G) be an R-template with parameters (m,,d,). * Let H be a graph with (H) ≤ for which there is a graph homomorphism ϕ:V(H) → V(R) such that |ϕ^-1(j)| ≤_1 m for all j ∈ [r] and e(H[ϕ^-1(i),ϕ^-1(j)]) ≥_2 m for all ij ∈ E(R). * Suppose there is a set W ⊆ V(H) with |W| ≤ m, such that for all w ∈ W there is a set T_w ⊆ V_ϕ(w) with |T_w| ≥ν m. * For each e ∈ E(R), let D_e ⊆_e be a set of at most m colours and let D=⋃_e ∈ E(R)D_e, and suppose that e(G_c[V_i,V_j]) ≥ d|V_i||V_j| for all c ∈D_ij and ij ∈ E(R), and |_e D| ≥ d|_e| for all e ∈ E(R). Then there are injective maps τ: V(H) → V and :E(H) → such that * τ(x) ∈ V_ϕ(x) for all x ∈ V(H); * τ(w) ∈ T_w for all w ∈ W; * for all xx' ∈ E(H) we have τ(x)τ(x') ∈ E(G_(xx')); * D⊆(E(H)). Let e_1,…,e_s be an arbitrary ordering of E(R). Let D_e_i' := D_e_i (D_e_1∪…∪D_e_i-1) for all i ∈ [s], so that the D_e' over e ∈ E(R) are pairwise disjoint, and their union equals D. We claim that for each e_i=jj' ∈ E(R), we can choose a matching M_jj'⊆ H[ϕ^-1(j),ϕ^-1(j')] such that, writing M := ⋃_e ∈ E(R)M_e, we have * V(M) ∩ W=∅ and M is an induced matching in H such that for every y ∈ V(H) V(M), there is xx' ∈ E(M) such that N_H(y,V(M)) ⊆{x,x'}; * |M_e|=|D_e'| for each e∈ E(R). We can do this greedily, as follows. Suppose we have found M_e_1,…,M_e_i-1 with the required properties, for some i ≥ 1. We show how to find M_e_i, where we write e_i:=jj'. Vizing's theorem states that every graph J with maximum degree can be properly edge-coloured with at most +1 colours. Thus J contains a matching of size ⌈ e(J)/(+1)⌉. Thus H[V_j,V_j'] contains a matching of size ⌈_2 m/(+1)⌉. Now, from this matching, delete W and any vertex at distance at most two to any vertex in a previously found matching. There are at most r-1 neighbours h of j in R for which we have defined M_jh, and for each one we delete at most (1++^2)v(M_jh) ≤ 4^2 m such vertices. The total number of deleted vertices is at most |W|+8r^2 m, which leaves a matching M' of size at least _2 m/(4). Now we will greedily choose a suitable submatching. Suppose we have chosen a subset E(M^q) := {x_1y_1,…,x_q-1y_q-1}⊆ E(M') where 1 ≤ q ≤ |D_jj''| and the distance in H M^q between any pair of vertices in V(M^q) is at least three. To choose x_qy_q, we delete all vertices in V(M') at distance in H M^q at most two. The number of deleted vertices is at most (1++^2)2q ≤ 4^2 m < _2 m/(4). Thus we can always choose x_qy_q from the remaining vertices of M'. We claim that M^q+1 := M^q∪{x_qy_q} is a suitable extension of M^q. Indeed, x_q,y_q have no neighbours in M^q so M^q+1 is induced, and if x_q shared a neighbour z with some other y ∈ M^q, then it would be at distance two from y, a contradiction. Thus we can obtain M_e_i and hence M_e_1,…,M_e_s satisfying both required properties. Let X := V(M) and Y := N_H(X) X. We will embed M into the template first, defining injections τ : X → V and : E(M) ∪ E(H[X,Y]) →D so that (M_e)=D_e' for all e ∈ E(R). We want to choose good images for the vertices x ∈ X so that there are many choices for each neighbour y ∈ N_H(x) X. Now we will embed M greedily, using the fact that e(G_c) ≥ d|V_j||V_j'| for all c ∈D_jj' and ≪ d. For each p≤ e(M), let i ≤ s be such that e(M_e_1)+… e(M_e_i-1) ≤ p < e(M_e_1)+… + e(M_e_i). Let M_p := M_e_1∪…∪ M_e_i-1∪ M'_e_i where e(M_p)=p and we have fixed an ordering of E(M_e_i) and M'_e_i is the (possibly empty) matching which consists of an initial segment. Let X_p := V(M_p) and Y_p := N_H(X_p) X_p. Suppose we have found injections τ : X_p → V and : E(M_p) ∪ E(H[X_p,Y_p]) →C such that * τ(x) ∈ V_ϕ(x) for all x ∈ X_p; * τ(x)τ(y) ∈ G_(xy) for all xy ∈ E(M_p); * (E(M_e_h)) = D_e_h' for all h < i and (E(M_e_i')) ⊆D_e_i'; * for all y ∈ Y_p there exists C_y ⊆ V_ϕ(y) such that C_y ⊆⋂_x ∈ N_H(y) ∩ X_pN_G_(xy)(τ(x)), and |C_y| ≥ d|V_ϕ(y)|/6. We want to extend τ and by embedding an edge xx' in M_e_i M_e_i' with a colour c^* ∈D_e_i' and choosing colours and candidate sets for its unembedded neighbours. So fix any such c^* and let (xx') := c^*. Let N(x),N(x') be the set of neighbours of x,x' respectively in H M. The first property of M implies that N(x),N(y),V(M) are pairwise disjoint for all y ∈ V(M) {x,x'}, and similarly for x'. However, we could have N(x) ∩ N(x') ≠∅ (for example if H is a union of vertex-disjoint triangles). First, we will define τ(x) and colours and candidate sets for every vertex in N(x). Write e_i=jj' and U_h := V_h X_p for h=j,j'. We have e(G_c^*[U_j,U_j']) ≥ d|V_j||V_j'| - 2 m(|V_j|+|V_j'|) ≥ d|V_j||V_j'|/2. Let Z(x) := {v ∈ U_j: d_G_c^*(v,U_j') ≥ d|U_j'|/4}. A simple counting argument implies that |Z(x)| ≥ d|U_j|/4. We will choose τ(x) ∈ Z(x) and candidate sets C_y for each y ∈ N(x), as follows. Order N(x) as y_1,…,y_ℓ, where ℓ≤, and let a_1,…,a_ℓ be such that ϕ(y_i)=a_i, so a_i ∈ [r]. Suppose we have found, for i ≥ 0, * c_i ∈_ja_i such that c^*,c_1,…,c_i are all distinct and disjoint from (E(M_p)) and * a set Z_i(x) ⊆ Z(x) with |Z_i(x)| ≥ (d/6)^i|Z(x)| and d_G_c_h(z,V_a_h) ≥ d|V_a_h|/6 for all h ∈ [i] and z ∈ Z_i(x). Now, G^ja_i+1[Z_i(x),V_a_i+1] is (√(),d/2)-regular by Lemma <ref>(i) (the slicing lemma), so Lemma <ref>(ii) implies there are at least (1-√())|_ja_i+1| colours c ∈_ja_i+1 with e(G_c[Z_i(x),V_a_i+1]) ≥ (d/2-√())|Z_i(x)||V_a_i+1| ≥ d|Z_i(x)||V_a_i+1|/3. Let c_i+1 be any such colour which does not lie in the set {c^*,c_1,…,c_i}∪(E(M_p)), which exists since the number of available colours is at least |_ja_i+1D| - 1 - ≥ d|_ja_i+1|/2 ≥ d m/2. By a simple counting argument, the number of vertices z ∈ Z_i(x) with d_G_c_i+1(z,V_a_i+1) ≥ d|V_a_i+1|/6 is at least d|Z_i(x)|/6, and we let the set of these vertices be Z_i+1(x). Thus we can complete the iteration and find Z_ℓ(x). Now let τ(x) be an arbitrary vertex of Z_ℓ(x), and for each i ∈ [ℓ], define the candidate set C_y_i := N_G_c_i(τ(x), V_a_i) and (xy_i):=c_i. Next, we will define τ(x') and colours and candidate sets for every vertex in N(x'). For each y ∈ N(x) ∩ N(x'), the candidate set for y will be a subset of C_y. For this, let Z(x') := N_G_c^*(τ(x),U_j'), so |Z(x')| ≥ d|U_j'|/4 since τ(x) ∈ Z(x). Write N(x')={w_1,…,w_k} and let b_1,…,b_k be such that ϕ(w_i)=b_i. For each w_i ∈ N(x), we already defined C_w_i; for the other w_i, let C_w_i := V_b_i. Using the fact that G^j'b_i[Z(x'),C_w_i] is (√(),d/2)-regular, exactly the same argument for N(x') as for N(x) implies that we can find * for each i ∈ [k], c_i' ∈_j'b_i which are distinct and disjoint from {c^*,c_1,…,c_ℓ}∪(E(M_p)) and * Z_k(x') ⊆ Z(x') with |Z_k(x')| ≥ (d/6)^k|Z(x')| and d_G_c_i'(z,C_w_i) ≥ d|C_w_i|/6 for all i ∈ [k] and z ∈ Z_k(x'). We let τ(x') be an arbitrary vertex of Z_k(x') and for each i ∈ [k], let T_w_i := N_G_c_i'(τ(x'),C_w_i) and (x'w_i):=c_i'. Note that τ(x') ∈ Z(x') ⊆ N_G_c^*(τ(x)) so τ(x)τ(x') ∈ G_c^* = G_(xx'). For y_i ∈ N(x) N(x'), C_y_i has not changed since it was first defined, and we let T_y_i := C_y_i. We have |T_y| ≥ d^2|V_ϕ(y)|/36 ≥ d^2m/36 for all y ∈ N(x) ∪ N(x'). Thus we can complete the iteration and embed the whole of M. This completes the required extension of τ and , so we have obtained τ: X → V and : E(M) ∪ E(H[X,Y]) →, with (E(M))=D. Finally, we extend the embedding to the whole of H by applying Lemma <ref>. Indeed, let F' be the subtemplate of F induced by V',', where V' = {V_1',…,V_r'} and V_i' := V_i τ(X), and '_e := _e ((E(M) ∪ E(H[X,Y])). Lemma <ref>(ii) implies that F' is a template with parameters (m/2,2,d/2,/2). For each y ∈ Y, we have |T_y ∩ V_ϕ(y)'| ≥ d^2|V_ϕ(y)|/36-|ϕ^-1(ϕ(y))| ≥ d^2m/36-_1 m ≥ d^2 m/37. Similarly for each w ∈ W, we have |T_w ∩ V_ϕ(w)'| ≥ (ν-_1)m ≥ν m/2. For all vertices y of H for which T_y is not defined, let T_y := V_ϕ(y). Thus we can apply Lemma <ref> with parameters m/2,2,2_1,min{ν,d^2/37},d/2,/2 playing the roles of m,,,ν,d,, target sets T_y and Y=∅ (so no candidate sets), to find the desired embedding. Most of the required properties are immediate but we justify why (iii) holds. If xx' ∈ E(M) then we already showed that τ(x)τ(x') ∈ G_(xx'). If xy ∈ E(H) E(M) where x ∈ X, we have y ∈ Y, and then the choice of M guaranteed that N_H(y,X) ⊆{x,x'} where xx' ∈ E(M) and, if this neighbourhood equals {x}, we chose τ(y) ∈ T_y ⊆ N_G_(xy)(τ(x)), while if it equals {x,x'} we chose τ(y) ∈ T_y and T_y ⊆ N_G_(xy)(τ(x)) ∩ N_G_(x'y)(τ(x')). If xy ∈ E(H)- X, then this follows from Lemma <ref>. Next we state the usual blow-up lemma, which we will use in the proof of our transversal blow-up lemma. Note that the lemma is usually stated in terms of the stronger `superregular' condition rather than `half-superregular', but the same proof applies. Alternatively, one can apply Lemma <ref>. [Blow-up lemma <cit.>] Let 0 < 1/m ≪,≪ν,d,,1/,1/r ≤ 1. * Let R be a graph on vertex set [r]. * Let G be a graph with vertex classes V_1,…, V_r where m≤ |V_i| ≤ m/ for all i ∈ [r], such that G[V_i,V_j] is (, d)-half-superregular whenever ij ∈ E(R). * Let H be a graph with (H) ≤ for which there is a graph homomorphism ϕ: V(H) → V(R) such that |ϕ^-1(j)| ≤ |V_j| for all j ∈ [r]. * For each i ∈ [r], let U_i ⊆ϕ^-1(i) with |U_i| ≤ m and suppose there is a set T_x ⊆ V_i with |T_x| ≥ν m for all x ∈ U_i. Then there is an embedding of H inside G such that for every i ∈ [r], every x ∈ U_i is embedded inside T_x. The final embedding lemma that we prove in this section applies to embed a spanning graphH which consists of small components and such that a small fraction of its vertices have target sets. The template F is required to be semi-super (every vertex has large total degree) and rainbow, since H could have many edges in every pair, and additionally F must contain many more colours than required for a transversal embedding. Let 0 < 1/m ≪,μ,≪≪ν,d,, 1/,1/r ≤ 1/2 where r is an integer. * Let R be an r-vertex graph and let F=(V,,G) be a semi-super rainbow R-template with parameters (m,,d,). * Let n:=|V| and suppose that H is an n-vertex graph with (H) ≤ which is the union of vertex-disjoint components of size at most μ n, and there is a graph homomorphism ϕ of H into R such that |ϕ^-1(j)| = |V_j| for all j ∈ [r] and e(H[ϕ^-1(i),ϕ^-1(j)])≤ |_ij|- m for all ij ∈ E(R). * For each i ∈ [r], let U_i ⊆ϕ^-1(i) with |U_i| ≤ m and suppose there is a set T_x ⊆ V_i with |T_x| ≥ν m for each x ∈ U_i. Then there is a transversal embedding of H inside F such that for every i ∈ [r], every x ∈ U_i is embedded inside T_x. Choose new parameters μ',,, satisfying ,μ,≪μ' ≪≪≪≪. Note that n and m are similar in size, since rm ≤ n ≤ rm/, so in particular n/r ≤ m ≤ |V_j| ≤ m/≤ n/(r) for each j. We have that H is the vertex-disjoint union of connected components H_1,…,H_t each of size at most μ n. We will partition each ϕ^-1(j) into s parts of size roughly m and one part of size roughly μ' m that respect components. For each h ∈ [t] and j ∈ [r], let A_hj := V(H_h) ∩ϕ^-1(j) and a_hj:=|A_hj|. Let t^* be the largest integer such that b_0j:=|B_0j| ≥μ'm for all j ∈ [r], where B_0j := A_t^*+1,j∪…∪ A_tj. Obtain s ∈N and B_ij⊆ϕ^-1(j) for i ∈ [s], j ∈ [r] iteratively as follows. Let ℓ_0:=0 and a_0j:=0 for all j ∈ [r] and do the following for i ≥ 1. If |ϕ^-1(j)|-(a_1j+… + a_ℓ_i-1j)>q:=2(+1)^r-1 m for all j ∈ [r], let B_ij:= A_ℓ_i-1+1,j∪…∪ A_ℓ_ij where ℓ_i is the smallest integer so that b_ij := |B_ij| is at least m for all j ∈ [r]. Otherwise, set s:= i, let B_sj:=A_ℓ_s-1+1,j∪…∪ A_t^*j, and let b_sj:=|B_sj|. This process always terminates since we remove at least rγ m vertices each time, so defines a partition V_j = B_0j∪ B_1j∪…∪ B_sj for each j ∈ [r]. We have b_0j + b_1j+…+b_sj=|ϕ^-1(j)| = |V_j|. We claim that μ' m ≤ b_0j ≤ 2(+1)^r-1μ' m for all j ∈ [r], m ≤ b_ij ≤ 2(+1)^2r-2 m for all i ∈ [s],j ∈ [r] and s ≤ 1/(). To prove the claim, we first note the following. Since every H_h is a connected graph with (H_h) ≤, |A_hj| ≤ |V(H_h)| ≤ (+1)^r-1|A_hj'| for every j,j' ∈ [r] and h ∈ [t]. (The second inequality can be proved by induction of the number of parts of H_h.) By construction, for every j ∈ [r] we have b_0j≥μ'm. There at least one j' ∈ [r] for which |B_0j'| ≤μ'm+μ n ≤ 2μ'm, otherwise we would have chosen a larger t^*, and so the required upper bound on every b_0j follows from (<ref>). For the second part, suppose first that i ∈ [s-1]. Then every b_ij≥ m by construction. Also there is some j' ∈ [r] for which b_ij'≤ m + μ n ≤ 3 m/2, otherwise ℓ_i would be smaller. Thus b_ij≤ 3(+1)^r-1 m/2 for all j ∈ [r] by (<ref>). Now consider s=i. There is at least one j^* ∈ [r] which caused the partitioning to stop during the s-th step due to q ≥ |ϕ^-1(j^*)|-(a_1j^*+…+a_ℓ_s-1j^*) = |ϕ^-1(j^*)|-(b_1j^*+…+b_(s-1)j^*)=b_0j^*+b_sj^*. Since the partitioning did not stop during step s-1, we similarly have b_0j^*+b_(s-1)j^*+b_sj^*>q. Thus b_sj^*≥ q-2(+1)^r-1μ' m -3(+1)^r-1 m/2 ≥ m. Altogether, m ≤ b_sj^*≤ q. For any j ∈ [r] which did not cause the partitioning to stop, we have b_0j+b_sj > q and hence b_sj >q-2(+1)^r-1μ' m ≥ m. Moreover, (<ref>) implies that b_0j+b_sj=a_ℓ_s-1+1,j+… +a_tj≤ (+1)^r-1(a_ℓ_s-1+1,j^*+… +a_tj^*) = (+1)^r-1(b_0j^*+b_sj^*) ≤ (+1)^r-1q. Thus m ≤ b_sj≤ (+1)^r-1q. Thus for every j ∈ [r] we have m ≤ b_sj≤ (+1)^r-1q, and hence m ≤ b_ij≤ (+1)^r-1q for all i ∈ [s] and j ∈ [r]. For the third part, the lower bound implies that s m ≤ |V_j| ≤ m/ and so d ≤/, completing the proof of the claim that (<ref>) holds. Another related estimate that will be useful later is b_0j-sr^1/3 m > μ' m-(r|V_j|/ m^1/3 m) > μ' m - r^1/3m/≥μ' m/2. Let B^i := ⋃_j ∈ [r]B_ij. We have e(H[B^i]) ≤|B^i| ≤ 2(+1)^2r-1r m for all i ∈ [s] and e(H[B^0]) ≤|B^0| ≤ 2(+1)^r rμ' m. In the next part of the proof, we will embed H[B^1],…,H[B^s] in turn. For this, we first partition each V_j into sets for each stage of the embedding, and find a buffer set of colours which will be used in the final part of the embedding. Given an edge xy of G, write c(xy) := {c ∈: xy ∈ G_c}. For each jj' ∈ E(R) and x ∈ V_j, define M_jj'(x) := {y ∈ V_j': |c(xy)| ≥ d|_jj'|/2}. We claim that each such set satisfies |M_jj'(x)| ≥ d |V_j'|/2. Indeed, semi-superregularity implies that d|V_j'||_jj'| ≤∑_c ∈_jj'd_G_c(x,V_j') = ∑_y ∈ V_j'|c(xy)| ≤ |M_jj'(x)||_jj'| + |V_j'|d|_jj'|/2, as required. For each j ∈ [r], let V^0_j ∪ V^1_j ∪…∪ V^s_j be a random partition of V_j into parts of size b_0j-sr^1/3 m, b_1j+r^1/3 m,…,b_sj+r^1/3 m respectively. For every i ∈ [s], j ∈ [r] and x ∈ B_ij, we will embed x into V^i_j, which has size slightly larger than necessary. For each e ∈ E(R), let _e^0 be a uniform random subset of _e of size |_e| and let _e := _e _e^0. By a Chernoff bound, using (<ref>) and (<ref>) to see that all parts are sufficiently large, we may assume that the following hold: * for all jj' ∈ E(R), x ∈ V_j and y ∈ M_jj'(x), we have |c(xy) ∩_jj'^0| ≥ d|^0_jj'|/4; * for all jj' ∈ E(R) and x ∈ V_j we have |M_jj'(x) ∩ V_j'^0| ≥ d|V^0_j'|/4; * for all 0 ≤ i ≤ s and x ∈ V(B^i) we have |T_x'| ≥ν |V^i_ϕ(x)|/2, where T_x' := T_x ∩ V_ϕ(x)^i. Indeed, each of these lower bounds is at most half the expectation of the quantity in question. We iteratively construct the transversal embedding of H[B^1 ∪…∪ B^s]. Suppose that we have obtained an embedding g_i-1 for some i ≥ 1, such that * g_i-1 is a transversal embedding of H[W_i-1], where W_i-1 := B^1 ∪…∪ B^i-1; * g_i-1(x) ∈ V^i'_j for every x ∈ B_i'j⊆ W_i-1; * for every jj' ∈ E(R) and every embedded pair x,y of vertices where xy ∈ E(H[V_j,V_j']), the colour of xy is in _jj' and these colours are distinct over the embedding; and * if x ∈ U_j ∩ W_i-1, then g_i-1(x) ∈ T_x for every j ∈ [r]. That is, g_i-1 consists of injections τ:W_i-1→ V and : E(H[W_i-1]) → such that τ(x)τ(y) ∈ G_(xy), with the stated properties. Now we will extend g_i-1 to a transversal embedding g_i of W_i := W_i-1∪ B^i with the analogous properties for i. First observe that the vertices we are about to embed, that is, B^i = W_i W_i-1, have no previously embedded neighbours. Therefore it suffices to find a transversal embedding of B^i when B_ij is embedded into V^i_j for each j ∈ [r] which uses unused colours. Obtain _e^i from _e by deleting any colour we have used in the embedding g_i-1. Every colour cluster in the original template contains extra colours, so for every jj' ∈ E(R) , we have |^i_jj'| ≥ |_jj'| - ∑_1 ≤ℓ≤ i-1e(H[B_ℓ j,B_ℓ j'])- |_jj'| ≥ (1-)|_jj'| - e(H[B^i]) ≥ m/2. For each j ∈ [r] and j' ∈ N_R(j), let V^i,j', bad_j := {v ∈ V^i_j : ∑_c ∈_jj'^id_G_c(v,V^i_j') < 2d|V^i_j'||_jj'^i|/3}. Lemma <ref>(i) implies that G^i_jj' := (G_c[V^i_j,V^i_j']: c ∈_jj'^i) is (√(),d/2)-regular and thus, by Lemma <ref>(i) and (<ref>), we have that |V^i,j', bad_j| < √() |V^i_j| < ^1/3m. For each i ∈ [s] and j ∈ [r], obtain a subset Y^i_j of V^i_j by removing V^i,j', bad_j for each j' ∈ N_R(j) and enough additional arbitrary vertices so that |Y^i_j|=|V^i_j|-r^1/3 m=b_ij. Let Y^i := {Y^i_1,…,Y^i_r} and let ^i:= ⋃_e ∈ E(R)^i_e. We claim that the subtemplate F_i := (Y^i,^i,G^i) of F induced by Y^i, ^i is a semi-super rainbow R-template with parameters ( m,√(),d/2,1/(2(+1)^2r-2)). That vertex clusters have suitable sizes follows from (<ref>), and that colour clusters have suitable sizes follows from (<ref>). We have seen that G^i_jj' is (√(),d/2)-regular. Moreover, for all jj' ∈ E(R) and v ∈ Y^i_j, since v ∉ V^i,j', bad_j, we have ∑_c ∈^i_jj'd_G_c(v,Y^i_j') ≥ 2d|V^i_j'||^i_jj'|/3-^1/3m|^i_jj'| > d|V^i_j'||^i_jj'|/2, so G^i_jj' is (√(),d/2)-semi-superregular. This completes the proof of the claim. Therefore, for each jj' ∈ E(R), we can apply Proposition <ref> to see that the -thick graph T^_F_i is such that T^i_jj' := T^_F_i[Y^i_j,Y^i_j'] is (√(),d/4)-half-superregular for all jj' ∈ E(R). Apply Theorem <ref> (the blow-up lemma) with target sets T_x' and parameters m,√(),√(),ν/2,d/4 playing the roles of m,,,ν,d to embed H[B ^i] into T^i_jj' such that for every j ∈ [r], every x ∈ U_j ∩ B^i is mapped to T_x' ⊆ V^i_j, which is possible since, by <ref>, |T_x'| ≥ν m/2 and |U_j ∩ B^i| ≤ m. Since for each edge of T^i_jj', the number of unused colour graphs it lies in is |^i_jj'| (<ref>)> m/2 > 3(+1)^2r-1r m (<ref>)> e(H[B^i]), we can greedily assign colours to the embedding. This completes the construction of g_i which satisfies (i)–(iv). Thus we can obtain g_s with the required properties. At the end of this process, the unembedded part of H is H[B^0], and the set of vertices of V_j which are not an image of any embedded vertex of H is precisely Z_j := V^0_j ∪⋃_i ∈ [s](V^i_j Y^i_j), which has size b_0j. We will use the colours in ^0:= ⋃_e ∈ E(R)^0_e to embed each B_0j into Z_j. We claim that the subtemplate F_0 := (Z,^0,G_0) of F induced by Z:=(Z_j: j ∈ [r]),^0 is a semi-super rainbow template with parameters (μ' m,√(),d^2/16,1/(2(+1)^r-1)). Indeed, (<ref>) implies that vertex clusters have sizes satisfying μ' m ≤ b_0j = |Z_j| ≤ 2(+1)^r-1μ' m. Colour clusters have size |^0_jj'| = |_jj'| ≥ m > μ' m. Lemma <ref>(i) implies that G^0_jj':=(G_c[V^0_j,V^0_j']: c ∈^0_jj') is (√(),d/2)-regular for every jj' ∈ E(R). Additionally, for all x ∈ Z_j ⊆ V_j, ∑_c ∈^0_jj'd_G_c(x,Z_j') = ∑_y ∈ V^0_j'|c(xy) ∩^0_jj'| ≥∑_y ∈ M_jj'(x) ∩ V^0_j'|c(xy) ∩^0_jj'| <ref>,<ref>≥ d |V^0_j'|/4 · d|^0_jj'|/4 =d^2|V^0_j'||^0_jj'|/16, so the template is semi-super. As before, we can apply Proposition <ref> to see that the -thick graph T^_F_0 is such that T^0_jj' := T^_F^0[Z^0_j,Z^0_j'] is (√(),d/4)-half-superregular for each jj' ∈ E(R). We have m < √()|Z_j|, and <ref> implies that |T'_x| ≥ν|V^0_ϕ(x)|/2 ≥ν|Z_ϕ(x)|/3 for all x ∈ V(B^0) with a target set. Thus we can apply Theorem <ref> (the blow-up lemma) with target sets T_x' and parameters μ'm,√(),√(),ν/3,d/4 playing the roles of m,,,ν,d to embed H[B^0] into T^0_jj' such that for every j∈ [r], every x ∈ U_j is mapped to T_x' ⊆ V^0_j, which is possible by <ref>. The number of colours on each edge of T^0_jj' is at least |^0_jj'| = |_jj'| ≥ m > 3(+1)^r rμ' m (<ref>)>e(H[B^0]). Thus we can again greedily assign colours of ^0_jj' to obtain a transversal embedding of H[B^0] using colours untouched by the previous embedding, which completes the transversal embedding of H. Finally we state a transversal blow-up lemma, which we prove in the next section by combining the embedding lemmas of this section. Its statement is slightly stronger than Theorem <ref> since the template is super rather than half-super, but the proof of Theorem <ref> is a simple matter of applying Lemma <ref>(iv) (essentially Lemma <ref>) first. The details can be found at the end of the next section. [Transversal blow-up lemma] Let 0 < 1/m ≪,μ,≪ν,d,,1/,1/r ≤ 1/2 where r is an integer. * Let R be a simple graph with vertex set [r] and let F=(V,,G) be a rainbow super R-template with parameters (m,,d,). * Suppose that H is a μ-separable graph with (H) ≤ and there is a graph homomorphism ϕ of H into R such that |ϕ^-1(j)| = |V_j| for all j ∈ [r] and e(H[ϕ^-1(i),ϕ^-1(j)])=|_ij| for all ij ∈ E(R). * For each i ∈ [r], let U_i ⊆ϕ^-1(i) with |U_i| ≤ m and suppose there is a set T_x ⊆ V_i with |T_x| ≥ν m for each x ∈ U_i. Then there is a transversal embedding of H inside F such that for every i ∈ [r], every x ∈ U_i is embedded inside T_x. § PROOF OF THE TRANSVERSAL BLOW-UP LEMMA §.§ Sketch of the proof The proof is a combination of the usual blow-up lemma applied to the thick graph, together with the rainbow partial embedding lemmas in the previous section and the colour absorbing approach pioneered by Montgomery, Müyesser and Pehova in <cit.>. We begin by outlining the steps of the proof in <cit.>, which is a common sequence of steps for embeddings using absorption, and which we shall also use. The transversal bandwidth theorem in <cit.> also follows this general outline, as well as using the (usual) blow-up lemma as a key tool. Let H=H_∪ H_∪ H_∪ H_∪ H_ be a suitable partition, which will be carefully chosen. These parts will be embedded in turn, each time using new colours to extend the transversal embedding. Step 0.Embed a small `connecting graph'. Embed H_ whose vertex set, as guaranteed by separability, is such that its removal disconnects H into very small components (Lemma <ref>). Step 1.Find a colour absorber. Embed H_ into the edge-coloured graph G of G, and find disjoint sets A,B⊆ of colours such that, given any e(H_)-|A| colours in B, we can choose the colours of the edges of H_ using exactly those colours and the colours in A. Step 2.Use most of the colours outside A∪B. Embed H_ using most of the colours outside A∪B (Lemma <ref>). Step 3.Use the remaining colours outside A∪B. Embed H_ using every unused colour in (A∪B) as well as some colours of B (Lemma <ref>). Step 4.Embed the remaining vertices of H using colours in B. Embed H_ using colours of B (Lemma <ref>). Step 5.Use the colour absorber. The colours for H_ will consist of A along with the unused colours in B. We recall that in <cit.>, these steps were used to find transversal embeddings of spanning trees, and F-factors. When embedding an F-factor where F is a small graph, one can embed each copy of F in turn, ensuring that they are disjoint. Similarly, to embed a tree, one can embed small subtrees one by one, ensuring that the single vertex of the current tree that was already embedded matches up. The new difficulty in our setting (and in <cit.>) is that we would like to embed graphs which are more highly-connected. Nevertheless, they have the property that a small fraction of edges can be removed to produce a graph which consists of small (but still linear) connected components. One difficulty is that we do not have a minimum degree condition. Every vertex has large total degree and every colour graph is dense, but there could be a small proportion of vertices which do not have any edges of a given colour. For Step 1, we use the following lemma from <cit.> which is their key tool for colour absorption. Here we present a slightly weaker version with modified parameters for simplicity. Let 0 < _1 ≪_2 ≪_3<1 and let ℓ,m,n be integers with ℓ=_1 m and 1≤ m ≤_3^2n/8. Suppose that G=(U,) is a bipartite graph with |U|=m and ||=n such that d(v,)≥_3 n for each v∈ U. Then there exist disjoint subsets A, B⊆ with |A|=m-ℓ and |B|= _2 n such that, for every subset B^0 of B with size ℓ, we can find a perfect matching between U and A∪B^0. §.§ Proof of Theorem <ref> Without loss of generality, we may assume that ≪μ≪. We further define constants _1,p_,p_,p_,_3,ν' so that altogether 0<1/m ≪≪μ≪≪_1 ≪ p_≪ p_≪ p_≪_3 ≪ν' ≪ν,d,, 1/,1/r ≤ 1/2. Let p_:=1-(p_+p_+p_). Let R be a graph on vertex set [r], F=(V,,G) a rainbow super template with parameters (m,,d,) and H a graph as in the statement. Let n := |V| be the number of vertices in the template, which equals v(H). Note that r m ≤ n ≤ rm/, and m ≤ |ϕ^-1(i)| = |V_i| ≤ m/ for all i ∈ [r], and m ≤ |_e| ≤ m/ for all e ∈ E(R), where the final assertion follows from the fact that every |_ij| = e(H[ϕ^-1(i),ϕ^-1(j)]) ≤ |ϕ^-1(i)| ≤ m/. Preparation of H. First we will choose a partition H=H_∪ H_∪ H_∪ H_∪ H_ such that each part H_∘ has roughly a given number p_∘ n of vertices, and moreover e(H_∘[ϕ^-1(i),ϕ^-1(j)]) ≈ p_∘ e(H[ϕ^-1(i),ϕ^-1(j)]) for all ij ∈ E(R) and ∘∈{,,,}. Since H is μ-separable, there is a set X of size at most μ n such that H-X consists of disjoint components H_1,…, H_t, each of size at most μ n. For all ℓ∈ [t] and ij ∈ E(R), let H^ℓ_ij := H_ℓ[ϕ^-1(i),ϕ^-1(j)] and let n^ℓ_j := |V(H_ℓ) ∩ϕ^-1(j)| and h^ℓ_ij := e(H^ℓ_ij). Let H_ := H[X]. Note that |X|=|V(H_)| ≤μ n ≤ rμ m/≤√(μ)m. Independently, for each ℓ∈ [t], add H^ℓ:=⋃_ij ∈ E(R)H^ℓ_ij to H_∘ with probability p_∘ for ∘∈{, , , }. Then, for all i ∈ [r] we have 𝔼(|V(H_∘)∩ϕ^-1(i)|)=p_∘|V_i| ± |X| ≥ (p_∘-√(μ)) m, and for all ij ∈ E(R) we have 𝔼(e(H_∘[ϕ^-1(i),ϕ^-1(j)])) =p_∘∑_ℓ∈[t] h^ℓ_ij = p_∘e(H[ϕ^-1(i),ϕ^-1(j)])±|X| =p_∘|_ij| ±√(μ)m ≥ (p_∘-√(μ)) m. Now, t ≥ (1-μ)/μ≥ 1/(2μ) and e(H^ℓ_ij) ≤ n^ℓ_i≤μ n for all ℓ∈ [t] and ij ∈ E(R), and each of these expectations is at least p_ m/2, so a Chernoff bound implies that these values are within a multiplicative factor of (1±1/3) of their expectations with probability at least 1-4(r+r2)exp(-( p_ m/2)^2/9/1/2μ(μ n)^2) ≥ 1 - 4r^2exp(-p_^2^4/18rμ) ≥1/2. Thus we may assume that, for all ∘∈{,,,}, n^∘_i := |V(H_∘) ∩ϕ^-1(i)| =(1±12)p_∘|V_i| ∈ [p_∘ m/2,3p_∘ m/(2)] for all i ∈ V(R); h^∘_ij := e(H_∘[ϕ^-1(i),ϕ^-1(j)]) = (1±12)p_∘ |_ij| ∈ [p_∘ m/2, 3 p_∘m/(2)] for all ij ∈ E(R). We write H^∘_ij := H_∘[ϕ^-1(i),ϕ^-1(j)] for all ij ∈ E(R). Every vertex x ∈ V(H) V(H_) lies in some H_∘ where ∘∈{, , , }, and x can only have neighbours inside H_∘ or V(H_). Note here that for each ∘∈{,,,}, H_∘ is the union of vertex-disjoint components of size at most μ n ≤ (2μ/p_∘)|V(H_∘)| ≤√(μ)|V(H_∘)|, by (<ref>). To find a transversal embedding of H, we need to define two injective maps τ: V(H)→ V and : E(H)→ where τ(x)τ(y) ∈ G_(xy) for all xy ∈ E(H). For this, we will define τ(x) for every vertex in H_, followed by every vertex in H_, H_, H_, H_. When defining τ for vertices in H_∘, we simultaneously define for all future incident edges except for ∘=, for which colours are defined at the end. Whenever we define τ(y) for some y we always choose τ(y) ∈ V_ϕ(y) and if y has a target set T_y, we ensure τ(y) ∈ T_y. We also always choose (xy) ∈_ϕ(x)ϕ(y). Step 0. We first embed H_ into F by applying Lemma <ref>, as follows. Recall that X=V(H_), and define Y := N_H(X) X. Let H'_ be the graph with vertex set X ∪ Y and edge set E(H[X ∪ Y]) E(H[Y]). Then (<ref>) implies that |V(H'_)| ≤ (+1)|V(H_)| (<ref>)≤ 2√(μ) m and hence e(H'_) ≤μ^1/3m. For each w ∈ V(H'_) where T_w is not defined, let T_w := V_ϕ(w). Lemma <ref> applied with 2√(μ) playing the role of (and the other parameters the same) implies that there are injective maps τ: X → V and : E(H'_) → with τ(x)τ(x') ∈ G_(xx') for all xx' ∈ E(H_), and so that τ(x) ∈ T_x for all x ∈ X, and so that there are candidate sets C_y for each y ∈ Y where C_y ⊆ V_ϕ(y)τ(X) such that C_y ⊆⋂_x ∈ N_H(y) ∩ XN_G_(xy)(τ(x)) ∩ T_y and |C_y| ≥ν' m. We update the target sets, by defining T^1_y := C_y ⊆ T_y for y ∈ Y where this set is defined, and T_y^1 := T_y if y ∉ V(H_') and this set is defined. This updates target sets T_y^1 for all unembedded vertices y, and all target sets have size at least ν'm. Note that for all y ∈ Y, if we choose τ(y) ∈ T_y^1, then τ(x)τ(y) ∈ G_(xy) for all x ∈ N_H(y) ∩ X, so the colour (xy) will be used as desired. For each i ∈ [r], let U^1_i be the set of vertices y in ϕ^-1(i) with a target set T_y^1. We have |U^1_i| ≤ |U_i|+|Y| ≤ m+2√(μ) m ≤ 2 m. This set will only shrink during the rest of the proof as vertices with target sets are embedded. Some target sets of vertices in U^1_i will also shrink, but will remain large enough. Let n^_i := |V(H_) ∩ϕ^-1(i)| for all i ∈ [r] and h^_ij := e(H'_[ϕ^-1(i),ϕ^-1(j)]) for all ij ∈ E(R). We have |V_i| =n^_i+n^_i+n^_i+n^_i+n^_i for all i ∈ [r] and |_ij| =e(H[ϕ^-1(i),ϕ^-1(j)])=h^_ij+h^_ij+h^_ij+h^_ij+h^_ij for all ij ∈ E(R). Let ℱ' = (V',',G') be the subtemplate of F obtained by deleting the vertices τ(X) and the colours (E(H_')) from ℱ, so, defining for all i ∈ [r] and e ∈ E(R) V'_i := V_i τ(X), '_e := _e (E(H'_)), we have (1-μ^1/4)|_e| (<ref>),(<ref>)≤ |_e'|(<ref>)=h^_e+h^_e+h^_e+h^_e. By Lemma <ref>(ii), F' is a rainbow super R-template with parameters (m/2,2,d/2,/2). Step 1. Next we embed (the vertices but not the colours of) H_ into ℱ'. For each i ∈ [r], let V^_i ⊆ V_i' be a uniform random subset of size n^_i. Lemma <ref>(iii) applied with F',p_/3,6 playing the roles of F,,k implies that the subtemplate F_ of F' induced by (V^_i: i ∈ [r]) is a rainbow super R-template with parameters (p_ m/3,√(),d^2/64,/6). Furthermore, a Chernoff bound implies that we may assume |T^1_y ∩ V_ϕ(y)^| ≥ 2ν' n_i^/3 ≥ν'|V_i^|/3 for all y ∈ V(H_) with a target set T_y^1. Let V_i” := V_i' V^_i. We have |V_i”| ≥ |V_i|-2√(μ) m-n^_i (<ref>)≥ (1-2p_)|V_i| ≥ m/2. Lemma <ref>(ii) applied with F,2p_ playing the roles of F, implies that the subtemplate F” induced by V_i” is a rainbow super R-template with parameters (m/2,2,d/2,/2), and a Chernoff bound implies that we may assume |T^1_y ∩ V_ϕ(y)”| ≥ 2(ν'm-n^_i)/3 ≥ν' m/2 for all y with a target set T^1_y. We will embed H_ into the _3-thick graph T^_3_ℱ_. By Proposition <ref>, for every ij ∈ E(R), T^_3_ℱ_[V_i^,V_j^] is (√(),d^2/128)-half-superregular. Apply Theorem <ref> (the blow-up lemma) with target sets T^1_y∩ V_ϕ(y)^ and parameters √(),√(),√(ν'),d^2/128 playing the roles of ,,ν,d to find an embedding τ of H_ into T^_3_F_ such that τ(x) ∈ V_ϕ(x) for all x ∈ V(H_) and τ(y)∈ T_y^1 for every y ∈ U^1_i with i ∈ [r]. Note that we haven't yet defined (e) for any e∈ E(H_). However, by our definition, for every such e=xy we have τ(x)τ(y)∈ E(G_c) for at least _3 |_e'| colours c∈_e'. For each e ∈ E(R), let G_, e be the auxiliary bipartite graph with vertex classes Z_e:= {τ(x)τ(y): xy ∈ E(H_^e)} and '_e, where {τ(x)τ(y),c} is an edge whenever τ(x)τ(y) ∈ E(G_c). Then (<ref>) implies that |Z_e|=h^_e ≤ 3p_ |_e|/2 ≤ 2p_|_e'|, and by construction, every τ(x)τ(y) in Z_e has degree at least _3|_e'| in G_,e. For each e ∈ E(R), define constants ℓ_e and p_e via ℓ_e := _1 h^_e and p_e|'_e| := (1-p_)|_e'|-(h^_e-ℓ_e)-h^_e = h^_e+h^_e-p_|_e'|+_1h^_e (<ref>)= (1±14)h^_e ∈ [p_|'_e|/3,3p_|'_e|]. Thus for each e ∈ E(R) we can apply Lemma <ref> with G_,e,Z_e,'_e,_1,p_e,_3 playing the roles of G,U,,_1,_2,_3 to obtain disjoint sets A_e, B_e⊆_e' such that * |A_e|=h^_e -ℓ_e (<ref>)≤ 2p_|_e'|; * |B_e|=p_e|_e'|; * for any set B^0_e⊆B_e of size ℓ_e, there exists a colouring using colours from A_e∪B^0_e that makes the embedding τ of H^_e rainbow. That is, there is a bijection _e: E(H^_e) →A_e ∪B_e^0 such that τ(x)τ(y) ∈ G__e(xy) for all xy ∈ E(H^_e). Preparation for Steps 2–4. Recall that the template F”=(V”,',G”) has vertex clusters V” := (V_i” := V_i' V^_i: i ∈ [r]), colour clusters ('_e: e ∈ E(R)) and graphs (G”)^ij = (G_c[V_i”,V_j”]: c ∈'_ij), and F” is a rainbow super R-template with parameters (m/2,2,d/2,/2). Let n^_i := n^_i+n^_i for each i ∈ [r] and let p_ := p_+p_. During the rest of the proof, for each ♢∈{,} we will identify pairwise disjoint vertex sets V^♢_i in the vertex clusters V_i”, i ∈ [r], so that |V^♢_i|=n^♢_i, and V_i” = V^_i ∪ V^_i, where V^♢ := {V^♢_1,…,V^♢_r} ∀♢∈{,}. We will choose τ(x) ∈ V^_ϕ(x) for all x ∈ V(H_) and τ(x) ∈ V^_ϕ(x) for all x ∈ V(H_) ∪ V(H_). We will identify pairwise disjoint colour sets ^∘, so that ^∘ = ⋃_e ∈ E(R)^∘_e ∀∘∈{,,}, _e' = ⋃_∘∈{,,}^∘_e and for each xy ∈ E(H_∘), we will ensure (xy) ∈ G_c for some c ∈^∘_ϕ(x)ϕ(y). Given such sets of vertices and colours for ∘∈{,,}, we define F_∘ := (V^∘',^∘,G^∘) to be the subtemplate of F” induced by V^∘',^∘, where '= and '='=. Note that these templates are always rainbow R-templates with pairwise disjoint sets of colours, but F_ and F_ share the same vertex clusters. We will now define the vertex sets V^∘'_i for ∘' ∈{,}, but colour sets will be defined sequentially, given the colours used in each step. For each i ∈ [r], do the following. For each j ∈ N_R(i), recall that (G_c[V_i”,V_j”]: c ∈'_ij) is (2,d/2)-superregular. By (<ref>) and <ref>, we have |B_ij| ≥ p_|'_ij|/2, so by Lemma <ref>(i), (G_c[V_i”,V_j”]:c ∈B_ij) is (4/p_,d/4)-regular and hence (√()/r,d/4)-regular. By Lemma <ref>(i), there is a set V^j_i ⊆ V”_i with |V^j_i| ≤√()|V_i”|/r such that ∑_c ∈B_ijd_G_c(v) ≥ d|V_j”||B_ij|/5 for all v ∈ V_i”V^j_i. Let V_i := ⋃_j ∈ N_R(i)V^j_i, so |V_i| ≤√()|V_i”|, and let V^_i be a uniform random subset of V_i”V_i of size n^_i, and let V^_i := V_i” V^_i. So |V^_i| =n^_i. Let T_y^2 := T_y^1 ∩ V^∘'(y)_ϕ(y) for all y ∈ U^1_i and i ∈ [r] (i.e. those y for which T_y^1 has been defined), where ∘'(y)= if y ∈ V(H_) and ∘'(y)= if y ∈ V(H_) ∪ V(H_). For all c ∈'_ij, the superregularity of our graph collections implies that E(e(G_c[V^_i,V^_j])) ≥(n^_i/|V_i”|)(n^_j/|V_j”|) (e(G_c[V_i”,V_j”])-√()|V_i”||V_j”|) ≥ d n^_i n^_j/3(<ref>)≥ d p_^2^2 m^2/12. For all i ∈ [r] and y ∈ U^1_i we have E(|T^2_y|) ≥ (n^∘'(y)_i/|V”_i|)(|T^1_y ∩ V”_i|-√()|V”_i|) (<ref>),(<ref>),(<ref>)≥ p_∘'(y)ν'm/5. Further, for all ij ∈ E(R) and v ∈ V^_i, since v ∉V^j_i we have E(∑_c ∈B_ijd_G_c(v,V^_j))≥(n^_j/|V_j”|)∑_c ∈B_ij(d_G_c(v,V_j”)-√()|V_j”|) (<ref>)≥ d n^_j|B_ij|/11. By Chernoff bounds, we may assume that each of the above quantities are close to their expectations, so * |T_y^2| ≥ p_∘'(y)ν'm/6 for all y ∈ U^1_i and i ∈ [r] (i.e. all those y which have a target set); * e(G_c[V^_i,V^_j]) ≥ dn^_i n^_j/13 for all ij ∈ E(R) and c ∈'_ij; * ∑_c ∈B_ijd_G_c(v,V^_j) ≥ dn^_j|B_ij|/12 for all ij ∈ E(R) and v ∈ V^_i. Let m' := p_ m/8. Now (<ref>) and <ref> imply that, for all i ∈ [r], the number |U^1_i| of vertices y ∈ϕ^-1(i) with a target set T^2_y satisfy |U^1_i| ≤ 2 m ≤√()m', and |T^2_y| ≥ p_ν'm/6 > ν'm'. For all i ∈ [r] we have |V^_i|=n^_i=n^_i+n^_i, so p_|V_i”|/3 (<ref>)≤ p_|V_i|/2 (<ref>)≤ |V^_i| (<ref>)≤ 2p_|V_i| (<ref>)≤ 3p_|V_i”|. We claim that the following properties about templates F_∘ with colour sets ^∘_e for ∘∈{,,} hold. * Let ^_e := C'_e (A_e ∪B_e) for all e ∈ E(R). Then F_, with vertex clusters (V^_i: i ∈ [r]), is a rainbow super R-template with parameters (m/4,4,d/4,/4). * Suppose ^_e = B_e ∪D_e where D_e ⊆^ for all e ∈ E(R). Then F_, with vertex clusters (V^_i: i ∈ [r]), is a rainbow R-template with parameters (m',√(),d/4,/24) with the property that e(G_c[V^_i,V^_j]) ≥ d|V^_i||V^_j|/22 for all c ∈D_e and ij ∈ E(R). * Suppose ^_e ⊆B_e and |B_e ^_e| ≤ 2p_ |_e'| for all e ∈ E(R). Then F_, with colour clusters (V^_i: i ∈ [r]), is a rainbow super R-template with parameters (m',√(),d/13,/24). These properties follow from Lemma <ref> applied to F_∘ as a subtemplate of F”=(V”,',G”) (which has parameters (m/2,2,d/2,/2)), as follows. First, <ref> holds by Lemma <ref>(ii) since F_ is a small perturbation of F”. Indeed, <ref> and <ref> implies that |'_e ^_e| ≤ (2p_+3p_)|'_e| ≤ 4p_|'_e| ≤√(p_)m/2, and |V_i” V^_i| = |V^_i| ≤ 3p_|V_i”| ≤√(p_)m/2 by (<ref>), so we can apply the lemma with 2,√(p_),d/2 playing the roles of ,,d to obtain <ref>. For <ref>, <ref> and (<ref>) imply that |^_e| ≥ |B_e| = p_e|'_e| > p_|_e'|/4. Together with (<ref>), this means that we can apply Lemma <ref>(i) with parameters m/2,2,p_/4,d/2,/2,12 playing the roles of m,,,d,,k to see that F_ is a rainbow R-template with parameters (m',√(),d/4,/24). The second property follows from <ref> since D_e ⊆^_e ⊆'_e. For property <ref>, we have |^_e| ≥ |B_e|-2p_|'_e| ≥ p_|_e'|/4 by (<ref>). Together with (<ref>), this means we can apply Lemma <ref>(i) with the same parameters as previously to see that F_ is a rainbow R-template with the given parameters. For superregularity, we have for all e=ij ∈ E(R) and v ∈ V^_i that ∑_c ∈^_ijd_G_c(v,V^_j) ≥∑_c ∈B_ijd_G_c(v,V^_j)-2p_|'_ij||V_j”| (<ref>),(<ref>),<ref>≥ dn^_j|B_ij|/12-2p_ m^2/^2 ≥ dn^_j|^ vx_ij|/13, and e(G_c[V^_i,V^_j]) ≥ dn^_i n^_j/13 for all c ∈^_ij by <ref>. Thus <ref>–<ref> all hold. Now it is a matter of embedding each H_∘ into its corresponding template, using suitable (unused) colours at each step. The image of V(H_) in each V_i will be the remaining vertices of V^_i after embedding H_, which is most of this set since H_ is much smaller than H_. Step 2. We embed H_ into ℱ_ using Lemma <ref> (embedding lemma with extra colours) with parameters m/4,4,√(μ),√(),2p_,ν'/7,d/4,/4 playing the roles of m,,μ,,,ν,d,. For this, we recall that H_ is the union of components of size at most √(μ)|V(H_)|, and there is a graph homomorphism from H_ into R. By <ref> and <ref>, it now suffices to check that colour clusters have a suitable size to apply the lemma. Indeed, for all e ∈ E(R) we have |^_e|-h^_e <ref>= |'_e|-|A_e|-|B_e|-h^_e <ref>,<ref>=|_e'|-(h^_e-ℓ_e)-p_e|_e'|-h_e^(<ref>)= p_|_e'| ∈ [p_ m/2,p_ m/]. Thus we can apply Lemma <ref> to obtain τ(V(H^)) and (E(H^)) where each y with a target set T^2_y is embedded inside it. For each e ∈ E(R), let D_e := ^_e (E(H_)) and let ^_e := B_e ∪D_e. Step 3. We embed H_ into F_ by applying Lemma <ref> (embedding lemma with target sets and prescribed colours) with m',√(),√(),√(p_),√(p_),p_,ν',d/4,/24 playing the roles of m,,,,_1,_2,ν,d,. To see that this is possible, we have |V(H_) ∩ϕ^-1(i)|=n^_i ≤ 2p_ m/≤√(p_)m'. We also have e(H_[ϕ^-1(i),ϕ^-1(j)]) = h^_ij(<ref>)≥ p_ m/2 ≥ p_ m'. Equation (<ref>) implies that the number of vertices with target sets and the size of these target sets are suitable. By (<ref>), for every e ∈ E(R) we have |D_e|=|_e^|-h_e^≤ p_ m/≤√(p_)m'. By <ref>, it remains to check that |^_e D| ≥ d|^_e|/4. Since the original template was rainbow, ^_e D = ^_e D_e = B_e. Equation (<ref>) implies that |B_e| ≥ |^_e| - √(p_)m' ≥ |^_e|/2 ≥ d|^_e|/4, as required. Thus we can apply Lemma <ref> to obtain τ(V(H_)) and (E(H_)) where each y with a target set T^2_y (i.e. those in U^1_ϕ(y)) is embedded inside it. Let ^_e := B_e (E(H_)) and V^_i := V^_i τ(V(H_)), so V^_i = n^_i. Step 4. Let F_' be the subtemplate of F_ induced by (V^_i: i ∈ [r]). It is a small perturbation: |V^_i V^_i| = n^_i ≤ 2p_m/≤ p_ m', so Lemma <ref>(ii) with m',√(),p_,d/13,/24 playing the roles of m,,,d, implies that F'_ is a rainbow super template with parameters (m'/2,2√(),d/26,/48). Let T^3_y := T^2_y ∩ V^_i for all i ∈ [r] and y ∈ V^_i ∩ U^1_i. We embed H_ into F'_ by applying Lemma <ref> with parameters √(μ),√(),_1^2,ν'/3 playing the roles of μ,,,ν (and template parameters as above). For this, we recall that H_ is the union of vertex disjoint components of size at most √(μ)|V(H_)| and there is a graph homomorphism from H_ into R. By (<ref>), there are suitably few vertices with target sets T^3_y, and all target sets are suitably large. By <ref>, it now suffices to check that every |^_e| - h^_e is large. We have |^_e|-h^_e = |B_e|-h^_e-h^_e <ref>= |_e'|-|A_e|-h^_e-h^_e-h^_e =h^_e-|A_e|=ℓ_e=_1 h^_e ≥_1p_ m/2 ≥_1^2 m'/2. Thus we can apply Lemma <ref> to obtain τ(V(H_)) and (E(H_)) where each y with a target set T^2_y is embedded inside it. Let ^_e := A_e ∪ (^_e (E(H_))). Step 5. Since _e' = ^_e ∪^_e ∪^_e ∪^_e is a disjoint union, and A_e ⊆^_e ⊆A_e ∪B_e, the set ^_e ∩B_e must have exactly the right size: |^_e ∩B_e| = |'_e|-h^_e-h^_e-h^_e-|A_e|=|'_e|-(|_e'|-h^_e)-|A_e|=ℓ_e. Thus, by <ref>, for each e ∈ E(R) we can find _e with image _e(E(H_))=^_e. Extending by all of these _e completes the transversal embedding. §.§ Proof of Theorem <ref> We may assume without loss of generality that 1/m ≪,μ,≪ν,d,,1/,1/r ≤ 1/2. Choose an additional constant ' with ≪' ≪ν,d,,1/,1/r such that the conclusion of Lemma <ref> holds with ,',d,,3 playing the roles of ,',d,,k. We may further assume that the conclusion of Theorem <ref> holds with ',d^2/2 playing the roles of ,d and the other parameters unchanged. Let G,,R,V:=(V_1,…,V_r) be as in the statement of Theorem <ref>. Let ϕ:V(H)→ V(R) be such that ϕ(x)=j whenever x ∈ A_j. So ϕ is a graph homomorphism. Our hypothesis is that F=(V,,G) is a rainbow half-super R-template with parameters (m,,d,). Lemma <ref>(iv) implies that for all c ∈ there is G_c' ⊆ G_c such that, writing G' := (G_c':c ∈), the template F'=(V,,G') is super with parameters (m,',d^2/2,). Apply Theorem <ref> to F' to obtain the required embedding. § PROOF OF THEOREMS <REF> AND <REF> In this section we prove two applications of our transversal blow-up lemma, to super uniformly dense graph collections (Theorem <ref>) and super uniformly dense 3-graphs (Theorem <ref>). The latter is an easy consequence of the former. Let ,d,,>0 be given and let r := +1. Choose additional constants such that 0<1/n_0 ≪η,μ≪≪_1 ≪_2 ≪…≪_r2+1≪ν' ≪≪,d,1/, where we are assuming without loss of generality that ≪,d/1/, and so that the conclusion of Lemma <ref> holds with 3,η^1/4,,^2/6 playing the roles of k,,',d, and Lemma <ref> holds with n_0/r,,√(_1),^4/72 playing the roles of m,,,d and Theorem <ref> holds with _2^2 n_0/2,√(),2μ,_1^1/3,ν',^9,/2 playing the roles of m,,μ,,ν,d,. Let n ≥ n_0 be an integer and let G=(G_c: c ∈) be a graph collection on a vertex set V of size n and H a graph on n vertices satisfying the conditions of the theorem. By assumption, n ≤ e(H)=|| ≤ n. Let m := ⌊ n/r⌋. The Hajnal-Szemerédi theorem implies that there is a partition V(H) = A_1 ∪…∪ A_r into parts of size m and m+1 such that all edges go between different parts. Let V=V_1 ∪…∪ V_r be a random partition of V with |V_i|=|A_i| for all i ∈ [r], and let V := {V_1,…,V_r}. We claim that with high probability, the following hold for any ij ∈ E(K_r)=[r]2: * for all V_h' ⊆ V_h with |V_h'| ≥η^1/4|V_h| for h=i,j and ' ⊆ with |'| ≥η^1/4||, we have ∑_c ∈'e_G_c(V_i',V_j') ≥ d|'||V_i'||V_j'|/2; * for each labelling {g,h}={i,j} and v ∈ V_g, we have ∑_c ∈d_G_c(v,V_h) ≥^2|V_h|||/6; * e(G_c) ≥^2|V_i||V_j|/6 for all c ∈. For <ref>, we have η n^3 < (d/2) ·η^3/4 n^3/(2r)^2 < dη^3/4 nm^2/2 ≤ d(η^1/4)^3|||V_i||V_j|/2 ≤ d|'||V_i'||V_j'|/2. The statement then follows directly from the definition of (d,η)-dense. To prove <ref>, for each vertex v ∈ V, there are at least ||/2 colours c ∈ for which d_G_c(v) ≥ n/2, and a Chernoff-type bound implies that, with high probability d_G_c(v,V_j) ≥|V_j|/3. Part <ref> is proved similarly by swapping colour and vertex. Parts <ref>–<ref> imply that the 3-graph G^(3),ij of G^ij := (G_c[V_i,V_j]: c ∈) is (η^1/4,^2/6)-half-superregular. Lemma <ref> and our choice of parameters implies that every G^(3),ij contains a spanning subhypergraph J^(3),ij which is (,^4/72)-superregular. Thus F := (V,,J) is a super K_r-template with parameters (m,,^4/72,), where J^ij is the graph collection whose 3-graph is J^(3),ij and _ij = for all ij ∈ E(K_r). Let d_ij:=e(H[A_i,A_j])/n for all ij ∈ E(K_r). We have ∑_ij d_ij =e(H)/n ≥. Thus there is at least one ij such that d_ij≥_r2+1. By the pigeonhole principle, there is ℓ∈ [r2] such that for all ij, either d_ij≤_ℓ, or d_ij≥_ℓ+1. Let P^< := {ij ∈[r]2: d_ij≤_ℓ}. We will embed those sparse H[A_i,A_j], with indices ij in P^<, using Lemma <ref> (Embedding lemma with target and candidate sets), while the remaining (dense) pairs will be embedded using Theorem <ref> (Transversal blow-up lemma) with target sets from the initial embedding of sparse pairs. Let X be the union of non-isolated vertices in H[A_i,A_j] over all ij ∈ P^<, let Y := N_H(X) X, and let H^< := H[X ∪ Y] H[Y], whose edge set is precisely those edges of H incident to X. We have v(H^<) ≤ 2r2_ℓ n ≤√(_ℓ)m. Apply Lemma <ref> to F with m,,√(_ℓ),ν',1,^4/72,,,r playing the roles of m,,,ν',ν,d,,,r, and T_w:=V_i for all w ∈ V(H^<) ∩ A_i and i ∈ [r], to find injective maps τ:X → V and : E(H^<) → such that τ(x)τ(x') ∈ J_(xx') for all xx' ∈ E(H[X]), for all i ∈ [r] we have τ(x) ∈ V_i for all x ∈ A_i, and for all y ∈ Y ∩ A_i there exists C_y ⊆ V_i τ(X) such that C_y ⊆⋂_x ∈ N_H^<(y) ∩ XN_G_(xy)(τ(x)) and |C_y| ≥ν' m. Let ':= (E(H^<)) and let V':={V_1',…,V_r'} where V_i':= V_iτ(X) for each i∈ [r]. By Lemma <ref>(ii) applied with = √(_ℓ), the template (V',',J') induced by V',' is super with parameters (m/2,2,^4/144,/2). Let H^> := H-X. Note that H^> (with respect to its slightly smaller vertex set) is 2μ-separable. Now we randomly partition ' into r2 (some perhaps empty) parts '=⋃_ij∈ E(K_r)”_ij such that |_ij”|=e(H^>[V_i',V_j']) for all ij ∈[r]2. This is possible since |'|=||-e(H^<)=e(H)-e(H^<)=e(H^>). Let F” := (V',”,J'), where each ”_ij is as above (so ” = '). Each colour set is either large or empty, indeed, if ij ∈ P^<, then _ij”=∅, but otherwise we have |_ij”| ≥ e(H[A_i,A_j])-e(H^<) ≥_ℓ+1n-√(_ℓ)m ≥_ℓ+1n/2 ≥_ℓ+1|'|/(2) ≥_ℓ+1^2|'|. Lemma <ref>(iii) applied with _ℓ+1^2,1 playing the roles of ,k implies that F” is a rainbow super template with parameters (_ℓ+1^2 m/2,√(),^9,/2). Apply Theorem <ref> (transversal blow-up lemma) with target sets C_y of size at least ν'm for the at most √(_ℓ)m ≤_ℓ^1/3_ℓ+1^2 m/2 vertices y ∈ Y, and parameters _ℓ+1^2 m/2,√(),2μ,_1^1/3,ν',^9,/2 playing the roles of m,,μ,,ν,d, to obtain a transversal embedding of H^> inside F” such that every y ∈ Y is embedded inside C_y. Together with the embedding of H^<, this gives a transversal embedding of H. Let ,d,>0 be given. Without loss of generality, we may assume that ≪ d,1/. Choose constants η',μ,n_0>0 such that the conclusion of Theorem <ref> holds, applied with ,d,1/5,^3 playing the roles of ,d,,. Let η := η'/(2). Let n ≥ n_0 be an integer and let G be a (d,η)-dense 3-graph on n vertices with d_G(v) ≥ n^2 for all v ∈ V(G). Let H be a μ-separable graph with (H) ≤ and v(H)+e(H) ≤ n. First, if H is small, we will enlarge it, as follows. If e(H) > n/4-1, let H' := H. Otherwise, obtain H' from H by successively adding an edge between isolated vertices until e(H)>n/4-1. Let H' be the obtained graph restricted to non-isolated vertices. We have e(H') ≤ n/4, which implies that v(H')+e(H') ≤ 4e(H') ≤ n. In both cases, we have (H') ≤, and H' is μ-separable; e(H')>n/4-1 and v(H') ≥ 3e(H')/ > 3(n-4)/(4)>n/(2); and v(H')+e(H') ≤ n. Let V, be disjoint subsets of V(G) chosen uniformly at random subject to |V|=v(H') > n/(2) and ||=e(H') > n/4-1. Standard Chernoff-type bounds imply that, with high probability, for every v ∈ V we have e(G[{v},V,]) ≥^2|V|||/6 ≥^2n^2/(49) ≥^3n^2 for all v ∈ V |{xyc ∈ E(G): x,y ∈ V}| ≥^2|V|^2/12 ≥^2n^2/(48^2) ≥^3 n^2 for all c ∈. For each c ∈, let G_c be the 2-graph with vertex set V and edge set {xy: x,y ∈ V and xyc ∈ E(G)}, and let G:=(G_c: c ∈). Thus ∑_c ∈d_G_c(v,V)=e(G[{v},V,]) ≥^3n^2, and for every c ∈ we have e(G_c) ≥^3n^2. Since G is (d,η)-dense and |V| ≥ n/(2), we have that G is (d,η')-dense. Theorem <ref> applied with ,d,1/5,^3 playing the roles of ,d,, implies that G contains a transversal copy of H' and thus a transversal copy of H, that is, there are injective maps τ: V(H) → V and : E(H) → so that τ(x)τ(y) ∈ G_(xy) for every xy ∈ E(H). Define ρ: V(H) ∪ E(H) → V(G) by setting ρ(x):=τ(x) for x ∈ V(H) and ρ(e):=(e) for e ∈ E(H). Then ρ is injective and ρ(x)ρ(y)ρ(xy) ∈ E(G) for all xy ∈ E(H); that is, ρ defines a copy of the 1-expansion of H in G, as required. § CONCLUDING REMARKS In this paper, we have proved a transversal blow-up lemma that embeds separable graphs into graph collections, which can be used to apply the regularity-blow-up method to transversal embedding problems. We conclude with some remarks on future directions. Separability. In our proof of the transversal blow-up lemma, the separability condition is necessary because we need to divide the graph into linear-sized pieces and then use the blow-up lemma to embed them piece by piece. For this embedding process to work, the number of edges between different pieces should not be too large. We wonder if our transversal blow-up lemma can be generalised to embed any graph with bounded maximum degree. If such a version can be proven, it would directly generalise the original blow-up lemma for graphs (since a collection of identical superregular pairs is a superregular collection). Future applications to transversal embedding. In a subsequent paper, we will utilise the transversal blow-up technique developed in this paper in combination with the absorption method to provide a new proof of the transversal version of the approximate Pósa-Seymour conjecture, which was recently established in <cit.>, and is a special case of the yet more recent main result of <cit.>. Additionally, we will prove a stability result for transversal Hamilton cycles that has not been demonstrated before. However, using our method to obtain transversal versions of embedding results proved using the regularity blow-up method does not seem quite as straightforward as simply following the same proof. The regularity lemma produces some exceptional vertices which need to be incorporated into the structure built between vertex clusters: they are `absorbed' by clusters. For transversal embedding of a spanning graph H, one needs to insert the exceptional vertices as well as some exceptional colours into the structure built between vertex clusters and colour clusters. Such an insertion may make some new colours exceptional. Right at the end of the process, inserting vertices and colours simultaneously becomes difficult, when there are no more `spare colours'. Thus, we need to construct an absorption set for the remaining vertices and colours prior to embedding H. Applications to hypergraph embedding. We state the full 3-graph version of our transversal blow-up lemma. [weak 3-graph blow-up lemma] Let 0 < 1/m ≪,μ,≪ν,d,,1/,1/r ≤ 1. * Let R be a 2-graph with vertex set [r]. * Let G be a 3-graph with parts V_1,…,V_r and V_ij for ij ∈ E(R) where m ≤ |V_i|≤ m/ for all i ∈ [r], and |V_ij|≥ m for all ij ∈ [r]. Suppose that G[V_i,V_j,V_ij] is weakly (,d)-(half-)superregular for all ij ∈ E(R). * Let H be a μ-separable 2-graph with (H) ≤ for which there is a graph homomorphism ϕ: V(H) → V(R) with |ϕ^-1(i)|=|V_i| for all i ∈ [r] and e(H[ϕ^-1(i),ϕ^-1(j)]) = |V_ij| for all ij ∈ E(R). * Suppose that for each i ∈ [r] there is a set U_i ⊆ϕ^-1(i) with |U_i| ≤ m and T_x ⊆ V_ϕ(x) with |T_x| ≥ν m for all x ∈ U_i. Then G contains a copy of the 1-expansion of H, where each vertex x ∈ V(H) is mapped to V_ϕ(x) and the new vertex for xy ∈ E(H) is mapped to V_ϕ(x)ϕ(y); and moreover, for every i ∈ [r] every x ∈ U_i is mapped to T_x. This is a reformulation of Theorem <ref> since for each ij ∈ E(R), the 3-graph G^(3)_ij of (G_c: c ∈_ij) in Theorem <ref> is weakly half-superregular, so we simply take V_ij := _ij. It would be interesting to extend Theorem <ref> on embeddings in uniformly dense 3-graphs beyond 1-expansions of separable 2-graphs. It is not clear which 3-graphs one should expect to be able to embed in a weakly superregular triple (or in a uniformly dense 3-graph). Even the case of 3-partite 3-graphs seems difficult; that is, to extend Theorem <ref> (simplified weak hypergraph blow-up lemma). Given 0<1/n ≪≪ d ≪≤ 1, which J are subhypergraphs of any weakly (,d)-superregular triple G[V_1,V_2,V_3], where n ≤ |V_1|,|V_2|,|V_3| ≤ n/? A 3-partite 3-graph J is equivalent to an edge-coloured bipartite 2-graph J (where each edge can receive multiple colours) using our usual identification of one part with a set of colours. If J has (J) ≤, then (J) ≤ and the colouring is -bounded. (If J is linear, then the edge-colouring is proper.) Thus we would like to extend Theorem <ref> (simplified weak transversal blow-up lemma). to find an embedding of J with a given edge-colouring up to permutation of colours. If every colour only appears on a single edge, that is, J is an expansion of some 2-graph H where the 2-edge e is replaced by some bounded number t_e ≥ 1 of 3-edges (which is not linear if some t_e>1), it seems plausible that our techniques would work. However, the colour absorption we use from <cit.> does not seem to be able to deal with colours playing multiple roles. Let F be a 3-partite 3-graph of fixed size. Recall that an old result of Erdős <cit.> implies that any large 3-graph of positive density contains a copy of F. However, the results of <cit.> imply that any large uniformly dense 3-graph G (whose number of vertices is a multiple of v(F) and where every vertex sees a positive fraction of pairs) contains an F-factor if and only if there is a vertex v^* ∈ V(F) such that for any two edges e,e' where e contains v^* and e' does not, e and e' share at most one vertex. Probably, the characterisation for weakly superregular triples with balanced classes is the same. Another instructive case is the tight Hamilton cycle (whose number of vertices is divisible by 6). As a graph collection problem, this corresponds to finding a (2-graph) Hamilton cycle on 4n vertices with 2n colours, where, cyclically labelling the edges e_1,…,e_4n, the consecutive edges e_2c-1,e_2c,e_2c+1 are all in G_c, for each c ∈ [2n] where indices are taken modulo 4n. A construction in <cit.> (see also <cit.>) shows that a weakly superregular triple need not contain a tight Hamilton cycle, even if its density is close to 1/8. Let V be a vertex set of size 6n, and let V=V_1 ∪ V_2 ∪ V_3 be a partition, where V_1,V_2,V_3 have equal size 2n, and let X ⊆ V where |X ∩ V_1| is odd. Independently for each distinct i,j ∈ [3] add uniform random edges between parts V_i,V_j with probability 1/2, to obtain a 3-partite 2-graph J. Now form a 3-partite 3-graph G by, for each triple abc ∈ A × B × C, as follows: * add abc to G if |{a,b,c}∩ X| is even and {a,b,c} spans a triangle in J. * add abc to G if |{a,b,c}∩ X| is odd and {a,b,c} spans an independent set in J. Suppose v^1_1v^1_2v^1_3… v^2n_1v^2n_2v^2n_3 is a tight cycle H in J, where without loss of generality, v^j_i ∈ V_i for all j. For any 1 ≤ j < 2n, Since v^j_1v^j_2v^j_3, v^j_2v^j_3v^j+1_1 are edges of H, we have that |{v^j_1,v^j_2,v^j_3}∩ X|, |{v^j+1_1,v^j_2,v^j_3}∩ X| are both odd or both even. This implies that v^j_1,v^j+1_1 are either both in X or neither in X. By considering the disjoint pairs v_1^1v_1^2,v_1^3v_1^4,…,v_1^2n-1v_1^2n, this is a contradiction to |X ∩ V_1| odd. It can easily be checked using Chernoff bounds (see <cit.>) that, if every |X ∩ V_i| has size about n, then with high probability G is weakly (1/8,o(1))-superregular. plain § APPENDIX §.§ Weak hypergraph regularity lemmas We first introduce the following special form of the weak hypergraph regularity lemma for k-partite k-graphs. Its proof follows easily from the original lemma of Chung <cit.>, which is very similar to the 2-graph regularity lemma of Szemerédi <cit.>. For every L_0,k≥ 1 and every ,>0, there is an n_0>0 such that for every k-partite k-graph G on n ≥ n_0 vertices with parts V_1,…,V_k where n≤ |V_i|≤ n/, there exists a refined partition V_i=⋃_j=1^t_iV_i,j for each i∈ [k] such that the following properties hold: (i)L_0 ≤ t:=t_1+…+t_k ≤ n_0; (ii)| |V_i,j|-|V_i',j'|| ≤ 1 for any i,i'∈[k], j∈ [t_i] and j'∈ [t_i']; (iii) all but at most t^kk-tuples (V_1,j_1,…,V_k,j_k) are -regular. The following weak hypergraph regularity lemma <cit.> is proved in the same way as the original degree form of the regularity lemma (see <cit.>), which in turn can be derived from the standard regularity lemma via some cleaning. [Degree form of the weak hypergraph regularity lemma] For all integers L_0 ≥ 1 and every >0, there is an n_0=n_0(,L_0) such that for every d ∈ [0,1) and for every 3-graph G=(V,E) on n ≥ n_0 vertices there exists a partition of V into V_0,V_1,…,V_L and a spanning subhypergraph G' of G such that the following properties hold: (i)L_0 ≤ L ≤ n_0 and |V_0| ≤ n; (ii)|V_1|=…=|V_L|=: m; (iii)d_G'(v) > d_G(v)-(d+)n^2 for all v ∈ V; (iv) every edge of G' with more than one vertex in a single cluster V_i for some i ∈[L] has at least one vertex in V_0; (v) for all triples {h,i,j}∈[L]3, we have that (V_h,V_i,V_j)_G' is either empty or (,d)-regular. The proof of Lemma <ref> (the regularity lemma for graph collections) is a routine but tedious consequence of this theorem applied to the 3-graph G^(3) of a graph collection G. By increasing L_0 and decreasing , as necessary, we may assume without loss of generality that 0 < 1/L_0 ≪≪≪ 1. Let L_0' := 2L_0/. Choose new constants ',, which we may assume satisfy 0 < 1/L_0' ≪' ≪≪. Let n_0' be obtained from Theorem <ref> applied with parameters ',L_0'. We may assume that 1/n_0' ≪ 1/L_0' by increasing n_0'. Altogether, 0 < 1/n_0' ≪ 1/L_0' ≪' ≪≪≪≪ 1. Let n_0 := 2n_0'/. Let n ≥ n_0 be an integer and suppose that G is a graph collection on a vertex set V of size n and with colour set , where n ≤ || ≤ n/. Let d ∈ [0,1) and let G^(3) be the 3-graph of G with vertex set U := V ∪. Theorem <ref> implies that there is a partition U_0,U_1,…,U_K of U and a spanning subhypergraph G^(3)' of G^(3) such that * L_0' ≤ K ≤ n_0' and |U_0| ≤'|U|; * |U_1|=…=|U_K|=: m'; * d_G^(3)'(u) > d_G^(3)(u)-(2d+')|U|^2 for all u ∈ U; * every edge of G^(3)' with more than one vertex in a single cluster U_i has at least one vertex in U_0; * for all triples {h,i,j}∈[K]3, we have that (U_h,U_i,U_j)_G^(3)' is either empty or (',2d)-regular. Partition each cluster U_i, i ∈ [K], into 1/ subclusters of size at most m so that all but at most two subclusters of U_i have size exactly m and the property that they lie entirely within V, or within . If a subcluster does not have this property, add it to U_0. The new exceptional set has size at most |U_0|+2 m'K, and we let V_0 be its intersection with V, and _0 its intersection with . Relabel the subclusters so that those which are subsets of V are V_1,…,V_L and those which are subsets of are _1,…,_M. Let G'_c be the graph with vertex set V and edge set {xy: xyc ∈ G^(3)'} for all c ∈. We claim that the properties of the lemma are satisfied. For (i), we have m'K ≤ |U| so |V_0|+|_0| ≤ |U_0|+2 m'K <ref>≤ ('+2)|U| ≤ ('+2)(n+n/) ≤ n. Also, L_0 ≤ L_0' ≤ L_0'/≤ K/≤ L+M ≤ 2K/≤ 2n_0'/=n_0, proving the required upper bound for both L and M. Furthermore, m(L+M) ≤ n+|| so L ≥n-|V_0|/m ≥(n-|V_0|)(L+M)/n+||≥(n-|V_0|)L_0'/n+n/≥ L_0'/2 = L_0 and similarly for M. Part (ii) follows by construction. For (iii), we have ∑_c ∈d_G_c'(v)=d_G^(3)'(v) for all v ∈ V, and e(G_c')=d_G^(3)'(c) for all c ∈, and similarly for G_c. Further, (2d+')(n+||)^2<(2d+')(1+1/)^2n^2≤(3d/^2+2'/^2)n^2 ≤ (3d/^2+)n^2. Thus <ref> implies the required. For (iv), if G'_c has an edge xy with x,y ∈ V_i for some i ∈ [L], then x,y ∈ U_i' for the cluster U_i' containing V_i. Since xyc ∈ E(G^(3)'), <ref> implies that c ∈ U_0. Thus c ∈_0, as required. For (v), suppose that ({h,i},j) ∈[L]2× M and G'_hi,j is non-empty. Let h',i',j' ∈ [K] be such that V_h ⊆ U_h', V_i ⊆ U_i' and _j ⊆ U_j'. Since G'_hi,j is non-empty, we have that (U_h',U_i',U_j')_G^(3)' is non-empty. So h',i',j' are distinct by <ref>. Part <ref> implies that (U_h',U_i',U_j')_G^(3)' is (',2d)-regular. Therefore, for each c ∈ U_j', letting J_c be the bipartite graph with partition (U_h',U_i') and edge set {xy: x ∈ U_h', y ∈ U_i', xyc ∈ G^(3)'}, we have that J := (J_c: c ∈ U_j') is (',2d)-regular. Lemma <ref>(i) (the slicing lemma) implies that G'_hi,j is ('/,d)-regular and thus (,d)-regular. This completes the proof. We conclude the appendix with a proof of Lemma <ref>, which states that a half-superregular (k-graph) k-tuple contains a spanning superregular subhypergraph. The proof is the same as the one in <cit.> for k=2. Choose a new parameter ” such that 0<≪”≪'. Apply Lemma <ref> to G with parameters L_0 := 1, ”, to obtain n_0>0. Increasing n_0,n and decreasing if necessary, we may assume that 1/n ≪≪ 1/n_0 ≪”. Now let G=(V_1,…,V_k)_G be an (,d)-half-superregular k-graph with n ≤ |V_i| ≤ n/ for all i ∈ [k]. Lemma <ref> implies that there is a refinement V_i=⋃_j=1^t_iV_i,j for each i∈ [k] such that t := t_1+…+t_k ≤ n_0, every pair of subparts differ in size by at most one, and all but at most ” t^3k-tuples (V_1,j_1,…,V_k,j_k) are ”-regular. Obtain a spanning subhypergraph G' of G by, for each ”-regular triple (X_1,…,X_k) of subparts, removing every k-edge in (X_1,…,X_k) independently at random with probability 1-d/d', where d' is the density of (X_1,…,X_k). (Note that by |X_i|≥ |V_i|/n_0 ≥ |V_i| for each i∈ [k], we have d'≥ d due to half-superregularity.) We claim that, with high probability, * d_G'(v)≥ d^2(|V_1|…|V_k|)/(2|V_i|) for any i∈[k] and v∈ V_i, * any ”-regular k-tuple (of subparts) in G is 2”-regular in G', * any ”-regular k-tuple (of subparts) in G has density d ± 2” in G'. This follows from Chernoff bounds. Now we claim that, given that the above hold, G' is (',d^2/2)-superregular. We only need to check that G' is (',d^2/2)-regular. Let (A_1,…,A_k) be any k-tuple where A_i⊆ V_i and |A_i|≥'|V_i| for all i∈ [k]. For each i∈ [k], we have partition A_i=⋃_j=1^t_i(A_i∩ V_i,j). We say a part A_i,j := A_i∩ V_i,j is big if |A_i∩ V_i,j|≥” |V_i,j| and otherwise it is small. Note that in total we have at most ” (|V_1|+…+|V_k|)≤” kn/ vertices in small parts. Also note that we have at most ” t^k non-regular k-tuples that contain in total at most ” (n/)^k edges. Thus e_G'(A_1,…,A_k) = ∑_(A_1,j_1,…,A_k,j_k) all big and (V_1,j_1,…,V_k,j_k) ”-regulare_G'(A_1,j_1,…,A_k,j_k) ±(2” kn/ · (n/)^k-1 + ” (n/)^k). For every summand index (j_1,…,j_k), we have e_G'(A_1,j_1,…,A_k,j_k) = (d± 2”)|A_1,j_1|…|A_k,j_k| by 2”-regularity. Thus e_G'(A_1,…,A_k) = (d ± 2”)|A_1|…|A_k| ± 3” kn^k/^k = (d ±')|A_1|…|A_k|. This implies that G' is (',d^2/2)-superregular and thus we finish the proof.
http://arxiv.org/abs/2306.08759v1
20230614220452
Fano resonances for tilted linear and quadratic band touching dispersions in a harmonically driven potential well
[ "Anton Gregefalk", "Annica Black-Schaffer", "Tanay Nag" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "quant-ph" ]
http://arxiv.org/abs/2306.04648v1
20230605135723
On training locally adaptive CP
[ "Nicolo Colombo" ]
cs.LG
[ "cs.LG", "cs.AI", "stat.ML" ]
A Plaque Test for Redundancies in Relational Data Stefanie Scherzinger July 31, 2023 ================================================= We address the problem of making Conformal Prediction (CP) intervals locally adaptive. Most existing methods focus on approximating the object-conditional validity of the intervals by partitioning or re-weighting the calibration set. Our strategy is new and conceptually different. Instead of re-weighting the calibration data, we redefine the conformity measure through a trainable change of variables, A →ϕ_X(A), that depends explicitly on the object attributes, X. Under certain conditions and if ϕ_X is monotonic in A for any X, the transformations produce prediction intervals that are guaranteed to be marginally valid and have X-dependent sizes. We describe how to parameterize and train ϕ_X to maximize the interval efficiency. Contrary to other CP-aware training methods, the objective function is smooth and can be minimized through standard gradient methods without approximations. Conformal prediction, local adaptivity, conditional validity, conformity score, regression. § INTRODUCTION Two features of Conformal Prediction (CP) contribute to their increasing popularity, finite-sample validity and straightforward applicability. Unlike the Bayesian approach, CP algorithms are computationally cheap, provide practical non-asymptotic guarantees, and can augment any pre-existing or pre-trained point-prediction model. The very reasons for their success may hide a few structural limitations. On the one hand, finite-sample validity is hard to achieve on heteroskedastic data, as the prediction intervals should have attribute-dependent sizes. On the other hand, defining the same (fixed) CP algorithm for any point-prediction model may be a sub-optimal strategy for certain types of tasks or data. Lastly, it is unlikely that machine learning algorithms trained to generate good point-like predictions are automatically CP-efficient without being retrained. We introduce a new CP-aware fine-tuning scheme to make CP algorithms more efficient and locally adaptive. Compared to analogous strategies, our approach does not break data exchangeability. This implies the obtained locally-adaptive prediction intervals are marginally valid, as the non-adaptive ones, but have attribute-dependent sizes. Moreover, since the scheme does not modify the underlying point-prediction model, it can be merged with other localization methods. The idea is to boost the local adaptivity of the prediction intervals by optimizing a set of learnable transformations of the conformity scores. The trained transformations are guaranteed to produce marginally valid prediction sets by construction. The prediction sets become locally adaptive when we map them back to the label space by inverting the trained transformations. The scheme generalizes the Error Re-weighted Conformal approach (ERC) of <cit.>, in which the conformity scores are re-weighted by a pre-fitted model of the residuals. More precisely, let f be a pre-trained point-prediction algorithm, f(X) ≈ E_Y|X(Y), A the standard conformity score, A = (f(X) - Y)^2, and (1-α)∈ [0, 1] a user-defined confidence level. ERC can be viewed as a coordinate transformation A →ϕ_X(A) = A/g(X)^2 + γ, where g is a model of the conditional prediction error, i.e. g(X) ≈ E_Y|X(f(X) - Y), and γ > 0. The boundaries of the prediction intervals are the solutions of ϕ_X(A) = (Y - f(X))^2/g(X)^2 + γ = q̂, where q̂ is the (1 - α)-quantile of the conformity scores distribution, estimated on the calibration set. By definition of empirical quantile, the prediction interval at X, C = {y: ϕ_X(A(X, y)) ≤q̂}, is marginally valid. The size of C is |C|=√(q̂ (g(X)^2 + γ)) and explicitly depends on X, which means the intervals grow and shrink over the attribute space. We extend the ERC idea in two ways, [For simplicity, we focus on regression and make standard i.i.d. assumptions, but the approach generalizes with minor changes to classification problems and exchangeable data.] * we define a general parametric class of coordinate transformations, Φ_θ, that are guaranteed to produce marginally valid and locally adaptive prediction intervals (Sections <ref>, <ref> and <ref>) and * we show how to find a Φ_θ that maximizes the efficiency of the intervals (Sections <ref> and <ref>). To guarantee the validity of the transformed prediction sets and their interpretability as prediction intervals in the label space, we require the attribute-dependent transformations, ϕ_X ∈Φ_θ, to be monotonic in A and have the same codomain for all X (Assumption <ref>). Validity and local adaptability are proven in Theorem <ref> by adapting a standard proof of CP marginal validity and using the assumed invertibility of ϕ_X. To train Φ_θ, we introduce a confidence-specific cost function that measures the non-efficiency of the intervals at a given confidence level (ℓ_α in (<ref>)). The confidence-specific cost function contains non-differentiable terms but becomes smooth and tractable when it is averaged over all possible confidence levels and estimated empirically on a finite-size data set (ℓ_all in (<ref>)). This makes the learning task a smooth and unconstrained optimization problem, which can be solved with gradient approaches even if the inverse map, ϕ_X^-1, is not available analytically. In the experiments, we compare four classes of attribute-dependent transformations with the standard non-adaptive CP algorithm and ERC. §.§ Related work Redefining the conformity functions to include an attribute-dependent factor is not new. The first example of this technique is the ERC method of <cit.> (see also Section 5 of <cit.>). Our transformation of the conformity score is more general and is not pre-trained by fitting the residuals. A growing stream of works addresses the problem of producing prediction intervals that are approximately object-conditional valid (whit finite-size data). Approximate conditional validity is obtained through an attribute-dependent re-weighting of the calibration samples. Data exchangeability is temporarily broken and needs to be restored at the end. The idea of <cit.> and <cit.> has been refined in the last few years by establishing links to Kernel Density Estimation and the covariate shift problem (<cit.>). Here, we follow a conceptually different path, where data exchangeability is preserved at all stages. In the Conformalized Quatile Regression model of <cit.>, the standard conformity function is replaced with a proxy of the pinball loss. Given (1 - α) ∈ [0, 1], the optimum of the pinball loss is an estimate of the conditional (1-α)-quantile function. This implies the method can not quantify the uncertainty of a pre-trained point-prediction model and needs to be retrained for each α∈ [0, 1]. See <cit.> for a review of conformal quantile regression methods. Evaluation scores similar to ℓ_α defined in (<ref>) are popular in the CP literature. Recent works exploit objective functions similar to (<ref>) to train the underlying prediction models in a CP-aware way. Here, we fix the underlying prediction model and learn a new conformity function, ϕ_X(A). We also average ℓ_α over α∈ [0, 1], which makes the optimization problem smooth. This is a significant improvement compared to most CP-aware objectives used in the past, which often require smooth relaxations that may be hard to optimize (<cit.>). A recent work, <cit.>, is particularly close to ours. The authors propose to train a point-prediction model by forcing the distribution of conformity scores to be uniform on [0, 1] for all X. The definition of the conformity scores does not change. It would be interesting to investigate whether tuning the underlying model or the conformity score is equivalent, theoretically and practically. Also, while our approach extends straightforwardly to classification tasks (provided the distribution of the conformity scores is continuous), it may be hard to adapt <cit.> to the regression setup. As for Conformalized Quantile Regression, the CP algorithm of <cit.> requires fine-tuning the underlying model and cannot quantify the uncertainty of a pre-trained non-conformal classifier. Using the Inverse Function Theorem to compute the gradient of implicit functions is a common technique for minimizing objective functions without analytical form (<cit.>). The idea has become popular in the domain of implicit layers optimization (<cit.>). §.§ Contribution To the best of our knowledge, this is the first time locally adaptive and efficient conformal prediction intervals are obtained from a data-driven definition of the conformity scores. Previous works on training CP, e.g. <cit.>, <cit.>, or <cit.>, also aim to optimize the efficiency of the intervals but focus on tuning the underlying point-predictor instead of the CP construction on top of it. See Section <ref> for a brief discussion of those works. Proposing a differentiable CP-aware objective function is also relevant. The averaged objective function presented here may help define a differentiable version of <cit.>, <cit.>, or <cit.>. As <cit.>, our approach is orthogonal to methods that fine-tune the underlying point-prediction model, e.g. <cit.>, and is compatible with localization approaches based on the selection/re-weighting of the calibration objects (<cit.>). Merging different localization techniques, as in <cit.>, may boost the usability of CP on specific tasks. Finally, we believe this work is one of the first to exploit, at the same time, the invariance of CP intervals under monotone transformations of the conformity scores and recent ideas on conformity-aware training. § METHODS §.§ A regression task Let {D, (X_test, Y_test)} = {(X_n, Y_n) ∈ X×ℝ}_n=1^N∪ (X_test, Y_test) be a collection of i.i.d. random variables following an unknown joint distribution, P_XY, f: X→ℝ a pre-trained point-prediction model that approximates the object-conditional mean of Y, i.e. f(X) ≈ E_Y|X(Y), and D = { (x_n, y_n) ∈ X×ℝ}_n=1^N+1 a sample of {D, (X_test, Y_test)}. In what follows, we use D to train a set of conformity score transformations, Φ_θ, through a Leave-One-Out strategy where N samples are interpreted as a calibration set and the remaining sample as a test object. The resulting CP algorithm will be tested on unseen data, i.e. a new sample of {D, (X_test, Y_test)}. §.§ Prediction intervals Given f and {D, (X_test, Y_test)}, CP algorithms use a conformity function, a = a(f(X), Y), to evaluate the conformity of f(X_test) with the predictions made on the calibration set, { f(X_n)}_n=1^N. Let (1-α) ∈ [0, 1] be a user-defined confidence level. The corresponding prediction set C = { y: a(f(X_test, y)) ≤q̂}, q̂ s.t.| { a(f(X_n), Y_n) ≤q̂}_n=1^N | = ⌈ (N + 1) (1 - α) ⌉ where | S| is the cardinality of S, is said to be marginally valid with confidence (1-α) because Prob( Y_test∈ C) ≥ 1 - α Lower bounds for Prob( Y_test∈ C) can also be obtained as a function of N. When a is monotonic in |f(X) - Y|, C is a symmetric interval centred on f(X_test). The size of C can be associated with the reliability of f(X_test). For example, if a(f(X), Y) = (f(X)-Y)^2, the prediction interval can be written explicitly as C = [f(X_test) - Δ, f(X_test) + Δ], where Δ = 2 a^-1(q̂) = 2 √(q̂) and a^-1 is defined by S = a^-1(a(S)). Other possible choices for a include a_abs = |f(X) - Y| or a_log = log((f(X) - Y)^2). Here, we always let a = (f(X)-Y)^2. Some advantages of using a_log are outlined in Section <ref>. §.§ Coordinate transformations CP intervals are invariant under composition with a monotone function, e.g. if a → a_log = log√(a) or a → a_abs = √(a). This happens because the intervals only depend on the ranking of the conformity scores, which is preserved if the conformity scores are transformed monotonically. This work is a generalization of this idea. Let Φ_θ = {ϕ_X:ℝ_+ → B_X, X ∈ X} be a set of attribute-dependent parametric functions of a base conformity scores, A = a(f(X), Y) ∈ℝ_+. The task is to find a Φ_θ that maximizes the efficiency of a CP algorithm based on the conformity scores transformed by ϕ_X ∈Φ_θ. To avoid overfitting, all ϕ_X ∈Φ_θ should share a smooth functional dependence on X, e.g. we may require ϕ_X(A) ∼ϕ_X'(A) if X ∼ X' for all A ∈ℝ_+. Some assumptions are needed to guarantee that B=ϕ_X(A) is a well-defined conformity score. If so, the transformed prediction sets are defined like in <ref> for any A and X. More explicitly, we let C = { y: ϕ_X_test(A), y))≤q̂}, q̂ s.t.| {ϕ_X_n(A_n) ≤q̂}_n=1^N | = ⌈ (N + 1) (1 - α) ⌉ where (X_n, Y_n)∈ D, ϕ_X_n∈Φ_θ, A_n = a(f(X_n), Y_n), n=1, …, N, and α∈ [0, 1]. If a(f(X), Y) = (f(X) - Y)^2 and under certain assumptions on Φ_θ, e.g. if Φ_θ is defined as in Assumption <ref> (see below), C can be rewritten as C = [f(X_test) - Δ, f(X_test) + Δ], Δ = √(ϕ^-1_X_test(q̂)) Differently from the case of a rigid, i.e. object-independent, monotone transformation, the prediction intervals produced by different Φ_θ may not be equivalent. See Section <ref> for an example. §.§ Model classes In the examples of Section <ref>, the coordinate transformations belong to simple model classes, e.g. Φ_θ = {ϕ_X(A) = √(A) - θ X, θ∈ℝ}. Varying the value of the parameter with a class produces non-equivalent intervals. Choosing different classes may also change the intervals. For concreteness, we restrict ourselves to model classes that satisfy the following assumption. Let X = ℝ^d, B_X ⊆ℝ, and Φ_θ = {ϕ_X: ℝ_+ → B_X, X ∈ X} Φ_θ is such that ϕ_X'(A) > 0, for all X ∈ X and all A ∈ℝ_+ B_X = B_X', for all X, X' ∈ X where A = a(f(X), Y), a = (f(X) - Y)^2, and h'(s) = d/ds h(s) = d/ds' h(s')|_s' = s. Requiring ϕ_X' >0 guarantees that B=ϕ_X(A) is a well-defined conformity score and the existence of the inverse map, ϕ_X^-1: B_X →ℝ_+, which is defined implicitly by ϕ_X^-1(ϕ_X(A)) = A. Requiring the codomain of the transformations to be the same for all X guarantees that we can convert the transformed prediction sets in (<ref>) into lable-space symmetric intervals centred in f(X_test). Imagine that Φ_θ does not fulfil the second requirement of Assumption <ref> and, for example, B_test⊂ B_X_n for any n=1, …, N. Then the inequalities in (<ref>) may not have a real solution because q̂ = ϕ_X_n*(A_n*), n* ∈{1, …, N }, may lay outside the codomain of ϕ_X_test. Both assumptions are satisfied if, for example, ϕ_X(A) →ϕ_X(log A) + ϵlog A, ϕ_X'(A) > 0 for all X and A, and ϵ > 0. §.§ Examples §.§.§ Inequivalent intervals Let X = ℝ, B = ϕ_X(A) = √(A) - θ X, and θ_± =± 1. Let the calibration set be { (x_n, y_n)}_n=1^3, f such that (a_1, x_1) = (1, 1), (a_2, x_2) = (2, 2), and (a_3, x_3) = (3, 3), x_test = 0, and α = 1/2. If θ = 1, { b_n}_n=1^3 = {2, √(2) + 2, √(3) + 3} and q̂ = q̂_+ = √(2) + 2. If θ = -1, { b_n}_n=1^3 = {0, √(2) - 2, √(3) - 3} and q̂ = q̂_- = √(2) - 2. The prediction intervals, C_± = [f(x_test) - Δ_±, f(x_test) + Δ_± ] with Δ_± = √(ϕ_x_test^-1(q̂_±)) = √((q̂_± + θ x_test)^2) = |q̂_±|, have sizes |C_+| = 2( 2 + √(2)) > |C_-| = 2(2-√(2)), i.e. C_+ and C_- have different sizes. §.§.§ ERC Let a = (f(X) - Y)^2 and ϕ_X be the ERC coordinate transformations of <cit.>, i.e. ϕ_X(A) = A/g(X)^2 + γ, A = (f(X) - Y)^2, g:ℝ^d →ℝ, γ > 0 Φ_θ = {ϕ_X: ℝ_+ → B_X, X ∈ X} fulfils the requirements of Assumption <ref> because ϕ'_X(A) = 1/g(X)^2 + γ > 0 and B_X = ℝ_+ for all X. The inverse transformation is ϕ^-1_X(B) = B (g(X)^2 + γ) and C = [f(X_test) - Δ, f(X_test) + Δ], where Δ = √(q̂ (g(X_test)^2 + γ)) and q̂ is the transformed conformity score of a calibration object. If q̂ is the (⌈(N + 1)(1 - α)⌉)th smallest object of { (ϕ_X_n(A_n)) }_n=1^N, the interval is marginally valid with confidence (1 - α) and has X_test-dependent size |C|=2Δ. In <cit.>, g is chosen to be a model of the conditional residuals, i.e. g(X) ≈ E_Y|X(f(X) - Y), but other choices are also possible. §.§.§ Non-adaptive transformations If the transformation does not depend on X, the second requirement in Assumption <ref> is always satisfied. For example, consider ϕ_0(A) = log A + γ, A = (f(X) - Y)^2, γ∈ℝ then ϕ_0'(A) = 1/A > 0, B_X = (-∞, ∞) for all X, and ϕ_0^-1(B) = exp(B - γ). The corresponding prediction intervals have size |C| = 2 √(exp(q̂ - γ)), which does not depend on X_test. §.§.§ Different codomains A set of transformations that satisfy ϕ_X'(A)>0 for all X ∈ X but may have B_X ≠ B_X' is ϕ_X(A) = A + g(X)^2 , A = (f(X) - Y)^2, g:ℝ^d →ℝ as ϕ_X'(A) = 1 > 0 and B_X = [g(X)^2, ∞) ⊂ℝ_+ for all X and A. The inverse transformation, ϕ_X^-1(B) = B - g(X)^2 is well defined for all B and all X. The prediction intervals, however, may not have a straightforward interpretation in the label space. If q̂ = ϕ_X_n*(A_n*) = A_n* + g(X_n*) is the (⌈(N + 1)(1 - α)⌉)=th smallest object of {B_n = A_n + g(X_n)^2 }_n=1^N, the interval at X_test is defined by A_test≤ A_n* + g(X_n*)^2 - g(X_test)^2, which is meaningless if A_n* + g(X_n*)^2 < g(X_test)^2 because A_test = (f(X_test)-Y_test)^2 > 0. The problem does not arise if ϕ(A) →ϕ(log A). §.§ Validity of the prediction intervals Global monotone transformations of the conformity scores do not affect the resulting prediction intervals. The first example of Section <ref> shows this is not the case for locally-defined transformations. Under Assumption <ref>, however, the inverse map produces label-space prediction intervals that are marginally valid and have attribute-dependent sizes. Let (X_1, Y_1), …, (X_N, Y_N), (X_test, Y_test) ∼ P_XY be a collection of i.i.d. random variables, f ∼ E_Y|X(Y) a point-prediction model, and A_n = (f(X_n) - Y_n)^2, n = 1, …, N. Let Φ_θ be a family of strictly increasing scalar functions satisfying the requirements of Assumption <ref> and B_n = ϕ_X_n(A_n), n= 1, …, N. Assume B_n ≠ B_n' if n ≠ n'. Then there exists a permutation of {1, …, N }, { m_n = π(n) }_n=1^N such that B_m_1 < … < B_m_n <… < B_m_N and, for any α∈ [1/N+1, 1], the interval C = [f(X_test) - Δ, f(X_test) + Δ], Δ = √(ϕ^-1_X_test( B_m*))), m* = ⌈ (N + 1)(1 - α)⌉ is marginally valid with confidence 1 - α, i.e. Prob( Y_test∈ C)) ≥ 1 - α Proof of Theorem <ref> If B_n ≠ B_n' if n ≠ n', the exchangeability of (X_n, Y_n), n-1, …, N, and (X_test, Y_test) implies Prob(B_test≤ B_m_n) = m_n/N + 1, n=1, …, N where B_test = ϕ_X_test(A_test) = ϕ_X_test((f(X_test) - Y_test)^2). In particular, Prob(B_test≤ B_m*) = ⌈ (N + 1)(1 - α)⌉/N + 1≥ 1 - α or, equivalently, Prob(ϕ_X_test(A_test) ≤ B_m*) ≥ 1 - α. The strict monotonicity of ϕ_X(A) for all X ∈ℝ^d implies that ϕ_X_test is invertible with inverse ϕ^-1_X_test: B_X_test→ℝ_+ defined by A = ϕ_X_test^-1(ϕ_X_test(A)). ϕ^-1_X_test and is also strictly increasing because ϕ_X_test^-1' = 1/ϕ_X_test' and ϕ_X_test'>0. [ϕ_X_test^-1' = 1/ϕX_test' follows from 1 = d/dAϕX_test^-1(ϕX_test(A)) = ϕ_X'(ϕX_test^-1(A))ϕ_X_test^-1'(A).] The validity of (<ref>) follows from 1 - α ≤ Prob(B_test≤ B_m*) = Prob(ϕ^-1_X_test(B_test) ≤ϕ^-1_X_test(B_m*)) = Prob(A_test≤ϕ^-1_X_test(B_m*)) = Prob((f(X_test) - Y_test)^2 ≤ϕ^-1_X_test(B_m*)) = Prob(Y_test∈ [f(X_test) - Δ, f(X_test) + Δ]) where Δ = √(ϕ^-1_X_test(B_m*)) and the first equality holds because ϕ_X_test^-1 is strictly increasing. □ §.§ Efficiency of the prediction intervals If the conformity function does not depend on X, the size of the prediction intervals is minimal. [From the definition of empirical quantile.] If the conformity scores are locally re-defined, i.e. A →ϕ_X(A), the intervals can grow and shrink over the attribute space. Under Assumption <ref>, Theorem <ref> ensures that the obtained locally-adaptive intervals remain valid. Their sizes, |C| = 2 √(ϕ^-1_X_test(q̂)), which depend on X_test, Φ_θ, and the data, are not guaranteed to be optimal. Exact (optimal) conditionally-valid prediction intervals for all X can be obtained only if a finite-size data set is available for any X ∈ X, which is impossible for real-valued attributes (<cit.>). Like other locally-adaptive CP algorithms, our scheme aims to approximate such unachievable conditional validity with finite data. To evaluate the efficiency of Φ_θ, we compute the average size of the prediction intervals at a user-defined confidence level, (1 - α) ∈ [0, 1], i.e. ℓ_α = E( 2 √(ϕ_X_test^-1(q̂))), q̂ s.t. |{ϕ_X_n(A_n) ≤q̂}| = ⌈ (N + 1)(1 - α) ⌉ where A_n = a(f(X_n), Y_n), a(f(X), Y) = (f(X), Y)^2, and (X_n, Y_n) ∈ D. Training Φ_θ by minimizing (<ref>) over θ has two disadvantages. Firstly, ℓ_α depends on the empirical quantile of the transformed conformity scores, i.e. on the ranking of { B_n = ϕ_X_n(A_n) }_n=1^N, which needs to be estimated numerically. Moreover, the dependence of q̂ on Φ_θ may be hard to estimate without ad-hoc smooth relaxations. A second possible issue of (<ref>) is it requires retraining the model if the target confidence level changes. A possible way out is to average ℓ_α over all possible confidence levels, (1-α) ∈ [0, 1]. Taking the expectation of (<ref>) over α produces a model that works well for any α∈ [0, 1]. More importantly, it bypasses the non-differentiability of (<ref>) without introducing arbitrary relaxations or approximations. Intuitively, this happens because sorting the transformed conformity scores becomes redundant if we assume a flat prior over α∈ [0, 1], i.e. if we let ℓ_all = E_α∼ U_[0, 1]( ℓ_α) = ∫_[0, 1] dα ℓ_α where ℓ_α is defined in (<ref>). The number of non-equivalent confidence levels in an empirical version of ℓ_all coincides with the size of the calibration set, |D| = N. Each non-equivalent confidence level is associated with one of the transformed conformity scores, {B_n = ϕ_X_n(A_n)}_n=1^N. Since the integral becomes a sum, the commutativity of the addends makes the reordering redundant. The size-N approximation of (<ref>), ℓ_all = E( ∑_n=1^n √(ϕ^-1_X_test(ϕ_X_n(A_n)))) is then smooth in θ if Φ_θ is smooth in θ for all X and A. In practice, Φ_θ is a fixed set of parametric transformations. The free parameter, θ, controls the shared functional dependence of all ϕ_X ∈Φ_θ. Specific functional dependencies may be chosen to ensure that Φ_θ is differentiable in θ and satisfies Assumption <ref> (see Section <ref> for a few examples). Given such Φ_θ, we can safely estimate (<ref>) and its gradient (see Section <ref>) from the training data set, D defined in (<ref>), by computing B_n=ϕ_x_n((f(x_n)-y_n)^2), n=1, …, N, their empirical quantile, q̂, and ϕ^-1_x_test(q̂). A data-efficient strategy is to average over N+1 splits of D deformed in (<ref>). In the n-th split, we use (x_n, y_n) as a test object and the remaining objects as a calibration set. §.§ Gradient optimization The smoothness of (<ref>) allows using any gradient-based optimization method. As ϕ_X^-1(ϕ_X'(A)) depends on θ directly and through its argument, the parameter updates are θ→θ + η d_θℓ_all, where η > 0 and d_θℓ_all is the total derivative of (<ref>) with respect to θ, i.e. d_θℓ_all = E( ∑_n=1^n ∇_θϕ_X_test^-1(ϕ_X_n(A_n)) + ϕ_X_test^-1'(ϕ_X_n(A_n))∇_θϕ_X(A_n)/2√(ϕ^-1_X_test(ϕ_X_n(A_n)))) Under Assumption <ref>, the monotone transformations ϕ_X ∈Φ_θ are guaranteed to be invertible. Assumption <ref>, however, does not guarantee a closed-form expression of ϕ_X^-1. In that case, ϕ_X^-1(B) should be obtained numerically through standard root-finding methods, e.g. the Bisection method. As for the backward pass, ∇_θϕ_X^-1 and ϕ_X^-1' can be obtained from ∇_θϕ_X and ϕ_X' using 0 = d_θϕ_X(ϕ_X^-1(B)) = ∇_θϕ_X(ϕ_X^-1(B)) + ϕ_X'(ϕ_X^-1(B)) ∇_θϕ_X^-1(B) 1 = d/dBϕ_X(ϕ_X^-1(B)) = ϕ_X'(ϕ_X^-1(B)) ϕ_X^-1'(B) Similar techniques are increasingly popular in machine learning. Especially because they allow differentiating prediction models that include implicitly-defined outputs, e.g. the numerical solutions of an ODE (<cit.>). § EXPERIMENTS We test the feasibility and performance of the proposed scheme by training and comparing the prediction intervals produced by four different CP algorithms. Each algorithm is based on one of the following classes of transformations, Φ_ ERC = {ϕ_X(A) = A/g(X)^2 + γ, γ >0 } Φ_ linear = {ϕ_X(A) = logA + g(X) } Φ_ exp = {ϕ_X(A) = A e^g(X)} Φ_ sigma = {ϕ_X(A) = σ( log A + g(X) ) } where g is an unconstrained trainable function of X. In all experiments, we let g be a fully connected ReLu network with five hidden layers of size 100. All model classes satisfy Assumption <ref> and are trained by minimizing ℓ_all defined in (<ref>). The ERC model, Φ_ ERC is also trained as suggested in <cit.>, i.e. by minimizing E((g(X) - (f(X)-Y)^2)^2). In the plots and tables, we refer to that setup as ERC (error fit). Our baseline is the non-adaptive CP algorithm, Φ_0 = {ϕ_X(A) = A }. The underlying point-prediction model is the same for all CP algorithms We use a KNN algorithm with the Euclidean distance and cross-validated K and pre-train it on a separate proper-train data set. The model efficiencies are measured by computing the interval sizes and empirical validities. We test all models on a synthetic data set (Section <ref>) and five real-world benchmark data sets (Section <ref>). To optimize the parameterized localization network, g = g(X, θ), over θ, we use a PyTorch implementation of ADAM (<cit.>), with a fixed learning rate, default parameters, and mini-batches of size 16. To avoid overfitting, we evaluate the efficiency of the intervals on a small validation set at each epoch and retrain the model up to the one associated with the most efficient intervals. All data sets are normalized and split into three parts, one to define the KNN point-prediction algorithm, one for training Φ_θ, and one for testing. §.§ Synthetic data In Figure <ref>, we show the locally adaptive intervals obtained by the models on four randomly generated synthetic data sets. In this case, the confidence level is set to α=0.05. Each data set consists of 1000 samples generated by perturbing an order-2 polynomial regression model, f_true(X) = w_0 + w_1 X + w_2 X^2, w ∈ N(0, 1)^3, with one of the four following X-depedent error functions, ε_ cos = (ρ + 2 cos(π/2|X|) 1(|X|< 0.5)) ξ ε_ squared = (ρ + 2 X^2 1(|X|>0.5) ) ξ ε_ inverse = (ρ + 2/ρ + |X| 1(|X|>0.5)) ξ ε_ linear = (ρ + (2- |X|) 1(|X|<0.5)) ξ ρ = 0.1, ξ∼ N(0, 1). In all cases, we sample X uniformly in [-1, 1] and then normalize the input vector (1, X, X^2)^T across all samples. [Normalizing the input causes the x-coordinate of some dots to exceeds [-1, 1] in Figure <ref>.] Figure <ref> shows that the trained models can produce intervals of locally adaptive size. As noted in previous works, e.g. <cit.>, ERC may become unstable when the localization network, g, is trained by fitting the conditional squared errors. For the fairness of the comparison, we regularize ERC (error fit) through the cross-validated early-stopping technique we use for the other models instead of tuning an additional regularization hyper-parameter, γ. §.§ Real data To check the scalability and efficiency of the algorithms on real-world data comparison, we have used the following regression data sets, * energy, the Energy Efficiency Data Set, 768 observations with eight attributes, <cit.>, * concrete, the Concrete Compressive Strength Data Set, 1030 observations with nine attributes, <cit.>, * homes, the King County Home Price Data Set, 21613 observations with 14 attributes, <cit.>, * CASP, the Physicochemical Properties of Protein Tertiary Structure Data Set, 45730 observations with nine attributes, <cit.>, * facebook_1, the Facebook Comment Volume Data Set, 40949 observations with 54 attributes, <cit.>. These are popular public benchmarks for evaluating the local adaptability of conformal prediction. Visit <cit.> or see <cit.> for more details about the data sets. Table <ref> shows the interval sizes and empirical validities on all datasets and for different confidence levels. Each value is obtained by averaging the performance over five runs of the training-testing procedure. The results confirm that ERC may be unstable when the localization function is fitted to the conditional residual. All models, however, are consistently more efficient than the non-adaptive baseline, ϕ_0. The choice of the model class does not look to be crucial, with minor and data set-dependent performance differences. § LIMITATIONS AND FUTURE WORK We recognize that our work should have included a more extensive real-world validation of the methods. Firstly, we only consider a simple point-prediction model and do not run a careful ablation study involving more or less advanced regression models. While, in many cases, a localization network with five hidden layers of size 100 seems to make Φ_θ∈{Φ_ ERC, Φ_ linear, Φ_ exp, Φ_ sigma} flexible enough, the efficiency of the intervals may be improved further by considering more general model classes. For example, it may be interesting to train a linear combination of Φ_ ERC, Φ_ linear, Φ_ exp, and Φ_ sigma with nonnegative weights. While the model automatically satisfies Assumption <ref>, its inverse is not available in analytic form. The parameters of the localization network should then be trained through the implicit differentiation strategy described in Section <ref>. As mentioned in Section <ref>, the proposed approach is not incompatible with existing methods that approximate object-conditional validity by re-weighting the empirical distribution of the conformity scores, e.g. <cit.>, Combining a global redefinition of the conformity measure with a sample-specific re-weighting of the calibration distribution may help localize the interval on specific tasks. Including the proposed strategy in the quantile regression approaches of <cit.> would also be possible. In a follow-up of this work, we will also explore how to interpret Φ_θ as a Normalizing Flow between the distribution of the original conformity scores and their transformed versions. The goal is then to force the joint distribution of the conformity scores and the attributes to factorize, i.e. Φ_θ should be trained so that P_ϕ(A)X = P_ϕ(A)P_X, as this would make conditional and marginal validity equivalent. The code to reproduce the numerical experiments described in Section <ref> and possibly updated versions of this manuscript can be found in this https://github.com/nicoloRHUL/onTrainingLocalizedCP Github directory. § ACKNOWLEDGEMENTS We thank the COPA reviewers for their useful comments and questions. We are grateful to V. Vovk for the inspiring discussions in the preliminary stages of this work.
http://arxiv.org/abs/2306.04315v1
20230607102508
Inferring unknown unknowns: Regularized bias-aware ensemble Kalman filter
[ "Andrea Nóvoa", "Alberto Racca", "Luca Magri" ]
stat.ME
[ "stat.ME" ]
T1 boldT1 @nomlist
http://arxiv.org/abs/2306.03632v1
20230606124158
Uniform Inference for Cointegrated Vector Autoregressive Processes
[ "Christian Holberg", "Susanne Ditlevsen" ]
math.ST
[ "math.ST", "econ.EM", "stat.TH" ]
Uniformly valid inference for cointegrated vector autoregressive processes has so far proven difficult due to certain discontinuities arising in the asymptotic distribution of the least squares estimator. We show how asymptotic results from the univariate case can be extended to multiple dimensions and how inference can be based on these results. Furthermore, we show that the novel instrumental variable procedure proposed by <cit.> (IVX) yields uniformly valid confidence regions for the entire autoregressive matrix. The results are applied to two specific examples for which we verify the theoretical findings and investigate finite sample properties in simulation experiments. Novel DeepONet architecture to predict stresses in elastoplastic structures with variable complex geometries and loads [ ====================================================================================================================== § INTRODUCTION Persistence, i.e., long term sensitivity to small shocks, appears to be a commonly occurring characteristic of many stochastic systems encountered in practice. Such processes are often modeled with cointegration where the persistence can be attributed to a number of shared stochastic trends (random walks). This approach has been extensively researched and fathered numerous papers. Due to their relative simplicity, cointegration models are widely applied. In the realm of vector-valued autoregressive processes, cointegration arises when the characteristic polynomial possesses at least one unit root. The asymptotic theory and, hence, inference is heavily reliant upon the fact that a fixed number of roots can be assumed to be exactly one and the rest stay sufficiently far away from one. This is the case even for simple regression methods such as ordinary least squares. In practice, such assumptions are overly restrictive and more flexible models are often needed for a better description of the empirical data. As was shown in <cit.>, for example, slight deviations from the unit root assumption can severely deteriorate the results of the statistical analysis. Thus, the need arises for inference methods that are uniformly valid over a range of stationary and non-stationary behaviours. So far, most methods of inference with proven uniform guarantees only work in the univariate case. Inference algorithms based on variations of the bootstrap are presented in <cit.> with uniform guarantees given in <cit.>. Furthermore, <cit.> provides a uniform asymptotic framework for one-dimensional autoregressive processes with potential unit roots, which served as an inspiration for much of the work in this paper. The problem is well-understood in one dimension, while less progress has been made in multiple dimensions. The only methods with uniform guarantees known to the authors employ lag augmentation <cit.>. The idea is simple and easy to apply, but there is a price to be paid in terms of efficiency since it essentially involves overfitting the model. Other methods, seeking to avoid this problem, impose restrictive assumptions on the process such that the inference is no longer uniform, but only holds for specific configurations of parameters. The main approach is to assume that the roots are all of a similar proximity to one. In particular, the autoregressive matrix is modeled as a sequence of matrices that approach the identity matrix at some given rate k_n. This is the setup in the instrumental variable methodology (IVX) developed in <cit.>. This offers some flexibility in terms of how close the roots can be to one. However, the framework as presented in the literature so far does not allow processes with simultaneously different degrees of persistence. A lot of focus has been given to predictive regression problems in which the predictive power of the past of one process on the future of another is assessed. Efficient tests are developed in <cit.> and, based on the ideas of uniform inference for univariate autoregressive processes, <cit.> show how these methods can be adapted to yield uniformly valid inference. Unfortunately, most of the work only covers the bivariate case in which the regressor is a univariate autoregressive process so that the theory for one dimension directly applies. Another branch of research concerns inference on cointegrating relations robust to deviations from the unit root assumption <cit.>, but, again, specific parameterizations of the deviations limit the generality of these results which are therefore not uniformly valid in a meaningful sense. Extending the theory in <cit.> to multiple dimensions runs into several difficulties. Firstly, it is not immediately clear what assumptions to put on the autoregressive matrix to ensure that the asymptotic results hold uniformly while still covering all the relevant cases. In one dimension the autoregressive parameter is a scalar and it is sufficient that it is real, bounded in norm by 1, and bounded away from -1 by some small δ. We somehow need to extend this idea to matrices. Secondly, while many of the results on the asymptotics of the sample covariance matrices generalize nicely to multiple dimensions, the proofs are more involved. For example, <cit.> uses Skorohod's embedding, which famously only works in one dimension, to prove that the errors can be assumed to be Gaussian. The third and perhaps most profound difficulty is the fact that the multivariate setting allows for cointegrated systems (or almost cointegrated systems in the case where the roots are only close to unity). This gives rise to certain asymptotic discontinuities that will have to be handled. In particular, it necessitates a proper normalization of the sample covariances. These problems extend to the inferential side of things where, additionally, computational challenges arise. Naively adapting, for example, the grid bootstrap approach of <cit.> would cause the computational complexity to explode. The present work sets out to solve these problems. We provide methods of inference for multivariate vector autoregressive processes that are proven to be valid uniformly over a set of parameters, including processes that are cointegrated and with roots arbitrarily close to 1. The main contributions are the following. First, we extend the asymptotic results of <cit.> to vector autoregressive processes and where uniformity also holds over a family of martingale difference error processes. The main result is Theorem <ref>, which says that the asymptotic distributions of the relevant sample covariances can be suitably approximated by stochastic integrals of Ornstein-Uhlenbeck processes (a direct analog to the univariate case). Both the result and the proof are interesting in their own rights due to the reasons mentioned above. Second of all, we provide several ways to construct confidence regions for the autoregressive parameter and show that these are uniformly valid in the current setting. This involves a proof that confidence regions constructed with IVX are uniformly valid, but only for the entire autoregressive matrix. Thirdly, we show how these confidence regions can be used to answer more general inference questions. The two main applications are confidence intervals for a single coordinate and predictive regression testing with a multivariate regressor. To the best of our knowledge, there have thus far been no attempt in the literature to deal with these applications in a uniform fashion (apart from lag augmentation). We run Monte Carlo experiments to verify the theoretical results and compare the finite sample properties of the different methods. Our last contribution is the development of efficient algorithms to solve these inferential tasks. We show how the Evaluation-Approximation-Maximizatoin (EAM) algorithm from <cit.> can be used in this setting to circumvent the exploding computational cost inherent in algorithms relying on grid-search methods. The rest of the paper is structured as follows. Section 2 introduces notation and relevant concepts. In particular, it explains the concept of uniform convergence of random variables and presents vector autoregressive processes. Section 3 is devoted to presenting and proving the main asymptotic results. Section 4 deals with inference and shows how the results of Section 3 can be applied to obtain uniformly valid confidence regions. Furthermore, it contains a section on predictive regression, lag augmentation, and IVX. Section 5 contains the results of our Monte Carlo experiments. Finally, Section 6 concludes and gives some suggestions for further research. The proofs and more technical details can be found in the Appendix. § PRELIMINARIES This paper involves uniform convergence of random variables and, as such, it seems natural to first explain exactly what we mean by this. We will only be concerned with vector autoregressive processes of order 1 (VAR(1) processes) fulfilling certain assumptions. Specifically, we are interested in VAR(1) processes that may be integrated of order 1 and cointegrated, that is, processes for which the first difference is stationary and there exists some linear combinations of the coordinate processes that are stationary. The exact requirements are laid out below. Notation: For a matrix A∈ℝ^d× d, A^T denotes its transpose and its trace is (A). We write σ_max(A), respectively σ_min(A), for the largest, respectively smallest, singular value of A. Similarly, we write λ_max(A), respectively λ_min(A), for the eigenvalue of A with the largest, respectively smallest, magnitude. ||A|| is the Frobenius norm and ||A||_2 is the spectral norm, i.e., ||A||=√((A^T A)) and ||A||_2 = √(σ_max(A)). For vectors, ||·|| is the usual Euclidean norm. Define S_d to be the set of d× d positive semidefinite matrices and let 𝒥_d denote the set of all d× d Jordan block matrices, that is, for any J∈𝒥_d, there exist some N∈ℕ and (m, λ)∈ℕ^N×ℂ^N such that ∑_i=1^N m_i = d and J is block diagonal with the i'th diagonal block given by J_m_i(λ_i)∈ℝ^m_i× m_i with λ_i on the diagonal, ones on the super-diagonal, and zeros everywhere else. We employ the usual big-O and little-o notation and use o_p to denote convergence in probability and O_p to denote boundedness in probability. §.§ Uniform convergence of random variables We start by defining what we mean by uniform convergence in probability and in distribution. The definition is essentially the same as in <cit.>. Throughout, we assume some background probability space, (Ω, ℱ, ℙ), on which all future random variables are defined. For any two random vectors, X and Y, taking values in (ℝ^d, ℬ(ℝ^d)), we denote by P_X and P_Y the law of X and Y and write X Y if they are equal in law. We define BL_1 as the space of functions f:ℝ^d→ [-1, 1] that are Lipschitz continuous with constant at most 1. Let 𝒫(ℝ^d, ℬ(ℝ^d)) be the set of probability measures on (ℝ^d, ℬ(ℝ^d)). The bounded Lipschitz metric on 𝒫(ℝ^d, ℬ(ℝ^d)) is given by d_BL(μ, ν):=sup_f∈ BL_1|∫_ℝ^d f dμ - ∫_ℝ^d f dν|, μ, ν∈𝒫(ℝ^d, ℬ(ℝ^d)). We use the shorthand d_BL(X, Y) = d_BL(P_X, P_Y) to denote the bounded Lipschitz metric between the laws of two random variables, X and Y. It is well known that d_BL metrizes weak convergence which motivates the following definition of uniform convergence. Let (X_n,θ)_n∈ℕ, θ∈Θ and (Y_n,θ)_n∈ℕ, θ∈Θ be two sequences of families of random d-dimensional vectors defined on (Ω, ℱ, ℙ) and indexed by some set Θ (of possibly infinite dimension). * We say that X_n, θ converges uniformly to Y_n,θ over Θ in distribution (or, for short, X_n,θ→_w Y_n, θ uniformly over Θ) if lim_n→∞sup_θ∈Θd_BL(X_n, θ, Y_n, θ) = 0. * We say that X_n, θ converges uniformly to Y_n,θ over Θ in probability (or, for short, X_n,θ→_p Y_n, θ uniformly over Θ) if, for every ϵ > 0, lim_n→∞sup_θ∈Θℙ(||X_n, θ - Y_n, θ|| > ϵ) = 0. There are two things to note about this definition. Uniform convergence could also be stated as convergence along all sub-sequences θ_n⊂Θ (see Definition 2 and Lemma 1 in <cit.>). Additionally, we allow the limiting distribution to be a sequence. This is because some of the results below are stated in terms of an approximating sequence of random variables. We obtain the conventional notion by letting Y_n, θ = Y_θ. §.§ Model The present paper primarily concerns VAR(1) processes. Consider some Θ⊂ℝ^d× d× S_d×ℝ_+. For any θ∈Θ there exist Γ_θ, Σ_θ, and c_θ such that θ = (Γ_θ, Σ_θ, c_θ). We write N_θ∈{1,...,d} to denote the number of distinct eigenvalues of Γ_θ and λ_θ∈ℝ^N_θ the corresponding vector of ordered eigenvalues, that is, |λ_θ, 1|≥ |λ_θ, 2| ≥ ... ≥ |λ_θ, N_θ| with multiplicities m_θ, 1,..., m_θ, N∈{1,..., d}.Where this does not cause confusion, we shall omit the subscript and simply write Γ, Σ, and c and similarly for the eigenvalues. Let (X_t, θ)_t∈ℕ, θ∈Θ and (ϵ_t, θ)_t∈ℕ, θ∈Θ be two families of ℝ^d-valued stochastic processes and (ℱ_t,θ)_t∈ℕ the filtration generated by (ϵ_t, θ)_t∈ℕ. M We assume that X_t, θ and ϵ_t, θ satisfy the following: * ϵ_t, θ is a stationary martingale difference sequence wrt. ℱ_t, θ, that is, sup_t∈ℕ, θ∈Θ𝔼||ϵ_t, θ|| < ∞ and 𝔼(ϵ_t, θ|ℱ_t-1, θ)=𝔼ϵ_0, θ=0 for all t≥ 1, θ∈Θ. * For all θ∈Θ, the conditional covariance matrix of ϵ_t, θ exists and is given by 𝔼(ϵ_t, θϵ_t, θ^T|ℱ_t-1, θ)=𝔼ϵ_0, θϵ_0, θ^T=Σ_θ a.s. for all t≥ 1. * There exists some small δ > 0 such that 𝔼||ϵ_t, θ||^2+δ≤ c_θ a.s. for all t∈ℕ, θ∈Θ. * X_t, θ is a VAR(1) process, that is, for all θ∈Θ, X_t, θ = Γ_θ X_t-1, θ + ϵ_t, θ for t≥ 1 and X_0, θ=0. These assumptions ensure that X_t, θ is a VAR(1) process started at 0 with model parameters given by the index θ. Assumptions <ref> and <ref> also imply that X_t, θ is adapted to ℱ_t, θ. The constant c is not really a parameter but of a more technical nature. We need to include it in the index since the uniform results below hold only over a subset of Θ bounded in c. For every θ, there are infinitely many processes X_t, θ and ϵ_t, θ satisfying the assumptions above. However, as shown below, certain requirements on the parameter θ ensure that all these processes exhibit the same limiting behaviour. It is therefore not a restriction to simply pick one pair of processes per θ. For any θ∈Θ, there exist matrices F_θ∈ℝ^d× d and J_θ∈𝒥_d such that the i'th diagonal block of J is J_m_i(λ_i) and Γ_θ = F_θ J_θ F_θ^-1 or, equivalently, F_θ^-1Γ_θ F_θ = J_θ. The Jordan matrix J_θ is unique and is called the Jordan canonical form. Thus, the map θ↦ J_θ is well-defined. When it does not cause confusion, we omit the subscript and write J for the matrix satisfying the aforementioned properties. § ASYMPTOTIC PROPERTIES The key building blocks for inference are the two covariance matrices S_XX = 1/n∑_t=1^n X_t-1, θX_t-1, θ^T, S_Xϵ = 1/n∑_t=1^n X_t-1, θϵ_t, θ^T. Obviously, S_XX and S_Xϵ are families of stochastic processes depending on n and θ. This dependence is not made explicit to avoid cluttering up the notation, but it should be kept in mind. We first need to determine what happens to S_XX and S_Xϵ when n goes to infinity and for varying θ. We cannot hope to say anything uniformly without further assumptions on Θ. The following assumptions are sufficiently general to cover a wide range of behaviours while still allowing for uniform asymptotic results. U We assume that Θ satisfies the following: * sup_θ∈Θ c_θ < ∞. * sup_θ∈Θ{σ_max(Σ_θ) + σ_min(Σ_θ)^-1} < ∞. * sup_θ∈Θ |λ_θ, 1|≤ 1. * There exists α∈(0, 1) such that for all θ∈Θ, |λ_θ, i|>1-α implies that ||J_m_θ, i(λ_θ, i)-I_m_θ, i||_2 ≤ 1 - |λ_θ, i|. * For all θ∈Θ, there exists a matrix F_θ, such that F_θ^-1Γ_θ F_θ = J_θ∈𝒥_d and sup_θ∈Θ{σ_min(F_θ)^-1 + σ_max(F_θ)}<∞. Assumption <ref> is a moment condition on the error process which is needed for some of the triangular array martingale difference limit results that we apply. It is implied by the other conditions if the errors are i.i.d. Gaussian. Assumption <ref> states that ||Σ_θ|| and ||Σ_θ^-1|| are uniformly bounded for any matrix norm. In particular, Σ_θ is always of full rank. This is a natural condition when considering uniform convergence. Assumption <ref> is also standard and ensures that the process is not explosive. The important assumptions are <ref> and <ref>. Assumption <ref> states that, for any eigenvalue with magnitude sufficiently close to one, the corresponding Jordan block is close to the identity matrix. This ensures that X_t, θ is uniformly I(1), that is, we disallow processes of higher order of integration and seasonal cointegration. As stated in Remark <ref>, it is always possible to write Γ_θ in Jordan normal form so the real content of assumption <ref> is that F_θ can be chosen such that it is uniformly invertible and uniformly bounded over Θ. With a slight abuse of notation, we define M_i = ∑_j≤ i m_i and write i_k = min{i ≥ 1 | M_i ≥ k}. The limiting behaviour of the k'th coordinate of the process X_t, θ depends on how close the corresponding eigenvalue, λ_i_k, is to 1. As in the univariate case (see <cit.>), we can unify the range of asymptotics with an Ornstein-Uhlenbeck process. For any θ, we let C_n(θ) be the d× d block diagonal matrix whose i'th block is given by nlog(|λ_i|)I_m_i with the convention that log(0)=-K for some large K∈ℕ. For convenience we sometimes suppress the dependence on θ and n and just write C instead. In what follows, uniform convergence of random matrices means uniform convergence of the vectorization of these matrices so that Definition <ref> applies directly. Under Assumptions <ref> and <ref> and after possibly enlarging the probability space (Ω, ℱ, ℙ), there exists a standard d-dimensional Brownian motion, (W_t)_t∈[0,1], and a family of processes, (J_t, C)_t∈[0,1], n∈ℕ, θ∈Θ, with J_t, C = ∫_0^t e^(t-s)CF_θΣ^1/2dW_s, J_0, C = 0 such that the following approximations hold for n→∞ H^-1/2F_θS_XXF_θ^T H^-1/2→_w G^-1/2∫_0^1 J_t, CJ_t, C^T dt G^-1/2, √(n)H^-1/2F_θ S_Xϵ→_w G^-1/2∫_0^1 J_t, C dW_t^T Σ^1/2, uniformly over Θ where the normalizing matrices are given by H = F_θ𝔼(1/n∑_t=1^n X_t-1, θX_t-1, θ^T)F_θ^T, G = F_θ𝔼(∫_0^1 J_t, C J_t, C^T dt)F_θ^T. We will prove the uniform results for the special case of F_θ being the identity, i.e., under the assumption that Γ is a Jordan matrix. This case is obviously unrealistic in practice, but it is not hard to generalize to a general F_θ fulfilling Assumption <ref>. Indeed, as mentioned in Remark <ref>, for a process X_t, θ generated by Γ∈ℝ^d× d, there exist matrices F∈ℝ^d× d and J∈𝒥_d such that F is invertible with F^-1JF = Γ. The transformed process X̃_t, θ = FX_t, θ is then of the required form with parameters θ̃=(J, FΣ F^T, ||F||^2+δc). Assuming that F is uniformly invertible and bounded in norm then ensures that θ̃ satisfies all the assumptions. We prove Theorem <ref> in several steps. The main idea is to split the parameter space Θ into overlapping regions and prove that Theorem <ref> holds in each region. We first consider the following two regions (depending on n), R_n, 0 := {θ∈Θ : |λ_1| ≤ 1 - log n/n}, R_n, d:= {θ∈Θ : |λ_N| ≥ 1 - n^-η}, where η∈(0, 1) is to be specified later. The two regions correspond to the stationary and local-to-unity (non-stationary) regimes, respectively, in the univariate case. Different asymptotics arise depending on the region and, in particular, on how fast the eigenvalues converge to unity. We study each region separately starting with R_n, d. Throughout the rest of this section we assume, unless otherwise specified, that Assumptions <ref> and <ref> hold with F_θ = I. §.§ Non-stationary asymptotics The strategy is to first apply a strong invariance principle for multidimensional martingales difference arrays due to <cit.>, allowing us to assume without loss of generality that ϵ_t, θ is i.i.d. Gaussian. The result only holds for martingale difference sequences, but it is easy to generalize to the class of arrays considered in this paper (see Theorem <ref>). Under the gaussianity assumption, we prove L_2-approximations similar to ones found in <cit.> for the univariate case, which imply the approximations in Theorem <ref>. As a first step, Lemma <ref> allows us to assume that ϵ_t, θ is an i.i.d. Gaussian sequence with mean zero and covariance matrix Σ for each θ∈ R_n, d. Indeed, let ρ_t, θ be as given in the Lemma and define the family (Y_t, θ)_t∈ℕ, θ∈Θ by Y_t, θ = Γ Y_t-1, θ + ρ_t, θ, Y_0, θ = 0 and define the corresponding sample covariances S_YY = 1/n∑_t=1^n Y_t-1, θY_t-1, θ^T, S_Yρ = 1/n∑_t=1^n Y_t-1, θρ_t, θ^T. Then, by Lemma <ref>, we can pick η close enough to 1 such that sup_θ∈ R_n, d{||H^-1/2(S_XX- S_YY) H^-1/2|| + ||√(n)H^-1/2(S_Xϵ - S_Yρ)||} = o_p(1). Let (W_t)_t∈[0,1] be a standard d-dimensional Brownian motion. Since Σ^1/2(W_t/n - W_(t-1)/n)ρ_t, θ/√(n) for all n∈ℕ, 0≤ t ≤ n and θ∈Θ, we get H^-1/2S_Yρ∫_0^1∫_0^t f(t, s, n, θ) dW_s dW_t^T Σ^1/2, H^-1/2S_YY H^-1/2∫_0^1(∫_0^t f(t, s, n, θ) dW_s)(∫_0^t f(t, s, n, θ) dW_s)^T dt, where f(t, s, n, θ) = √(n)H^-1/2Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1Σ^1/21{s ≤⌊ nt⌋ / n}. Let g(t, s, n, θ) = G^-1/2e^(t-s)CΣ^1/2. We have lim_n→∞sup_θ∈ R_n, d𝔼||∫_0^1∫_0^t f(t, s, n, θ) - g(t, s, n, θ)dW_s dW_t^T||^2 = 0 and lim_n→∞sup_θ∈ R_n, d𝔼|| .. ∫_0^1(∫_0^t f(t, s, n, θ) dW_s)(∫_0^t f(t, s, n, θ) dW_s)^T - (∫_0^t g(t, s, n, θ) dW_s)(∫_0^t g(t, s, n, θ) dW_s)^T dt ..||^2 = 0. §.§ Stationary asymptotics The stationary case follows a different strategy. We first show that the classical asymptotic theory for stationary VAR(1) processes applies. Since the regions R_n, 0 and R_n, d overlap, we then prove that the right hand sides of (<ref>) and (<ref>) converge to the standard stationary limiting distributions for C going to -∞ (in terms of its diagonal entries). The standard stationary theory in multiple dimensions mimics the univariate case. We follow the same strategy as in <cit.> but allowing for multiple dimensions and a family of error processes ϵ_t, θ. For θ∈ R_n, 0, we find that, when properly normalized, S_XX converges in probability to the identity matrix and vec(S_Xe) converges in distribution to a d^2-dimensional standard Gaussian. Let V∼𝒩(0, I_d^2). We have, for all ϵ>0 and s∈ [0, 1], lim_n→∞sup_θ∈ R_n, 0ℙ(||1/nH^-1/2(∑_t=1^⌊ ns⌋X_t-1, θX_t-1, θ^T)H^-1/2 - sI|| > ϵ) = 0 and lim_n→∞sup_θ∈ R_n, 0 d_BL(vec(√(n)H^-1/2S_XϵΣ^-1/2), V) = 0. For the special case s=1, equation (<ref>) shows that S_XX converges in probability to the identity matrix. By virtue of Theorem <ref> and Proposition 8 in <cit.>, the proof of Theorem <ref> in the stationary regime is complete if we can show that, for any (θ_n)_n∈ℕ⊂ R_n, 0, G^-1/2∫_0^1 J_t, CJ_t, C^T dt G^-1/2→_w I, G^-1/2∫_0^1 J_t, C dW_t^T →_w N. We emphasize that G and C in eqs. (<ref>)-(<ref>) are functions of θ_n and therefore sequences of matrices. In particular, C_ii≤ nlog(1-log(n)/n)→ -∞ for n→∞ and 1≤ i ≤ d. Eqs. (<ref>)-(<ref>) are therefore a consequence of the following. Let (C_n)_n∈ℕ, (Ω_n)_n∈ℕ⊂ℝ^d× d be sequences of matrices such that C_n is diagonal, (C_n)_ii→ - ∞ for n→∞ and 1≤ i ≤ d, and Ω_n is positive definite with singular values bounded from below and above uniformly over n. Let (W_t)_t∈ [0, 1] be a standard d-dimensional Brownian motion and define the family of d-dimensional Ornstein-Uhlenbeck processes, (J_t, n)_t∈[0,1], n∈ℕ, given by J_t, n = ∫_0^t e^(t-s)C_nΩ_n^1/2dW_s, J_0, n = 0 along with the normalizing matrices G_n = 𝔼(∫_0^1 J_t, n J_t, n^T dt). Then, for n→∞, G_n^-1/2∫_0^1 J_t, nJ_t, n^T dt G_n^-1/2→_p I, vec(G_n^-1/2∫_0^1 J_t, n dW_t^T) →_w V, where V∼𝒩(0, I_d^2). §.§ Mixed asymptotics So far, we only considered cases in which all the eigenvalues are in the same regime, i.e., either close to unity (non-stationary regime) or staying sufficiently far from unity (stationary regime) for a given sample size corresponding to the sets R_n, d and R_n, 0. We have yet to explore what happens when there are eigenvalues in both regimes. We call this case the mixed regime. Define for 1≤ k ≤ d-1 and some fixed γ∈ (0, 1-η) the sets R_n, k = {θ∈Θ : M_i_k = k, |λ_i_k| ≥ 1 - n^-η - γ, |λ_i_k+1|≤ 1 - n^-η - γ} Since 1-n^-η≤ 1 - n^-η - γ≤ 1 - log(n)/n, then for θ∈ R_n, k there are at least k coordinates in the non-stationary regime and d-k coordinates in the stationary regime (and some might be in both). Furthermore, for any n∈ℕ, Θ = ⋃_0≤ k≤ d R_n, k. Thus, showing that (<ref>) and (<ref>) hold uniformly over R_n, k for any fixed 1≤ k≤ d-1 completes the proof of Theorem <ref>. This is the content of the following lemma proved in Appendix <ref>. Let 1≤ k≤ d-1. We have lim_n→∞sup_θ∈ R_n, kd_BL(H^-1/2S_XXH^-1/2, G^-1/2∫_0^1 J_C, tJ_C, t^T dt G^-1/2) = 0, lim_n→∞sup_θ∈ R_n, kd_BL(√(n)H^-1/2S_Xϵ, G^-1/2∫_0^1 J_C, t dW_t^T Σ^1/2) = 0. § UNIFORM INFERENCE Having established the asymptotic properties of the sample covariance matrices S_XX and S_Xϵ, we now seek to develop uniformly valid methods of inference. In particular, Theorem <ref> can be leveraged to construct a confidence region for Γ which is useful for a number of applications. We focus on two important cases in which uniformly valid inference has so far proven challenging: predictive regression testing and coordinate confidence intervals. Inference in these settings can be hard even from a point-wise perspective since the presence of exact unit roots makes it problematic to construct test statistics with standard asymptotic distributions. It is not trivial to conduct inference on Γ even in lieu of Theorem <ref>. The main problem is the presence of the nuisance parameter C(n, θ) in (<ref>)-(<ref>), which cannot be uniformly consistently estimated. Indeed, for a sequence of parameters Γ_n=I - C/n where the real part of the eigenvalues of C∈ℝ^d× d are all strictly negative, the problem is essentially equivalent to estimating C = n(I - Γ). But it is well known that, in this setting, Γ can only be estimated at rate O(n^-1). One way to solve this issue is by the use of test inversion or so-called grid bootstrap methods, which have been widely applied in the unitary case (see for example <cit.> for grid bootstrap and <cit.> for an application to predictive regression). While this is fairly easy in one dimension, adopting these methods to vector autoregressive processes is prohibitive since the computational complexity quickly explodes. We now present a sensible approach based on the results in Section <ref> which keeps computational overhead to a minimum. Throughout this section we omit the dependence on θ in the subscript of all random variables. Consider the least squares estimator, Γ̂, given by Γ̂ = 1/n∑_t=1^n X_tX_t-1^T(1/n∑_t=1^n X_t-1X_t-1^T)^-1 = Γ + S_ϵ X^T S_XX^-1. It follows immediately from Theorem <ref> that Γ̂ is a uniformly consistent estimator of Γ with a rate of convergence depending on the proximity of the eigenvalues of Γ to one. From this we define a uniformly consistent estimator of Σ by averaging the squared residuals, i.e., with ϵ̂_t = X_t - Γ̂X_t-1 we define Σ̂ = S_ϵ̂ϵ̂ where the latter is defined analogously to S_XX with X_t-1 replaced by ϵ̂_t. Another consequence of Theorem <ref> is a uniform approximation of the t^2-statistic. In particular, t̂_Γ^2 = (nΣ̂^-1/2(Γ̂ - Γ)S_XX(Γ̂ - Γ)^TΣ̂^-1/2) →_w ((∫_0^1 Ĵ_t, C dW_t^T)^T(∫_0^1 Ĵ_t, CĴ_t, C^T dt)^-1∫_0^1 Ĵ_t, C dW_t^T ) := t^2_Γ uniformly over Θ where C = C(n, θ) is given in Section <ref> and Ĵ_t, C is defined analogously to J_t, C but with Σ replaced by the consistent estimator Σ̂. Thus, for a fixed significance level, α∈ (0, 1), a uniformly valid 100(1-α)% confidence region for Γ can be constructed by test-inversion. Letting q_n, Γ(1-α) denote the 100(1-α)% quantile of t^2_Γ, we define the confidence region CR_a(α) = {Γ : t̂^2_Γ≤ q_n, Γ(1-α)}. However, the distribution of t^2_Γ is non-standard and, therefore, computing the quantiles q_n, Γ requires extensive simulations and can be quite expensive. Another approach relies on the Gaussian approximations described in Section <ref> and Appendix <ref>. It is similar to Andrew's Method (see <cit.>), and similar in spirit to grid bootstrap. It was originally suggested by <cit.> but has so far only been applied in the univariate case. The approach goes as follows. For a given Γ, define the VAR(1) process, (Y_t)_t∈ℕ, by Y_t = Γ Y_t-1 + e_t, Y_0 = 0 where e_t∼𝒩(0, Σ̂) i.i.d. Let t̃^2_Γ = (nΣ̂^-1/2S_eY S_YY^-1 S_eY^TΣ̂^-1/2) and denote by q̃_n, Γ(1-α) the 100(1-α)% quantile of t̃^2_Γ. A confidence region for Γ is then obtained by CR_b(α) = {Γ : t̂^2_Γ≤q̃_n, Γ(1-α)}. The distribution of t̃^2_Γ is still non-standard, but the quantiles q̃_n, Γ are much easier to compute by simulation. The following Theorem states that both of the above confidence regions are uniformly asymptotically valid over the parameter space Θ. A proof can be found in Appendix <ref>. Fix α∈ (0, 1) and let CR_a(α) and CR_b(α) be as given in (<ref>) and (<ref>). Then, under Assumptions <ref> and <ref>, both are asymptotically uniformly valid over Θ in the sense that lim inf_n→∞inf_θ∈θℙ(Γ∈ CR_a(α)) ≥ 1 - α and lim inf_n→∞inf_θ∈θℙ(Γ∈ CR_b(α)) ≥ 1 - α. Both regions are constructed by test inversion. In particular, we test the hypothesis H_0:Γ = Γ_0 against the alternative H_A: Γ≠Γ_0 and obtain the confidence regions by keeping all values of Γ_0 at which we cannot reject. As such, the above result states that the test has uniform asymptotic level for both choices of the quantile. Once a confidence region for the entire matrix parameter Γ is obtained, we can compute confidence intervals for the individual elements by projection. Say, for example, that we were interested in building an asymptotically uniformly valid confidence interval around Γ_ij. Then, in light of Theorem <ref>, two possible choices would be CI_a, ij(α)={e_i^TΓ e_j : Γ∈ CR_a(α)} and CI_b, ij(α)={e_i^TΓ e_j : Γ∈ CR_b(α)} where e_i, e_j∈ℝ^d is the i'th and j'th unit vector, respectively. Testing whether X_j, t Granger causes X_i, t, which in this case amounts to testing the null H_0: Γ_ij = 0, is then equivalent to checking 0∈ CI_a, ij(α) or 0∈ CI_b, ij(α). The projection approach is known to yield conservative confidence intervals and methods akin to the calibrated projection procedure developed in <cit.> might yield serious improvements in terms of the width of these confidence intervals. §.§ Predictive Regression One application of the above result is robust inference in the predictive regression model. For a deeper discussion of why uniformly valid inference methods are important in this setting see <cit.>, which cover the case of a univariate regressor, but the same considerations hold more generally. To fix ideas, consider the following set of parameters Θ_P = {θ∈Θ : Γ_1j= 0 ∀ j=1,..., d}. If X_t is a VAR(1)-process satisfying Assumption <ref> parameterized by θ∈Θ_P, then we can split X_t=(Y_t, X̃_t^T)^T and ϵ_t = (ρ_t, ϵ̃_t^T)^T into their first coordinate and their last d-1 coordinates such that Y_t = γ^T X̃_t-1 + ρ_t, X̃_t = Γ̃ X̃_t-1 + ϵ̃_t, where γ^T = (Γ_1j)_2≤ j≤ d and Γ̃ = (Γ_ij)_2≤ i,j≤ d. The parameter of interest is γ. The standard approach is to compute the least squares estimator γ̂ and base inference on the t^2-statistic. Unfortunately, we encounter the same issues as described above. To see this, let Σ_Y = Σ_11, Σ_X = (Σ_i, j)_2≤ i,j≤ d, Σ_YX = (Σ_1, j)_2≤ j≤ d, and Σ_XY=Σ_YX^T and define δ = Σ_X^-1Σ_XY. Then, adopting previous notation, Theorem <ref> yields t̂_γ^2 →_w (∫_0^1 J̃_t,CdB_1,t)^T(∫_0^1 J̃_t, CJ̃_t, C^T dt)^-1∫_0^1 J̃_t, C dB_1,t =: t_γ^2 uniformly over Θ_P where J̃_t, C consists of the last d-1 coordinates of Ĵ_t, C and B_1, t is the first coordinate and B_2, t the last d-1 coordinates of W_t Σ̂^1/2. Since B'_1,t = (Σ_Y - δ^T Σ_Xδ)^-1/2(B_1, t - δ B_2, t) is a standard (d-1)-dimensional Brownian motion independent of B_2, t satisfying B_1, t = δ B_2, t + (Σ_Y - δ^T Σ_Xδ)^1/2B'_1, t, we find that t_γ^2 ||(Σ_Y - δ^T Σ_Xδ)^1/2Z + Z_Γ̃δ||^2, where Z_Γ̃=(∫J̃_t, CJ̃_t, C^Tdt)^-1/2∫J̃_t, CdB_2, t, and Z is a (d-1)-dimensional standard normal vector independent of Z_Γ̃. The nuisance parameter, C=C(n, θ), is therefore also present in the distribution of t^2_γ via Z_Γ̃ necessitating alternative methods of inference. Using the results of Theorem <ref>, we can adopt the univariate Bonferroni approach of <cit.> to obtain uniformly asymptotically valid confidence intervals. Say we want to find a confidence region for γ with significance level α∈ (0, 1). For α_1, α_2∈ (0, 1) with α_1 + α_2 = α, the construction proceeds as follows: First construct a 100(1-α_1)% confidence region for Γ̃ using, e.g., either CR(α_1) = CR_a(α_1) or CR(α_1) = CR_b(α_1) (suitably modified for d-1 dimensions). Then, for each Γ̃∈ CR(α_1), let CR_γ|Γ̃(α_2) be a 100(1-α_2)% confidence region for γ given Γ̃. A confidence region not depending on Γ̃ and with coverage of at least 100(1-α)% is then obtained via a Bonferroni correction: CR_γ(α_1, α_2) = ⋃_Γ̃∈ CR(α_1)CR_γ|Γ̃(α_2). Let γ̂_Γ̃ be the estimator obtained by regressing Y_Γ̃, t = Y_t - Σ̂_YXΣ̂_X^-1(X̃_t - Γ̃X̃_t-1) on X̃_t-1 with standard error σ̂^2_Y = Σ̂_Y - Σ̂_YXΣ̂_X^-1Σ̂_XY. A choice for CR_γ|Γ̃(α_2) is then given by CR_γ|Γ̃(α_2) = {γ : σ̂_Y^-2t̂_γ|Γ̃^2 ≤ q_d-1, 1-α_2}, with q_d-1, 1-α_2 denoting the 1-α_2 quantile of the χ^2_d-1 distribution and t̂^2_γ|Γ̃ the usual t^2-statistic for the estimator γ̂_Γ̃ evaluated at γ. A proof of the following is given in Appendix <ref>. For fixed significance levels α_1, α_2∈ (0, 1) with α_1 + α_2∈ (0, 1), let CR_γ(α_1, α_2) be the confidence interval given in (<ref>) with CR(α_1) having uniform asymptotic level and CR_γ|Γ̃(α_2) as given in (<ref>). Then, under Assumptions <ref> and <ref>, lim inf_n→∞inf_θ∈Θ_Pℙ(γ∈ CR_γ(α_1, α_2)) ≥ 1 - α_1 - α_2. The above result is easy to extend to hypothesis testing. Say, e.g., that we want to test the null of no predictive information in the regressor, H_0:γ = 0, versus the alternative H_A:γ≠ 0. This is equivalent to checking whether 0∈ CR_γ(α_1, α_2). Alternatively, the test ϕ_n:(ℝ^d)^n→{0, 1} given by ϕ_n = 1(inf_Γ̃∈ CR(α_1)σ̂^-2_Y t̂^2_0|Γ̃≤ q_d-1, 1-α_2) has asymptotic uniform level and does not require the explicit computation of the confidence regions CR_γ|Γ̃(α_2). §.§ Lag agumentation Let us end this section by introducing some other approaches that have been suggested to be robust against deviations from exact unit root assumptions. The first one we shall focus on is the lag-augmented VAR methodology proposed by <cit.>. In our setup of VAR(1) processes this approach regresses X_t on X_t-1 and the additional augmented lag X_t-2 upon which standard inference methodology is valid. In particular, consider the case of testing whether X_j, t Granger causes X_i, t, that is, testing H_0:Γ_ij = 0 against the alternative H_A: Γ_ij≠ 0. Let Π̂_LA∈ℝ^d× 2d denote the least squares estimator in the lag-augmented regression of X_t on X̅_t = (X_t-1^T, X_t-2^T)^T and, with D = (I_d, 0)^T, define Γ̂_LA = Π̂_LAD along with the t^2-statistic t̂^2_LA, Γ_ij = n (Γ̂_LA, ij - Γ_ij)^2 /σ̂^2_LA, ij where σ̂^2_LA, ij = E_ijΣ̂_LAE_ij^T, E_ij = e_j^T⊗ e_i^T and Σ̂_LA = Σ̂^-1⊗Σ̂. We then get the following lemma which shows that including an additional lag in the regression makes uniform inference straightforward. We note that previous results of this kind have only been stated in a pointwise manner whereas the lemma below fits into the current framework of uniformly valid inference. For 1≤ i, j≤ d, let t̂^2_LA, Γ_ij be defined as in (<ref>). Assume <ref> and <ref>. Then, as n→∞, t̂^2_LA, Γ_ij→_w χ^2_1 uniformly over Θ. Thus, for fixed α∈ (0, 1), the confidence interval CI_LA, ij(α) = (Γ̂_LA, ij - z_1-α/2σ̂_LA, ij/√(n), Γ̂_LA, ij + z_1-α/2σ̂_LA, ij/√(n)), where z_1-α/2 is the 1-α/2 standard normal quantile, has asymptotic uniform level in the sense that lim inf_n→∞inf_θ∈Θℙ(Γ_ij∈ CI_LA, ij(α)) ≥ 1 - α. The Lemma holds more generally for confidence intervals of any linear function of vec(Γ). The key ingredient that facilitates standard inference is the fact that √(n)(Γ̂_LA - Γ) converges uniformly in distribution over Θ to a family of d-dimensional Gaussians. In particular, there is no need for normalization, since all components converge at the same rate O(√(n)). This is contrary to the limiting behaviour of Γ̂ which needs to be normalized by the matrix H^-1/2 since the presence of roots close to unity make certain parts of Γ̂ super efficient. This also suggests some loss of efficiency when using lag augmentation, which does not come as a surprise since we are essentially overfitting the model. §.§ IVX Another approach, known as IVX, that deals specifically with the potential presence of unit roots uses endogenously constructed instrumental variables to slow down the rate of convergence of the estimator enough to ensure mixed Gaussian limiting distributions. It was first suggested by <cit.> and later extended in <cit.>. The most general framework considered so far was condidered in <cit.>. However, they make the crucial assumption that all roots converge to unity at the same speed. While this allows for easy construction of confidence intervals of general linear functions of Γ and simplifies the theory somewhat, this is a significant restriction. In particular, it does not yield uniform guarantees as the ones discussed in this paper. This excludes, for example, the mixed regime discussed in Section <ref> covering cases where parts of the process are stationary and others exhibit random walk behaviour. In this section we detail how one may achieve truly uniform results. This comes at the cost of less general confidence regions, which is essentially because we need to employ different normalizations depending on how close the different roots are to unity. This is akin to using t̂^2_Γ for inference. The idea of IVX is fairly simple. We can achieve Gaussian asymptotics by constructing an endogenously generated instrument that lies in the stationary regime and then perform IV-regression. For some fixed β∈(1/2, 1), we define, for each n∈ℕ, the instrument (Z_t)_t∈ℕ by Z_t = (1-n^-β)Z_t-1 + Δ X_t, Z_0 = 0, where we have suppressed the dependence on n in the notation. For each n∈ℕ, Z_t is a VAR(1) process with the error terms given by Δ X_t and the sequence of coefficients, (1-n^-β)I, fall inside the stationary regime. The IVX estimator is the IV estimator of regressing X_t on X_t-1 and the instrumental variable Z_t-1, i.e., Γ̂_IV = ∑_t=1^n X_tZ_t-1^T(∑_t=1^n X_t-1Z_t-1^T)^-1. The corresponding t^2-statistic for testing the null H_0:Γ = Γ_0 is then given by t̂^2_IV, Γ_0 = (nΣ̂^-1/2(Γ̂_IV - Γ_0)S_XZS_ZZ^-1S_ZX(Γ̂_IV - Γ_0)^TΣ̂^-1/2), where S_XZ and S_ZZ are defined analogously to S_XX. It turns out that inference based on this statistic is standard. In particular, t̂^2_IV, Γ has the standard asymptotic χ^2_d^2 distribution uniformly over the parameter space Θ. Assume that Assumptions <ref> and <ref> are true. Let Γ̂_IV be the IVX estimator and t̂^2_IV, Γ the corresponding t^2-statistic as defined above. Then, for n→∞, t̂^2_IV, Γ→χ^2_d^2 uniformly over Θ. Thus, for fixed α∈ (0, 1), the confidence region CR_IV, Γ(α) = {Γ : t̂^2_IV, Γ≤ q_d^2, 1-α} has asymptotic uniform level in the sense that lim inf_n→∞inf_θ∈Θℙ(Γ∈ CR_IV, Γ(α)) ≥ 1 - α. The use of the instrumental variable Z_t simplifies inference. Indeed, the asymptotic distribution is standard and therefore there is no need for extensive simulations as in the case of Theorem <ref>. There is, however, some loss of efficiency. Lemma <ref> shows that the estimator Γ̂_IV converges at rate of n^β or slower. If β is close to one or if all the roots converge to unity at rate that is slower than n^β, this will not be a problem, but in general Γ̂ is a more efficient estimator of Γ. § SIMULATIONS In this section we investigate the finite sample properties of the methods described in the preceding section. First we consider the problem of constructing a confidence interval for Γ_ij. Since there is nothing special about the choice of i and j we choose to focus on Γ_11 for simplicity. The second problem we consider is that of testing H_0:γ = 0 in the predictive regression model. §.§ Confidence intervals Throughout we fix the significance level at α = 0.05. We compare three different ways of constructing confidence intervals for Γ_11. The first is lag-augmentation yielding the confidence interval CI_LA(α)=CI_LA, 11(α) as given in Lemma <ref>. The other two methods first compute a confidence region for the entire matrix Γ and then find a confidence interval for Γ_11 by projecting the confidence region onto the first coordinate. That is, for a given confidence region of Γ with (1 - α)100% coverage, CR(α), we obtain the projected confidence interval CI(α)= (inf_Γ∈ CR(α)Γ_11, sup_Γ∈ CR(α)Γ_11). We let CI_b(α) (respectively CI_IV(α)) be the confidence interval obtained by projecting CR_b(α) (respectively CR_IV(α)). Of these three confidence intervals, CI_b is by far the most costly to compute as the dimension increases. The constraint in the optimization problem is costly to evaluate due to the need for simulations to compute the critical value q̃_n, Γ(1-α) at each Γ. This issue can, however, be partly resolved by the use of the EAM-algorithm as described in <cit.>. See Appendix <ref> for details. CI_IV also involves an optimization problem, but the constraint is cheap to evaluate since the critical value in CR_IV is fixed and standard. CI_IV can be computed by using standard solvers. We let β=0.9 in the IVX regression. To verify that the confidence intervals are truly uniform, we look at choices of Γ with eigenvalues in different regimes. In particular, for a fixed dimension d and sample size n, we consider Γ∈ℝ^d× d with eigenvalues λ_1(Γ) = 1 and λ_i(Γ) = 1 - (1/n)^1/(i-1) for i=2,...,d. For each simulation we draw a new set of random eigenvectors and the errors are i.i.d. Gaussian with non-diagonal covariance matrix. For a detailed explanation of the setup, see Appendix <ref>. The results are recorded in Table <ref>. All three confidence intervals have coverage greater than 0.95 in every case. As expected, both CI_b and CI_IV are conservative with practically a 100% coverage. This is already apparent in 3 dimensions. Despite the loss of efficiency, however, both yield shorter intervals than CI_LA in 3 dimensions for all three sample sizes. This can most likely be attributed to Γ having multiple roots close to unity, implying that the lag-augmented estimator converges at a rate slower than the IVX and the LS estimators. This advantage more or less vanishes in 4 dimensions and in 5 dimensions CI_LA is the clear winner. Intuitively, the dimension of the confidence regions is quadratic in d and the loss suffered by projection methods therefore quickly sets in. Interestingly, this phenomenon seems less pronounced for higher sample sizes. Another key result in Table <ref> is that CI_b is wider than CI_IV in every case. This is counter to the fact that the least squares estimator should be more efficient. One possible explanation is that Γ has roots that converge to 1 at a slower rate than (1 / n)^β limiting the loss of efficiency of the IVX estimator. Another possible explanation is that finite sample behaviour of t̂^2_IV is different from the asymptotic χ^2-distribution for the sample sizes considered here resulting in a confidence region for Γ with slightly lower coverage but still with conservative coverage when projected onto the first coordinate. §.§ Predictive regression testing Throughout we fix α = 0.1. Consider the problem of testing H_0:γ = 0 against the alternative H_A:γ≠ 0 in the predictive regression model. Similar to the previous section, we compare three methods. The first one uses lag-augmentation. Similar to Lemma <ref> and as noted in Remark <ref>, lag-augmentation can be applied to test general linear hypotheses on Γ. Instead of regressing Y_t only on X̃_t-1, we regress Y_t on (X̃_t-1^T, X̃_t-2^T)^T and compute the corresponding t^2-statistic for the null of no predictive information. By the same arguments as in Lemma <ref> the statistic converges in distribution to χ^2_d-1 uniformly over Θ_P. Thus, picking q_d-1, 1-α as the critical value yields a test with uniform asymptotic level. We denote the test based on lag-augmentation by ϕ_LA. The other two tests employ the Bonferroni strategy described in Section <ref>. In particular, for α_1 = 0.05, α_2=0.05, and CR(α_1) a confidence region for Γ̃ with uniform asymptotic level α_1, we define, as in Remark <ref>, the test ϕ_n = 1(inf_Γ̃∈ CR(α_1)σ̂^-2_Yt̂^2_0|Γ̃≤ q_d-1, 1-α_2). We consider CR(α_1) = CR_b(α_1) and CR(α_1) = CR_IV(α_1) denoting the corresponding tests by ϕ_b and ϕ_IV.[Here, CR_b(α_1) and CR_IV(α_1) should be considered as confidence regions for Γ̃ and not the entire matrix Γ.] By Lemma <ref> in combination with Theorem <ref> and Theorem <ref>, the tests will have uniform asymptotic level. Again, computing the two latter test statistics involves an optimization problem. As in the case of the projection confidence intervals, it is much costlier to compute ϕ_b and we employ the EAM-algorithm described in Appendix <ref> as a practically feasible solution to this problem. We perform two sets of simulation experiments for the three tests. To verify that the uniform guarantees hold, we consider a sequence of Γ̃ in the mixed regime. Throughout, we let d=4, 5, 6 and Γ̃∈ℝ^(d-1)× (d-1) is chosen as above. We also consider the case Γ̃=I so that X̃_t is a random walk, i.e., Γ̃ is in the non-stationary regime. First we investigate the size properties of the three tests for different sample sizes. The results are depicted in Figure <ref>. Evidently, ϕ_b and ϕ_IV quickly achieve a rejection rate well below the significance level for all three dimensions and in both regimes. This is in line with the theory since the Bonferroni corrections inherent in these tests should result in tests with conservative sizes. For ϕ_LA the asymptotics take a little longer to set in. At around n=150 it achieves the correct size for 3 and 4 dimensions, but convergence is slower in 5 dimensions. To compare the power of the three tests at finite sample sizes we consider a sequence of alternatives increasingly closer to 0. In particular, we let γ = δ1 for δ∈{0.005, 0.01,..., 0.1} and fix the sample size at n=100. In both regimes and for all three choices of d, ϕ_b and ϕ_IV vastly outperform ϕ_LA correctly rejecting the null around 90% of the time in the mixed regime for δ=0.04 compared to a rejection rate of only around 18% for ϕ_LA. It looks as though ϕ_b is slightly better than ϕ_IV especially as the dimension increases, but the two are close overall. Interestingly, both tests seem to fare better in the mixed regime than in the stationary regime although we would expect the LS and the IVX estimator to converge at a slower rate in the mixed regime. This might be related to the Bonferroni correction and the shape of the confidence regions in the different regimes. The performance of ϕ_LA does not depend much on the regime, but it does seem to slightly improve with the size of the dimension. The latter observation also holds for the other two tests and is probably a reflection of the fact that the alternative is easier to detect in larger dimensions. § CONCLUSION We proved two major uniform asymptotic results for the sample covariance matrices of VAR(1) processes with potential unit roots. First we showed that S_XX and S_Xϵ can be uniformly approximated by their Gaussian counterparts S_YY and S_Yρ. This result was used to derive another uniform approximation involving integrals of Ornstein-Uhlenbeck processes. These results are a direct extension of similar results in one dimension <cit.> (and, in fact, slightly more general since our results also hold uniformly over the error processes provided that certain assumptions are fulfilled). While uniform asymptotic results akin to those presented here have been given in the literature for specific sequences of Γ, this is the first time anything has been proven that is truly uniform over the parameter space Θ. In particular, the results cover cases in which the eigenvalues converge to 1 at different rates (the so-called mixed regime). As such, we have provided a unified characterization of the asymptotics of VAR(1) processes with potential unit roots. As an application of the uniform approximation results, we showed how to construct confidence regions for Γ with uniform asymptotic level. Unfortunately it is not possible to construct confidence regions for general linear hypotheses because of the normalizing matrix H. Similarly, we proved that the IVX methodology also yields a confidence region for Γ with uniform asymptotic level. This is a stronger version of the guarantees for IVX given in the literature so far in that the usual results require that the eigenvalues of Γ all fall in the same regime. However, as mentioned, this result does not hold for general linear hypotheses. While the asymptotic theory only provided confidence regions for the entire matrix Γ we showed how these confidence regions might be utilized for different kinds of inference. We considered two different problems. First, by projecting the confidence region, we showed how to construct confidence intervals for individual entries of Γ. Next, we extended the Bonferroni method for predictive regression testing to the multivariate setting. In our simulation experiments we saw that both of these methods compare favourably to the use of lag augmentation, although the loss suffered by projection increases quickly for higher dimensions. This opens the door for future research into calibration methods like those presented in <cit.>. Correct coverage might be achieved by adjusting the critical value since the ones used are effectively too large. Similarly, there might be better ways of predictive regression testing than a simple Bonferroni correction. One could, for example, draw on the ideas presented in <cit.>. We leave these directions open for future research. Finally, we argued that the EAM-algorithm in <cit.> can be used to solve the optimization problems that arise in computing CI_b and ϕ_b making these methods feasible in moderate dimensions. We did not actually prove that the algorithm is guaranteed to converge in our setting. While this should not be a major difficulty, it is beyond the scope of this paper. Other possible extensions are the inclusion of deterministic trends and higher order VAR processes. § PROOFS §.§ Nonstationary asymptotics We first tackle the proof of (<ref>). Define the functions h_1(t, s, n, θ) = √(n) H^-1/2Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1Σ^1/2, h_2(t, s, n, θ) = √(n) H^-1/2e^(t-s)C̃Σ^1/2, h_3(t, s, n, θ) = √(n) H^-1/2e^(t-s)CΣ^1/2, where C̃ = nlogΓ is well defined for all θ∈ R_n, d. When it does not cause confusion, we shall omit the arguments of functions and simply write f, g, h_1, h_2, and h_3. By applying the Itô isometry twice we find that the expectation in (<ref>) is equal to ∫_0^1∫_0^t ||f - g||^2 ds dt ≤ 4∫_0^1∫_0^t ||f - h_1||^2 + ||h_1 - h_2||^2 + ||h_2 - h_3||^2 + ||h_3 - g||^2 ds dt where the inequality is Jensen's inequality. For the first term, for any θ∈ R_n, d, ∫_0^1∫_0^t ||f - h_1||^2 ds dt = ∫_0^1∫_⌊ nt⌋ / n^t || √(n) H^-1/2Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1Σ^1/2||^2 ds dt = (n∫_0^1 t - ⌊ nt⌋/n dt)|| H^-1/2Γ^-1Σ^1/2||^2 ≤|| H^-1/2Γ^-1Σ^1/2||^2 ≤||Γ^-1Σ^1/2||^2(H^-1). Here Γ^-1 is well-defined since θ∈ R_n, d. Thus, lim_n→∞sup_θ∈ R_n, d∫_0^1∫_0^t ||f - h_1||^2 ds dt ≤lim_n→∞sup_θ∈ R_n, d||Γ^-1Σ^1/2||^2 (H^-1) = 0 where we use that Γ^-1Σ^1/2 is uniformly bounded on R_n, d in combination with Lemma <ref>. For the second term, we first note that, due to the construction of Θ, for n large enough, we may assume that ||Γ - I||< 1 for any θ∈ R_n, d. We then get Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1 = e^(⌊ nt⌋ - ⌊ ns⌋ - 1)logΓ whence ||h_1 - h_2||^2 = ||√(n) H^-1/2e^(⌊ nt⌋ - ⌊ ns⌋ - 1)logΓ(I - e^((t-s)n - (⌊ nt⌋ - ⌊ ns⌋ - 1))logΓ)Σ^1/2||^2 ≤||√(n) H^-1/2Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1||^2 c(t, s, n, θ) ^2||logΓ||^2 where c(t, s, n, θ)=((t-s)n - (⌊ nt⌋ - ⌊ ns⌋ -1))||Σ^1/2||e^||((t-s)n - (⌊ nt⌋ - ⌊ ns⌋ - 1))logΓ|| and the inequality follows from ||e^A - e^B|| ≤ ||A - B|| e^max{||A||, ||B||} for any A, B ∈ℝ^d× d. It is easy to see that lim_n→∞sup_θ∈ R_n, d||logΓ||^2 = 0 and that lim sup_n→∞sup_θ∈ R_n, dsup_t∈[0,1]sup_s ∈[0, t] c(t, s, n, θ)^2 ≤ c_0 < ∞. We also have ∫_0^1∫_0^⌊ nt⌋ /n||√(n) H^-1/2Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1||^2 ds = 1/n∑_t=2^n∑_s=1^t-1||H^-1/2Γ^t-1-s||^2 ≤||Σ^-1||1/n∑_t=2^n∑_s=1^t-1||H^-1/2Γ^t-1-sΣ^1/2||^2 = d||Σ^-1|| where the last equality follows from 1/n∑_t=2^n∑_s=1^t-1||H^-1/2Γ^t-1-sΣ^1/2||^2 = (H^-1/21/n∑_t=2^n∑_s=1^t-1Γ^t-1-sΣ(Γ^t-1-s)^T H^-1/2) = (H^-1/2𝔼(1/n∑_t=1^n X_t-1, θX_t-1, θ^T) H^-1/2) = (I)=d. As shown above lim sup_n→∞sup_θ∈ R_n, d∫_0^1∫_⌊ nt⌋ /n^t||√(n)H^-1/2Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1||^2 ds dt = 0 so that lim sup_n→∞sup_θ∈ R_n, d∫_0^1∫_0^t||√(n) H^-1/2Γ^⌊ nt ⌋ - ⌊ ns ⌋ - 1||^2 ds dt ≤ c_1 < ∞. Combining these results then yields lim_n→∞sup_θ∈ R_n, d∫_0^1∫_0^t ||h_1 - h_2||^2 ds dt ≤ c_0c_1 lim_n→∞sup_θ∈ R_n, d||logΓ||^2 = 0. For the third term, for n sufficiently large, Γ is diagonal for all θ∈ R_n, d and therefore C̃ and C commute. A similar argument as that applied to the second term then yields lim_n→∞sup_θ∈ R_n, d∫_0^1∫_0^t ||h_2 - h_3|| = 0. Finally, for the fourth term it suffices to show that ||√(n)H^-1/2G^1/2-I|| converges uniformly over R_n, b to 0 for n going to infinity. Indeed, since ∫_0^1∫_0^t||h_2 - g||^2 ≤||√(n)H^-1/2G^1/2-I||^2 ∫_0^1∫_0^t || G^-1/2e^(t-s)CΣ^-1/2||^2 ds dt ≤||√(n) H^-1/2G^1/2-I||^2 ∫_0^1∫_0^t || G^-1/2e^(t-s)CΣ^-1/2||^2 ds dt = ||√(n) H^-1/2G^1/2-I||^2 (G^-1/2𝔼(∫_0^1J_t, CJ_t, C^T dt) G^-1/2) = d||√(n) H^-1/2G^1/2-I||^2, this would imply lim_n→∞sup_θ∈ R_n, d∫_0^1∫_0^t||h_3 - g||^2 ≤ c_1 lim_n→∞sup_θ∈ R_n, d||√(n)H^-1/2G^1/2-I|| = 0. To prove the claim, we first consider nH^-1/2GH^-1/2. By the Itô isometry we have nH^-1/2GH^-1/2 = ∫_0^1∫_0^t h_3 h_3^T ds dt. Also, similar to above, ∫_0^1∫_0^t ff^T ds dt = H^-1/21/n∑_t=2^n∑_s=1^t-1Γ^t-1-sΣ(Γ^t-1-s)^T H^-1/2 = I. Thus, || nH^-1/2GH^-1/2 - I|| = ||∫_0^1∫_0^t h_3h_3^T - ff^T ds dt|| ≤∫_0^1∫_0^t ||(h_3 - f)h_3^T|| ds dt + ∫_0^1∫_0^t ||f(h_3 - f)^T|| ds dt. Now, since lim sup_n→∞sup_θ∈ R_n, d∫_0^t ||h_3||^2 + ||f||^2 ds dt < ∞, and, as shown above, lim_n→∞sup_θ∈ R_n, d∫_0^1∫_0^t ||h_3 - f||^2 ds dt = 0, Hölder's inequality yields lim_n→∞sup_θ∈ R_n, d|| nH^-1/2GH^-1/2 - I|| = 0. By Lemma <ref> this implies that ||√(n)H^-1/2G^1/2-I|| converges uniformly over R_n, b to 0 for n going to infinity. For the proof of (<ref>) we start with the following chain of inequalities ||∫_0^1∫_0^t f dW_s(∫_0^t f dW_s)^T - ∫_0^t g dW_s(∫_0^t g dW_s)^T dt || ≤ ||∫_0^1∫_0^t f dW_s(∫_0^t f - g dW_s)^Tdt|| + ||∫_0^1∫_0^t f - g dW_s(∫_0^t g dW_s)^Tdt|| ≤ ∫_0^1(||∫_0^t f dW_s|| + ||∫_0^t g dW_s||)||∫_0^t f - g dW_s|| dt ≤ (∫_0^1(||∫_0^t f dW_s|| + ||∫_0^t g dW_s||)^2 dt)^1/2(∫_0^1||∫_0^t f - g dW_s||^2 dt)^1/2 ≤ (2∫_0^1||∫_0^t f dW_s||^2 + ||∫_0^t g dW_s||^2 dt)^1/2(∫_0^1||∫_0^t f - g dW_s||^2 dt)^1/2 where the second to last inequality is Hölder's inequality. By the Itô isometry and Fubini's theorem we have, for any θ∈ R_n, d, 𝔼(∫_0^1||∫_0^t f dW_s||^2 + ||∫_0^t g dW_s||^2 dt) = ∫_0^1∫_0^t ||f||^2 + ||g||^2 ds dt = 2d and 𝔼(∫_0^1||∫_0^t f - g dW_s||^2 dt) = ∫_0^1∫_0^t ||f - g||^2 ds dt so that, by the same argument as in the proof of equation (<ref>), lim_n→∞sup_θ∈ R_n, d||∫_0^1∫_0^t f dW_s(∫_0^t f dW_s)^T - ∫_0^t g dW_s(∫_0^t g dW_s)^T dt ||^2 ≤ 4d lim_n→∞sup_θ∈ R_n, d∫_0^1∫_0^t ||f - g||^2 ds dt = 0. §.§ Stationary asymptotics Before proving Theorem <ref> we need two auxiliary results on the rate of convergence of X_t-1, nX_t-1, n^T and S_Xϵ akin to Lemma 3.1 in <cit.>. For all s∈ [0, 1], we have sup_θ∈ R_n, 0||X_⌊ ns⌋, θX_⌊ ns ⌋, θ^T|| = o_p(n/√(log n)) and sup_θ∈ R_n, 0||1/n∑_t=1^⌊ ns⌋X_t-1, θϵ_t, θ^T|| = o_p(1/√(log n)). We first proof (<ref>). Since X_⌊ ns⌋, θX_⌊ ns ⌋, θ^T is positive semidefinite, it suffices to show that sup_θ∈ R_n, 0(𝔼(X_⌊ ns⌋, θX_⌊ ns ⌋, θ^T)) = o(n/√(log n)) for all s∈ [0, 1]. Now, fix some s∈[0, 1]. We have, for all θ∈Θ, (𝔼(X_⌊ ns⌋, θX_⌊ ns ⌋, θ^T)) ≤(𝔼(X_n, θX_n, θ^T)) = (∑_t=0^n-1Γ^t Σ(Γ^t)^T). The result then follows directly from part <ref> of Lemma <ref>. For the proof of (<ref>), fix some θ∈Θ and write 𝔼((∑_t=1^n X_t-1, θϵ_t, θ)(∑_t=1^n X_t-1, θϵ_t, θ)^T) = 𝔼(∑_t=1^n∑_s=1^t-1Γ^t-1-sϵ_s, θe_t, θ^Tϵ_t, θϵ_s, θ^T(Γ^t-1-s)^T) = (Σ)∑_t=1^n∑_s=1^t-1Γ^t-1-sΣ(Γ^t-1-s)^T ≤ (Σ) n ∑_t=1^n Γ^t Σ(Γ^t)^T so that another application of part <ref> of Lemma <ref> shows that sup_θ∈ R_n, 0𝔼||1/n∑_t=1^n X_t-1, θϵ_t, θ^T||^2 = o_p(1/log n). The result then follows since 𝔼||1/n∑_t=1^⌊ ns ⌋ X_t-1, θϵ_t, θ^T||^2 ≤𝔼||1/n∑_t=1^n X_t-1, θϵ_t, θ^T||^2 for all s∈ [0, 1] and θ∈Θ. We shall first tackle the proof of (<ref>). Fix some s∈[0, 1] and define S̃_XX = 1/n∑_t=1^⌊ ns⌋ X_t-1, θX_t-1, θ^T for ease of notation. From the relation X_t, θ = Γ X_t-1, θ + ϵ_t, θ, it follows that Γ X_t-1, θX_t-1, θ^TΓ^T - X_t-1, θX_t-1, θ^T - ϵ_t, θϵ_t, θ^T = X_t, θX_t, θ - X_t-1, θX_t-1, θ^T - Γ X_t-1, θϵ_t, θ^T - ϵ_t, θX_t-1, θ^TΓ^T. Summing over t and dividing by n then gives S̃_XX = ΓS̃_XXΓ^T + sΣ - S_n where S_n = 1/n(X_n, θX_n, θ^T - ∑_t=1^⌊ ns⌋(ϵ_t, θϵ_t, θ^T - Σ) - ∑_t=1^⌊ ns⌋Γ X_t-1, θϵ_t, θ^T - ∑_t=1^⌊ ns⌋ϵ_t, θX_t-1, θ^TΓ^T). We can iterate this identity to get S̃_XX = ∑_t=0^⌊ ns⌋-2Γ^tΣ(Γ^t)^T + ∑_t=0^⌊ ns⌋-2Γ^t S_n (Γ^t)^T + Γ^⌊ ns⌋-1S̃_XX(Γ^⌊ ns⌋-1)^T. Now, define A_n = H^-1/2∑_t=0^⌊ ns⌋-2Γ^t S_n (Γ^t)^T H^-1/2, B_n = Γ^⌊ ns⌋-1S̃_XX(Γ^⌊ ns⌋-1)^T. By Lemma <ref>, we are done if we can show that sup_θ∈ R_n, 0 ||A_n|| + ||B_n|| = o_p(1). By Lemma <ref> and <ref> we see that √(log n)S_n converges uniformly to 0 over R_n, 0. But then, since Σ is uniformly bounded from below over Θ, for a fixed ϵ>0, we can find n_0∈ℕ large enough so that sup_θ∈ R_n, 0ℙ(||√(log n)A_n|| ≥||H^-1/2(∑_t=0^⌊ ns⌋-2Γ^t Σ(Γ^t)^T)H^-1/2||) < ϵ for all n≥ n_0. It then follows from Lemma <ref> that sup_θ∈ R_n, 0||A_n|| = o_p(1). Next, we see that 𝔼B_n = Γ^⌊ ns⌋-1H̃(Γ^⌊ ns⌋-1)^T where H̃ = 𝔼(S̃_XX) and therefore sup_θ∈ R_n, 0||𝔼B_n|| ≤ C (1 - log n/n)^2(⌊ ns⌋-1)||H̃|| by part <ref> of Lemma <ref>. Finally, from the inequality 1 - n/log n = e^log(1 - log(n)/n)≤ e^-log(n)/n = 1/n^1/n and part <ref> of Lemma <ref> we see that sup_θ∈ R_n, 0||𝔼B_n|| = o(1). Since B_n is positive semidefinite, this implies that B_n converges in probability to 0 uniformly over R_n, 0 and therefore concludes the proof of (<ref>). For the proof of (<ref>), let (θ_n)_n∈ℕ⊂Θ be such that θ_n∈ R_n, 0 and define the array (e_t, n)_t≥ 1, n∈ℕ by e_t,n = vec(n^-1/2H^-1/2X_t-1, θ_nϵ_t, θ_n^TΣ_n^-1/2) = n^-1/2Σ^-1/2ϵ_t, θ_n⊗ H^-1/2X_t-1, θ_n. Proving (<ref>) is then equivalent to proving ∑_t=1^n e_t, n→_w 𝒩(0, I). Since X_t-1, θ_n is measurable wrt. ℱ_t-1, we see that e_t, n is a martingale difference array and, by (<ref>), ∑_t=1^n𝔼(e_t, ne_t, n^T | ℱ_t-1) →_p I for n→∞. Our aim is to apply the martingale difference array CLT given in Theorem <ref> which amounts to checking that, for each γ > 0, ∑_t=1^n 𝔼(||e_t, n||^2 1(|| e_t, n|| > γ)|ℱ_t-1) = o_p(1). Now, fix some γ >0 and note that ||e_t, n||^2 = ||H^-1/2X_t-1, θ_n||^2 ||ϵ_t, θ_n||^2 so that ∑_t=1^n 𝔼(||e_t, n||^2 1(|| e_t, n|| > γ)|ℱ_t-1) ≤ 1/n∑_t=1^n ||H^-1/2X_t-1, θ_n||^2𝔼(||Σ^-1/2ϵ_t, n||^2 1(|| e_t, n|| > γ)|ℱ_t-1) ≤ C_nmax_1≤ t≤ n{𝔼(||ϵ_t, n||^2 1(|| e_t, n|| > γ)|ℱ_t-1)} where C_n = sup_θ∈ R_n, 0(H^-1/2S_XXH^-1/2)(Σ) = O_p(1) because of (<ref>). An application of Hölder's inequality and the Markov inequality gives us 𝔼(||ϵ_t, n||^2 1(|| e_t, n|| > γ)|ℱ_t-1) ≤ 𝔼(||ϵ_t, θ_n||^2 + δ | ℱ_t-1, n)^2/2+δℙ(||e_t, n|| > γ | ℱ_t-1, n)^δ/2+ δ ≤ 𝔼(||ϵ_t, θ_n||^2 + δ | ℱ_t-1, n)^2/2+δ(𝔼(||H^-1/2X_t-1, θ_n||^2||Σ^-1/2ϵ_t, θ_n||^2|ℱ_t-1)/nγ^2)^δ/2+δ ≤ C (1/nH^-1/2X_t-1, θ_nX_t-1, θ_n^T H^-1/2)^δ/2+δ where C = d^δ/2+δsup_θ∈Θ𝔼(||ϵ_t, θ||^2 + δ | ℱ_t-1, n)^2/2+δ < ∞ because of Assumptions <ref> and <ref>. Thus, the proof is complete if we can show that sup_θ∈ R_n, 0max_1≤ t ≤ n(1/nH^-1/2X_t-1, θ_nX_t-1, θ_n^T H^-1/2)^δ/2+δ = o_p(1). But this follows from exactly the same argument as in the proof of equation (5) <cit.>. (The multivariate case is essentially the same once (<ref>) is established.) Define the sequence (c_n)_n∈ℕ⊂ℝ^d given by c_n, i=e^(C_n)_ii/n for 1 ≤ i ≤ d. By assumption, we have c_n→ 0 for n→∞ so we can assume without loss of generality that max_i c_n, i≤ 1 and min_i c_n, i > 0 and, by potentially passing to a sub sequence, that c_n is monotonically decreasing. For each n, we can then find k_n∈ℕ such that 1 - k_n^-η≤min_i e^C_n, ii/k_n≤max_i e^C_n, ii/k_n≤ 1 - log k_n/k_n. Passing to another sub sequence if necessary, we may assume that k_n is strictly increasing. Now, define sequences (λ_k)_k∈ℕ⊂ℂ^d and (Σ_k)_k∈ℕ⊂ℝ^d× d such that |λ_k, i| = 1-log(k)/k and Σ_k = I for k < k_0 and |λ_k, i| = e^C_n, ii/k_n and Σ_k = Ω_n for k_n ≤ k < k_n+1. We then have 1-k^-η≤ 1 - k_n^-η≤min_i |λ_n, i|≤max_i |λ_n, i| ≤ 1 - log k_n/k_n≤ 1 - log k/k. If we define (θ_k)_k∈ℕ⊂Θ by θ_k = (Γ_k, Σ_k, c) where Γ_k is the diagonal matrix whose diagonal entries are given by λ_k and c∈ℝ_+, above inequalities together with the assumptions on Ω_n imply that θ_k∈ R_k, 1∩ R_k, d for all k. Define S_k = 1/k∑_t=1^k X_t-1, θ_kX_t-1, θ_k^T, T_k = 1/k∑_t=1^k X_t-1, θ_kϵ_t, θ_k, and H_k = 𝔼(S_k). By the triangle inequality, d_BL(G_n^-1/2∫_0^1 J_t, nJ_t, n^T dt G_n^-1/2, I) ≤ d_BL(H_k_n^-1/2S_k_n H_k_n^-1/2, G_n^-1/2∫_0^1 J_t, nJ_t, n^T dt G_n^-1/2) + d_BL(H_k_n^-1/2S_k_n H_k_n^-1/2, I) and d_BL(G_n^-1/2∫_0^1 J_t, n dW_t^T , N) ≤ d_BL(√(k_n)H_k_n^-1/2T_k_nΣ_k_n^-1/2, G_n^-1/2∫_0^1 J_t, n dW_t^T) + d_BL(√(k_n)H_k_n^-1/2T_k_nΣ_k_n^-1/2, N). By equations (<ref>), (<ref>) and (<ref>) along with Theorem <ref>, all four terms on the right hand side tend to 0 for n→∞. §.§ Mixed Asymptotics This section is devoted to the proof of Lemma <ref>. We assume throughout that Assumptions <ref> and <ref> are satisfied and consider only the special case where F_θ = I. For ease of notation we define A = ∫_0^1 J_C, tJ_C, t^T dt, B = ∫_0^1 J_C, tdW_t^T. We hold 1≤ k≤ d-1 fixed throughout and start by partitioning R_n, k. Let r = d-k ≥ 1 and define w(j, l) = (1-|λ_i_k+j|)/(1-|λ_i_k+l|) for 1≤ j < l ≤ r. We introduce the sets U_n, 0 = {θ∈ R_n, k : w(0, 1) ≤ n^-γ/r}, U_n, r = {θ∈ R_n, k: w(0, r) ≥ n^-γ}, U_n, j = {θ∈ R_n, k: w(0, j) ≥ n^-jγ/r, w(j, j+1)≤ n^-γ/r} for j=1,..., r-1. We have R_n, k=⋃_j U_n, j for all n∈ℕ. Indeed, fix n and take some θ∈ R_n, k and define j_0 = min(inf{0≤ j≤ r-1 : w(j, j+1) ≤ n^-γ/r}, r) where we use the convention inf∅ = ∞. If j_0 = 0, then clearly θ∈ U_n, 0 = U_n, j_0. Otherwise, we find that w(0, j_0) = ∏_j=0^j_0-1w(j, j+1) ≥∏_j=0^j_0 - 1n^-γ/r = n^-j_0γ/r so that, again, θ∈ U_n, j_0. Fix some 0≤ j ≤ r. It therefore suffices to show that (<ref>) and (<ref>) hold uniformly over U_n, j. To do so, we need to split the covariance matrices and the normalizing matrix into four blocks. In particular, we write H = [ H_11 H_12; H_21 H_22 ] where H_11 is (k+j)× (k+j) and the other blocks of conforming dimensions. Analogously, S_XX, S_Xϵ, G, A and B can be written as block matrices. Block coordinates are written in the subscript when possible and otherwise in the superscript. For example, S_XX^12 and A_12 are the top right (k+j)×(d-k-j) blocks of S_XX and A. For fixed 1≤ k ≤ d-1 and 0≤ j≤ r, let N∈ℝ^(d-k-j)× d be a random matrix on (Ω, ℱ, ℙ) such that vec(N)∼𝒩(0, I). We have the following block-wise limits lim_n→∞sup_θ∈ U_n, jd_BL(H_11^-1/2S_XX^12H_22^-1/2, 0) = 0 lim_n→∞sup_θ∈ U_n, jd_BL(H_11^-1/2S_XX^11H_11^-1/2, G_11^-1/2A_11G_11^-1/2) = 0 lim_n→∞sup_θ∈ U_n, jd_BL(H_22^-1/2S_XX^22H_22^-1/2, I) = 0. lim_n→∞sup_θ∈ U_n, j d_BL(√(n)H^-1/2_11(S_Xϵ^11, S_Xϵ^12), G_11^-1/2(B_11, B_12)) = 0 lim_n→∞sup_θ∈ U_n, j d_BL(√(n)H^-1/2_22(S_Xϵ^21, S_Xϵ^22), N) = 0 Fix some θ∈ U_n, j. For any i ≤ i_k+j, it holds that |λ_i| ≥ |λ_i_k+j| ≥ 1 - n^-η - γn^jγ/r≥ 1 - n^-η and, for any i ≥ i_k+j, |λ_i| ≤ |λ_i_k| ≤ 1 - n^-η - γ≤ 1 - log n/n. Equations (<ref>), (<ref>), (<ref>) and (<ref>) then follow immediately from the proofs in Sections <ref> and <ref>. For the proof of (<ref>), we first note that S_XX^12 = Γ_11^n S_XX^12Γ_22^n + ∑_t=0^n-1Γ_11^t S_n (Γ_22^t)^T where S_n = S^12_ϵϵ + 1/n(X_n, θX_n, θ^T)_12 - Γ_11S_Xϵ^12 - (S_Xϵ^21)^TΓ_22^T and S_ϵϵ = (∑_t=1^n ϵ_t, θϵ_t, θ^T)/n. An application of Hölder's inequality yields ||H_11^-1/2Γ_11^nS_XX^12(Γ_22^n)^T H_22^-1|| ≤(H_11^-1/2Γ_11^n S_XX^11(Γ_11^n)^TH_11^-1/2)^1/2× (H_22^-1/2Γ_22^n S_XX^22(Γ_22^n)^TH_22^-1/2)^1/2 and it follows from (<ref>) and (<ref>) along with the fact that sup_θ∈ U_n, j||Γ_22^n||=o(1) that sup_θ∈ U_n, j(H_11^-1/2Γ_11^n S_XX^11(Γ_11^n)^TH_11^-1/2) = O_p(1), sup_θ∈ U_n, j(H_22^-1/2Γ_22^n S_XX^22(Γ_22^n)^TH_22^-1/2) = o_p(1) so that sup_θ∈ U_n, j||H_11^-1/2Γ_11^n S_XX^12(Γ_22^n)^T H_22^-1|| = o_p(1). For the second term, we have, for all θ∈ U_n, j, ||H_11^-1/2∑_t=1^n-1Γ_11^t S_n (Γ_22^t)^T H_22^-1/2|| ≤ C_n ||H_22^-1/2||∑_t=0^n-1||H_11^-1/2Γ_11^tΣ_11^-1/2|| where C_n = sup_θ∈ U_n, jsup_t≥ 1||S_n||||Σ_11^1/2||||Γ_22^t||. Hölder's inequality yields ∑_t=0^n-1||H_11^-1/2Γ_11^tΣ_11^-1/2|| ≤(H_11^-1/2∑_t=0^n-1Γ_11^tΣ_11(Γ_11^t)^T H_11^-1/2) so that, by part <ref> of Lemma <ref> and Lemma <ref>, sup_θ∈ U_n, j||H_22^-1/2||∑_t=0^n-1||H_11^-1/2Γ_11^tΣ_11^-1/2|| = o_p(1). Since ||Σ_11^1/2|| and sup_t≥ 1||Γ_22^t|| are uniformly bouded over U_n, j, it therefore suffices to show that sup_θ∈ U_n, j||S_n|| = O_p(1). From Lemma <ref> and part <ref> of Lemma <ref> we have sup_θ∈ U_n, j|| S_ϵϵ + 1/nX_n, θX_n, θ|| = O_p(1) and it follows from (<ref>) and (<ref>) along with the fact that H_11, H_22 = O(n) uniformly over Θ that sup_θ∈Θ||S_Xϵ|| = O_p(1) which completes the proof. We now define H̃ as the block diagonal matrix obtained by deleting the off-diagonal blocks of H. Lemma <ref> determines the limiting behaviour of H̃^-1/2S_XXH̃^-1/2. The next lemma explains why this is sufficient. For fixed 1≤ k ≤ d-1 and 0≤ j≤ r, let H̃ and G̃ be the block-diagonal matrices obtained by deleting the off-diagonal blocks of H and G, respectively. We then have sup_θ∈ U_n, j{||H^-1/2H̃^1/2 - I|| + ||G^-1/2G̃^1/2 - I||} = o(1). It suffices to show that sup_θ∈ U_n, j||H_11^-1/2H_12H_22^-1/2|| = o(1). To do so, we first note that, arguing as in the proof of part <ref> of Lemma <ref>, sup_θ∈ U_n, jσ_min(H_11^-1) = O(1 - |λ_i_k+j|) and, consequently, sup_θ∈ U_n, j||H_11^-1/2(1 - |λ_i_k+j|)^-1/2|| = O(1). Let Λ∈ℝ^(d - k - j)×(d - k - j) be the diagonal matrix satisfying Λ_ll = 1 - |λ_i_k+j+l|. Then, by part <ref> of Lemma <ref>, we have sup_θ∈ U_n, j||H_12Λ^1/2|| = O((1 - |λ_i_k+j+1|)^-1/2) and, for any θ∈ U_n, j, σ_min(Λ^1/2H_22Λ^1/2) ≥σ_min(Σ_22)/n∑_t=1^n-1∑_s=0^t-1σ_min(Λ^1/2Γ^s)^2 ≥σ_min(Σ_22) min_1≤ l ≤ d-k-jΛ_ll/n∑_t=1^n-1∑_s=0^t-1σ_min(J_m_i_k+l(λ_i_k+l)^s)^2. For any 1≤ l ≤ d-k-j, it holds that Λ_ll/n∑_t=1^n-1∑_s=0^t-1σ_min(J_m_i_k+l(λ_i_k+l)^s)^2 ≥n-1/nα if |λ_i_k+l|≤ 1 - α, and otherwise J_m_i_k+l(λ_i_k+l) is diagonal because of Assumption <ref> and therefore Λ_ll/n∑_t=1^n-1∑_s=0^t-1σ_min(J_m_i_k+l(λ_i_k+l)^s)^2 = 1-|λ_i_k+l|/n∑_t=1^n-1∑_s=0^t-1|λ_i_k+j|^2 = 1/1+|λ_i_k+l|(1 - 1 - |λ_i_k+l|^2n/n(1-|λ_i_k+l|^2)). Now, since sup_θ∈ U_n, j|λ_i_k+l|^2n→ 0 and inf_θ∈ U_n, jn(1-|λ_i_k+l|^2)→∞ for n→∞ and 1≤ l ≤ d-k-j, we get that sup_θ∈ U_n, jσ_max(Λ^-1/2H_22^-1Λ^-1/2) = sup_θ∈ U_n, jσ_min(Λ^1/2H_22Λ^1/2)^-1 = O(1) from which it follows that sup_θ∈ U_n, j||Λ^-1/2H_22^-1/2|| = O(1). Combining all these rates yields sup_θ∈ U_n, j||H_11^-1/2H_12H_22^-1/2|| = O(1 - |λ_i_k+j|/1 - |λ_i_k+j+1|)^1/2 = O(n^-γ/2r). With these two lemmas we can complete the proof of (<ref>) and (<ref>). First, for any θ∈ U_n, j, we have ||H^-1/2S_XXH^-1/2 - H̃^-1/2S_XXH̃^-1/2|| ≤ C_n ||H̃^-1/2S_XXH̃^-1/2|| where, by Lemma <ref>, C_n = sup_θ∈ U_n, j||H̃^1/2H^-1/2 - I||(||H̃^1/2H^-1/2|| + √(d)) = o(1). It then follows from Lemma <ref> that sup_θ∈ U_n, j||H^-1/2S_XXH^-1/2 - H̃^-1/2S_XXH̃^-1/2|| = o_p(1) and, similarly, sup_θ∈ U_n, j||G^-1/2AG^-1/2 - G̃^-1/2AG̃^-1/2|| = o_p(1), sup_θ∈ U_n, j||√(n)H^-1/2S_Xϵ - √(n)H̃^-1/2S_Xϵ|| = o_p(1), sup_θ∈ U_n, j||G^-1/2B - G̃^-1/2B|| = o_p(1). Finally, arguing as in the proof of Lemma <ref>, we find lim_n→∞sup_θ∈ U_n, jd_BL(G_11^-1/2A_12G_22^-1/2, 0) = 0 lim_n→∞sup_θ∈ U_n, jd_BL(G_22^-1/2A_22G_22^-1/2, I) = 0 lim_n→∞sup_θ∈ U_n, jd_BL(G_11^-1/2(B_11, B_12), N) = 0 so that (<ref>) and (<ref>) follow from Lemma <ref>. § MARTINGALE LIMIT THEOREMS We start by stating a strong invariance principle for stationary martingale difference arrays due to <cit.>. A martingale difference array is a doubly infinite array, (e_t, n)_t,n ∈ℕ, along with an array of filtrations, (ℱ_t, n)_t,n ∈ℕ, such that, for each n, e_t, n is a martingale difference sequence wrt. ℱ_t, n. Let (e_t, n)_t, n∈ℕ be a stationary ℝ^d-valued martingale difference array wrt. (ℱ_t, n)_t, n∈ℕ such that 𝔼(e_t, ne_t, n^T|ℱ_t-1, n)=𝔼e_0, ne_0, n^T = I a.s. for all t≥ 1, n ∈ℕ and assume there exists some small δ > 0 such that sup_t,n∈ℕ𝔼||e_t,n||^2+δ < ∞. Then, after possibly enlarging (Ω, ℱ, ℙ), there exist a triangular array of random variables, (ρ_t, n)_t≥ 1, n∈ℕ, where each row, (ρ_t, n)_t∈ℕ, is i.i.d. standard normal and such that 𝔼(||sup_1 ≤ k ≤ n||∑_t = 1^k e_t, n - ∑_t = 1^k ρ_t, n||||)=O(n^1/2+δ(log n)^1+δ/2(2+δ)). The proof is essentially the same as the proof of Theorem 2.1 in <cit.> and we will not go into much detail here, but only mention the key steps and how they generalize to the martingale difference array setting. First, we note that under above assumptions, Lemma 4.1 in <cit.> holds for triangular arrays as well. Indeed, the constant C depends only on δ, d and 𝔼||e_1, n||^2+δ and the latter is uniformly bounded over n. After possibly enlarging the initial probability space, we can assume that it is large enough to contain a doubly infinite array (u_t, n)_t, n∈ℕ and a sequence (ρ_1,n)_n∈ℕ such that, for each fixed n, u_t, n is i.i.d. uniform on [0, 1] and independent of e_t, n and ρ_1, n is d-dimensional standard normal independent of e_t, n and u_t, n. Now for each n, we can follow the steps in the proof of Theorem 2.1 in <cit.> and construct a sequence, (ρ_t, n)_t≥ 1, of i.i.d. d-dimensional standard normal random variables satisfying certain inequalities. Now define, for L∈ℕ, D_L, n = sup_l≤ 2^L||∑_i=2^L + 1^2^L + l e_n, t - ρ_n, t||. By the same arguments as in <cit.>, we have, for any N∈ℕ, sup_1≤ k≤ 2^N+1||∑_t=1^k e_t, n - ∑_t=1^k ρ_t, n|| ≤||e_1, n - ρ_1, n|| + ∑_L=0^N-1 D_L, n + D_N, n. Furthermore, they show that the array, ρ_t, n, was constructed in such a way that there exists a constant c_0 depending only on δ, d and 𝔼||e_1, n||^2+δ such that ||D_L, n||_1 ≤ C 2^L/2+δL^1 + δ/2(2+δ) for all L∈ℕ. Since 𝔼||e_1, n||^2+δ is uniformly bounded in n, we may assume that c_0 depends only on δ and d. For 2^N ≤ n < 2^N+1, we have ∑_L=1^N-1 2^L/2+δL^1 + δ/2(2+δ) ≤(log n/log 2)^1 + δ/2(2+δ)∑_L=0^N-12^L/2+δ = (log n/log 2)^1 + δ/2(2+δ)1 - 2^N/2+δ/1 - 2^1/2+δ ≤1/(1-2^1/2+δ)(log 2)^1+δ/2(2+δ)(log n)^1+δ/2(2+δ)(n^1/2+δ + 1) ≤ c_1 n^1/2+δ(log n)^1+δ/2(2+δ) where c_1 does not depend on n. Finally, under the assumptions of the theorem, there exists a constant c_2 not depending on n and such that 𝔼||e_1, n - ρ_1, n|| ≤ c_2. Putting all the pieces together, we find that 𝔼(||sup_1≤ k≤ n||∑_t=1^k e_t, n - ∑_t=1^k ρ_t, n||||) ≤ c_2 + c_0(c_1 + 1)n^1/2+δ(log n)^1+δ/2(2+δ) which is the result we wanted. Next we adapt the weak law of large numbers from Theorem 6 in <cit.> to the multidimensional setting. Let (e_t, n)_t, n∈ℕ be an ℝ^d-valued martingale difference array wrt. (ℱ_t,n)_t, n∈ℕ. Assume there exists δ >0 such that sup_t, n𝔼||e_t, n||^1 + δ < ∞. Then, for any ϵ>0, it holds that ||1/n∑_t=1^n e_t, n|| = o_p(n^1/1+δ - 1 + ϵ). Fix ϵ>0 and let a∈ℝ^𝕕 and ẽ_t, n = a^T e_t, n. By Cauchy-Schwartz, we have sup_t, n ∈ℕ|ẽ_t, n|^1 + δ≤ ||a||^1+δsup_t, n ∈ℕ||e_t, n||^1+δ = C < ∞. Now, define k_n = n^1/1+δ+ϵ. Then, with p = 1 + δ, k_n^-p∑_t=1^n (𝔼|ẽ_t, n|)^p ≤ C k_n^-pn = o(1) and, by Theorem 6 in <cit.>, |1/n∑_t=1^n ẽ_t, n| = o_p(n^1/1 + δ - 1 + ϵ). The result then follows since a was arbitrary. Finally, we need the following well-known martingale difference array central limit theorem (see, e.g., Theorem 1 of Chapter VIII in <cit.>). Let (e_t, n)_t, n∈ℕ be an ℝ^d-valued martingale difference array wrt. (ℱ_t,n)_t, n∈ℕ. Assume that ∑_t=1^n𝔼(e_t, ne_t, n^T| ℱ_t-1, n) →_p I and, for each γ > 0, ∑_t=1^n 𝔼(||e_t, n||^2 1(|| e_t, n|| > γ)|ℱ_t-1, n) →_p 0 for n→∞. Then, ∑_t=1^n e_t, n→_w 𝒩(0, I) for n→∞. § GAUSSIAN APPROXIMATION In this section we detail how the Gaussian approximation described in Section <ref> is achieved. Throughout we assume that X_t, θ and ϵ_t, θ satisfy Assumptions <ref> and <ref> with F_θ = I. The key is Theorem <ref> above. Although it is stated in terms of martingale difference arrays, an easy consequence is the following lemma. We can enlarge the initial probability space such that there exists a family of stochastic processes (ρ_t, θ)_t≥ 1, θ∈Θ where, for each θ, the sequence ρ_t, θ is i.i.d. d-dimensional gaussian with mean 0 and covariance matrix Σ and such that sup_θ∈Θsup_1 ≤ k ≤ n||∑_t = 1^k ϵ_t, θ - ∑_t = 1^k ρ_t, θ|| = o_p(n^1/2-β). for some β>0. By Proposition 8 in <cit.>, it suffices to show, for any sequence (θ_n)_n∈ℕ⊂Θ, there exists an array, (ρ_t, θ_n)_t≥ 1, n∈ℕ, where each row is i.i.d. d-dimensional Gaussian with mean 0 and covariance matrix Σ_n and such that sup_1 ≤ k ≤ n||∑_t = 1^k ϵ_t, θ_n - ∑_t = 1^k ρ_t, θ_n|| = o_p(n^1/2-β). Consider (e_t, n)_t, n∈ℕ given by e_t, n= Σ_n^-1/2ϵ_t, θ_n for all t, n∈ℕ which is a martingale difference array satisfying the assumptions of Theorem <ref> with δ as in Assumption <ref>. Let (ρ_t, n)_t≥ 1, n∈ℕ be its approximating Gaussian sequence as in the statement of Theorem <ref>. Since δ > 0, there exists β>0 such that 1/(2+δ) - 1/2 + β < 0 in which case sup_1 ≤ k ≤ n||∑_t = 1^k ϵ_t, θ_n - ∑_t = 1^k Σ^1/2ρ_t, n|| ≤||Σ_n^1/2||sup_1 ≤ k ≤ n||∑_t = 1^k e_t, n - ∑_t = 1^k ρ_t, n|| = o_p(n^1/2 - β) because Σ_n is bounded. Thus, ρ_t, θ_n=Σ_n^1/2ρ_t, n satisfies (<ref>). For any ϵ > 0, we have n^-1/1 + δ-ϵ∑_t=1^n (ϵ_t, θϵ_t, θ^T - Σ)→_p 0 uniformly over Θ. Fix a, b∈ℝ^d and define ξ_t, θ=a^T(ϵ_t, θϵ_t, θ^T - Σ)b so that ξ_t, θ is a one-dimensional martingale difference sequence for each θ∈Θ. By Cauchy-Schwartz, we have for all t∈ℕ and θ∈Θ |ξ_t, θ| ≤ ||aΣ^1/2||||bΣ^1/2||(||Σ^-1/2ϵ_t, θ||^2 + 1) so that, by assumption, sup_θ∈Θ𝔼|ξ_t, θ|^1+δ/2 < ∞. Theorem <ref> then implies that for any ϵ > 0 such that and any sequence (θ_n)_n∈ℕ, we have |1/n∑_i=1^nξ_t, θ_n| = o_p(n^-ϵ) which, by Proposition 8 in <cit.>, completes the proof. The following is similar to Lemma 4 in <cit.>. It shows that many of the important statistics can be replaced by their Gaussian counterpart. There exists β > 0 such that * sup_θ∈Θsup_1≤ t≤ n||X_t, θ/√(n) - Y_t, θ/√(n)|| = o_p(n^-β), * sup_θ∈Θsup_1≤ t≤ n{ ||X_t, θ/√(n)|| + ||Y_t, θ/√(n)||} = O_p(1), * sup_θ∈Θ||∑_t=1^n∑_s=1^t ϵ_s, θϵ_t, θ^T/n - ρ_s, θρ_t, θ^T/n|| = o_p(n^-β). * sup_θ∈ R_n, d||H^-1/2(S_XX- S_YY) H^-1/2|| = o_p(n^1 - η - β). * sup_θ∈ R_n, d||√(n)H^-1/2(S_Xϵ- S_Yρ)|| = o_p(n^3/2(1-η) - β). For the proof of part (a) we use summation by parts to write X_t, θ = ∑_s=1^tΓ^t-sϵ_s, θ = ∑_s=1^t ϵ_s, θ - ∑_s=1^t-1(Γ^t - s + 1 - Γ^t - s)∑_k=1^s+1ϵ_k, θ = ∑_s=1^t ϵ_s, θ - (Γ - I)∑_s=1^t-1Γ^t - s∑_k=1^s+1ϵ_k, θ and a similar expression holds for Y_t, θ. Thus, sup_θ∈Θsup_1≤ t≤ n1/√(n)||X_t, θ - Y_t, θ||≤sup_θ∈Θsup_1≤ t ≤ n||(Γ - I)∑_s=1^t-1Γ^t-s + I|| ×1/√(n)sup_θ∈Θsup_1≤ t ≤ n||∑_s=1^t ϵ_s, θ - ∑_s=1^t ρ_s, θ||. Since the first term on the right hand side is bounded, Lemma <ref> yields (a). To prove (b), we start with the same expression for Y_t, θ as above. This gives us sup_θ∈Θsup_1≤ t≤ nY_t, θ/√(n)≤sup_θ∈Θsup_1≤ t ≤ n||(Γ - I)∑_s=1^t-1Γ^t-s + I||1/√(n)sup_θ∈Θsup_1≤ t ≤ n||∑_s=1^t ρ_s, θ||. Again, the first term on the right hand side is bounded uniformly over n. For the second term, we have 1/√(n)sup_θ∈Θsup_1≤ t ≤ n||∑_s=1^t ρ_s, θ|| ≤sup_θ∈Θ||Σ^1/2||sup_1≤ t ≤ n||1/√(n)∑_s=1^t Σ^-1/2ρ_s, θ|| ≤ C sup_θ∈Θsup_1≤ t ≤ n||1/√(n)∑_s=1^t Σ^-1/2ρ_s, θ|| = O_p(1) since Σ^-1/2ρ_s, θ is i.i.d. standard normal for all θ∈Θ. The result then follows from (a). For (c) we start with 1/n∑_t=1^n∑_s=1^t ϵ_s, θϵ_t, θ^T = 1/2n(∑_t=1^n ϵ_t, θ)(∑_t=1^n ϵ_t, θ)^T + 1/2n∑_t=1^n(ϵ_t, θϵ_t, θ^T - Σ) + 1/2Σ. Lemma <ref> then yields sup_θ∈Θ||1/n∑_t=1^n∑_s=1^t ϵ_s, θϵ_t, θ^T - 1/2n(∑_t=1^n ϵ_t, θ)(∑_t=1^n ϵ_t, θ)^T - 1/2Σ|| = o_p(n^-β) and a similar argument shows that sup_θ∈Θ||1/n∑_t=1^n∑_s=1^t ρ_s, θρ_t, θ^T - 1/2n(∑_t=1^n ρ_t, θ)(∑_t=1^n ρ_t, θ)^T - 1/2Σ|| = o_p(n^-β). Thus, sup_θ∈Θ||1/n∑_t=1^n∑_s=1^t ϵ_s, θϵ_t, θ^T - ρ_s, θρ_t, θ^T|| = sup_θ∈Θ||1/n∑_t=1^n∑_s=1^n ϵ_s, θϵ_t, θ^T - ρ_s, θρ_t, θ^T|| + o_p(n^-β) ≤ C_nsup_θ∈Θ||1/√(n)∑_t=1^nϵ_t, θ - ρ_t, θ|| + o_p(n^-β) where C_n = sup_θ∈Θ{||1/√(n)∑_t=1^n ϵ_t, θ|| + ||1/√(n)∑_t=1^n ρ_t, θ||}. Since the law of 1/√(n)∑_t=1^n ρ_t, θ is equal to a d-dimensional Gaussian with mean 0 and covariance matrix Σ, Assumption <ref> yields sup_θ∈Θ||1/√(n)∑_t=1^n ρ_t, θ|| = O_p(1) and, by Lemma <ref>, we get C_n = O_p(1) and therefore also the result in (c). To prove (d), we first note that sup_θ∈Θ1/n||S_XX - S_YY|| ≤ C_n sup_θ∈Θsup_1≤ t≤ n1/√(n)||X_t, θ - Y_t, θ|| where C_n = sup_θ∈Θ1/√(n)(sup_1≤ t≤ n||X_t, θ|| + sup_1≤ t≤ n|| Y_t, θ||). From Lemma <ref>, we find that sup_θ∈ R_n, dσ_max(H^-1/2) = sup_θ∈ R_n, d(σ_min(H))^-1/2 = O(n^-η/2) so, by part (a) and (b), we get the result in (d). For the proof of (e) we again use summation by parts to write S_Xϵ = 1/n(X_n-1, θ∑_t=1^n ϵ_t, θ^T - ∑_t=1^n-1(X_t, θ - X_t-1, θ)∑_s=1^t ϵ_s, θ^T) = 1/n(X_n-1, θ∑_t=1^n ϵ_t, θ^T - (Γ - I)∑_t=1^n-1∑_s=1^t X_t-1, θϵ_s, θ^T - ∑_t=1^n-1∑_s=1^t ϵ_t, θϵ_s, θ^T), S_Yρ = 1/n(Y_n-1, θ∑_t=1^n ρ_t, θ^T - ∑_t=1^n-1(Y_t, θ - Y_t-1, θ)∑_s=1^t ρ_s, θ^T) = 1/n(Y_n-1, θ∑_t=1^n ρ_t, θ^T - (Γ - I)∑_t=1^n-1∑_s=1^t Y_t-1, θρ_s, θ^T - ∑_t=1^n-1∑_s=1^t ρ_t, θρ_s, θ^T). We have, for all 1≤ t ≤ n, sup_θ∈Θ||1/n∑_s=1^t X_t-1, θϵ_s, θ^T - Y_t-1, θρ_s, θ^T|| ≤ C_n sup_θ∈Θ{sup_1≤ k≤ n1/√(n)(||X_k, θ - Y_k, θ|| + ||∑_s=1^k ϵ_s, θ - ρ_s, θ||)} where C_n = sup_θ∈Θ{sup_1≤ k ≤ n||1/√(n)X_k, θ|| + sup_1≤ k≤ n||1/√(n)∑_s=1^k ρ_s, θ||} = O_p(1) by part (b). From part (a) and Lemma <ref> it then follows that sup_θ∈Θ||1/n∑_s=1^t X_t-1, θϵ_s, θ^T - Y_t-1, θρ_s, θ^T|| = o_p(n^-β). Clearly, we also have sup_θ∈ R_n, dn||(Γ - I)|| = o_p(n^1-η) so that sup_θ∈ R_n, d||1/n(Γ - I)∑_t=1^n-1∑_s=1^t X_t-1, θϵ_s, θ^T - Y_t-1, θρ_s, θ|| = o_p(n^1-η - β) and, by part (c), sup_θ∈ R_n, d|| S_Xϵ - S_Yρ|| = o_p(n^1-η -β). The result then follows from (<ref>). § CONFIDENCE REGIONS This section captures some of the more technical details omitted from Section <ref>. Validity of CR_a(α) and CR_b(α) is a fairly straightforward consequence of the fact that t̂^2_Γ can be uniformly approximated by t^2_Γ and t̃^2_Γ both of which have continuous distributions. Let (θ_n)_n∈ℕ⊂Θ and write t̂^2_Γ_n and t^2_Γ_n for the corresponding sequence of test-statistics and the corresponding uniform approximations defined in (<ref>) with this specific choice of θ. By continuity of the distribution of t^2_Γ_n, Lemma 1 in <cit.>, and the Portmanteau lemma, we find that sup_t∈ℝ|ℙ(t̂_Γ_n≤ t) - ℙ(t^2_Γ_n≤ t)| = o(1). We therefore find that lim inf_n→∞ℙ(t̂^2_Γ_n≤ q_n, Γ_n(1-α)) ≥ 1-α - lim sup_n→∞sup_t∈ℝ|ℙ(t̂_Γ_n≤ t) - ℙ(t^2_Γ_n≤ t)| = 1-α from which it follows that CR_a(α) is uniformly asymptotically valid. Now observe that Theorem <ref> can equally well be applied to show that t̃^2_Γ→_w t^2_Γ uniformly over Θ so that the same argument proves validity of CR_b(α). §.§ Predictive regression For each θ∈Θ_P with γ and Γ̃ the corresponding autoregressive coefficients, define the events A_θ = {ω∈Ω : Γ̃∉ CR(α_1, ω)}, B_θ = {ω∈Ω : γ∉ C_γ|Γ̃(α_2, ω)} where the dependence of the confidence regions on ω is made explicit in the notation. If ω∈Ω is such that γ∉ CI_γ(α_1, α_2, ω), then we must either have ω∈ A_θ or ω∈ B_θ implying, by Bonferroni's inequality, ℙ(γ∉ CI_γ(α_1, α_2)) ≤ℙ(A_θ) + ℙ(B_θ). It follows by assumption that lim sup_n→∞sup_θ∈Θ_Pℙ(A_θ) ≤α_1 so that lim inf_n→∞inf_θ∈Θ_Pℙ(γ∈ CI_γ(α_1, α_2)) ≥ 1 - α_1 - lim sup_n→∞sup_θ∈Θ_Pℙ(B_θ). The proof is complete if we can show that σ̂^-2_Y t̂^2_γ|Γ̃→_w χ^2_d-1 uniformly over Θ_P since this would imply that lim sup_n→∞sup_θ∈Θ_Pℙ(B_θ) ≤α_2 by the same argument as in the proof of Theorem <ref>. Defining ρ̃_t = ρ_t - Σ_YXΣ_X^-1ϵ̃_̃t̃, we have in obvious notation t̂^2_γ|Γ̃ = S_ρ̃X̃S_X̃X̃^-1S_X̃ρ̃ + R_n where R_n = n(r_nS_ϵ̃X̃S_X̃X̃S_X̃ϵ̃r_n^T + r_nS_ϵ̃X̃S_X̃X̃S_X̃ρ̃ + S_ρ̃X̃S_X̃X̃S_X̃ϵ̃r_n^T) with r_n = Σ̂_YXΣ̂_X^-1 - Σ_YXΣ_X^-1. Since Σ̂ is uniformly consistent and Σ is uniformly invertible over Θ_P we see that r_n→_p 0 uniformly over Θ_P. This follows from the uniform versions of the continuous mapping theorem and Slutsky's Lemma (see, e.g., Proposition 9 and Proposition 15 in <cit.>). Furthermore, all the matrix products in the expression above converge uniformly in distribution over Θ_P by Theorem <ref>. Thus, R_n→_p 0 uniformly over Θ_P. Now, let Σ̃∈ℝ^d× d be given by Σ̃_11 = Σ_Y - δ^TΣ_Xδ, (Σ̃_1i)_2≤ i≤ d=(Σ̃_i1)_2≤ i≤ d = 0, and (Σ̃_ij)_2≤ i, j≤ d = Σ_X. Similar to (<ref>), we then find S_ρ̃X̃ S_X̃X̃^-1 S_X̃ρ̃→_w||(Σ̃_Y - δ̃^TΣ̃_Xδ̃)^1/2Z||^2 uniformly over Θ_P with Z a (d-1)-dimensional standard normal random variable. But then, since Σ̃_Y - δ̃^TΣ̃_Xδ̃ = Σ_Y - δ^TΣ_Xδ which, in turn, is uniformly estimated by σ̂^2_Y, this completes the proof. §.§ Lag augmentation We shall first prove that √(n)vec(Γ̂_LA - Γ)→_w 𝒩(0, Σ^-1⊗Σ) uniformly over Θ for n→∞. In doing so, write S_ϵ X'= ∑_t=2^n ϵ_tX_t-2^T/n, S_X'X'=∑_t=2X_t-2X_t-2^T/n, S_ϵ'X' = ∑_t=2ϵ_t-1X_t-2^T/n, S_ϵ'ϵ'=∑_t=2^nϵ_t-1ϵ_t-1^T, and S_ϵϵ'=∑_t=2^nϵ_tϵ_t-1^T. Then, Γ̂_LA - Γ = S_ϵX̅ S_X̅X̅^-1 D = (S_ϵ X - S_ϵ X'S_X'X'^-1(S_X'X'Γ^T + S_X'ϵ'))(S_ϵ'ϵ' - S_ϵ'X'S_X'X'^-1S_X'ϵ')^-1 = (S_ϵϵ' - S_ϵ X'S_X'X'^-1S_X'ϵ')(S_ϵ'ϵ' - S_ϵ'X'S_X'X'^-1S_X'ϵ')^-1 where we used the relation X_t-1 = Γ X_t-2 + ϵ_t-1 multiple times. By Theorem <ref>, S_ϵ X'S_X'X'^-1S_X'ϵ'→_p 0 and S_ϵ' X'S_X'X'^-1S_X'ϵ'→_p 0 for n→∞ uniformly over Θ. Furthermore, Theorem <ref> and <ref> yield S_ϵ'ϵ'→_p Σ and √(n)vec(S_ϵϵ')→_w 𝒩(0, Σ⊗Σ) for n→∞ uniformly over Θ. Finally, since Σ is uniformly invertible and bounded on Θ, the uniform versions of the continuous mapping theorem and Slutsky's Lemma (Proposition 9 and Proposition 15 in <cit.>), yield the desired result. The first result then follows from the fact that Σ̂ is a uniformly consistent estimator of Σ and another application of the continous mapping theorem plus Slutsky's Lemma. Asymptotic uniform level of the confidence interval is proven as above noting that χ^2_1 is absolutely continuous wrt. Lebesgue measure. §.§ IVX The proof of Theorem <ref> that we present here is conceptually different from the proofs presented so far. We rely on the theory developed in <cit.>, but since we do not require that all roots approach unity at the same rate, there are some extra difficulties that need to be dealt with. In particular, we need to employ a different normalization in obtaining the asymptotics of S_ZZ and S_ϵ Z. Furthermore, Theorem <ref> shows that the suggested IVX approach is truly uniformly valid (at least over the suggested parameter space, Θ). The first lemma is of a technical nature. Let (θ_n)_n∈ℕ⊂Θ satisfy Assumptions <ref> and <ref> with F_θ_n=I and β∈(0, 1). Then, there exist 0≤ r≤ d, (k_n)_n∈ℕ⊂ℕ strictly increasing, and (θ̃)_n∈ℕ⊂Θ such that * θ_k_n = θ̃_k_n, ∀ n∈ℕ, * n^β (1-Γ̃_n, ii)→κ_i∈ℂ, |κ_i|∈[0,1], for 1≤ i ≤ r, * n^-β (1-Γ̃_n, ii)^-1→κ_i∈ℂ, |κ_i|∈[0,1], for r+1≤ i ≤ d, * θ̃_n→θ∈Θ, where θ̃_n = (Γ̃_n, Σ̃_n, ·) and all limits are taken as n→∞. Fix (θ_n)_n∈ℕ⊂Θ and β∈(0, 1). For each n∈ℕ, let 0≤ r_n≤ d be such that |n^β(1-Γ_n, ii)|≤ 1 for 1≤ i≤ r_n and |n^-β(1-Γ_n, ii)^-1|≤ 1 otherwise. Then, (r_n)_n∈ℕ is a sequence in {0, 1,..., d} so that, by compactness, it has a convergent sub-sequence. In other words, there exists a sub-sequence, (θ_n_k)_k∈ℕ, and 0≤ r≤ d such that |(n_k)^β (1 - Γ_n_k, ii)| ≤ 1 for 1≤ i≤ r and |(n_k)^-β (1 - Γ_n_k, ii)^-1| ≤ 1 otherwise. By Bolzano-Weierstrass, we may assume without loss of generality (passing to another sub-sequence if necessary) that there exists κ∈ℂ^d with |κ_i|≤ 1 and such that (n_k)^β(1 - Γ_n_k, ii)→κ_i, for 1≤ i ≤ r (n_k)^-β(1 - Γ_n_k, ii)^-1→κ_i, for r+1≤ i ≤ d for k→∞. By another compactness argument, we can furthermore choose the sub-sequence such that θ_n_k→θ = (Γ, Σ, c)∈Θ. Now, take some δ∈(0, β), let 0≤ r_1≤ r_2≤ d be such that |κ_i|>0 for r_1< i ≤ r_2 and κ_i = 0 otherwise, and define the diagonal matrix C_n∈ℂ^d× d by C_n, ii = n^-δ, if i≤ r_1, κ_i, if r_1< i ≤ r, κ_i^-1, if r < i ≤ r_2, n^δ, otherwise. Since we must have (Γ_n, ij)_1≤ i, j≤ r_2 = I_r_2, we find that Γ_n' = Γ - n^-βC_n satisfies <ref> and <ref> with Γ_n'→Γ for n→∞. Finally, let Γ̃_n = Γ_n_k if n=n_k for some k∈ℕ and Γ̃_n = Γ_n' otherwise and Σ̃_n = Σ_n_k and c̃_n = c_n_k for n_k≤ n < n_k+1. Then, θ̃_n=(Γ̃_n, Σ̃_n, c̃_n) satisfies all the conditions. Sequences of parameters like θ̃_n in the above Lemma fit nicely into the framework of <cit.>. We can adapt their results to this more general setup. Fix some β∈(0, 1) and consider a sequence (θ_n)_n∈ℕ⊂Θ such that conditions <ref> and <ref> are satisfied for some 0≤ r≤ d and κ∈ℂ^d with |κ_i|≤ 1. For such a sequence, we can define the integers 0≤ r_1≤ r_2≤ d as in the proof above along with the diagonal matrices D_n∈ℂ^d× d given by D_n, ii = n^-β for 1≤ i≤ r_2 and D_n, ii = (1 - |Γ_n, ii|) otherwise. This normalization is sufficiently flexible to ensure convergence of the relevant sample covariance matrices. Let β∈(1/2, 1) and (θ_n)_n∈ℕ⊂Θ be a sequence of parameters satisfying Assumptions <ref> and <ref> with F_θ_n = I as well as <ref>, <ref>, and <ref> of Lemma <ref> for some 0≤ r ≤ d, κ∈ℂ^d with |κ_i|≤ 1, and θ∈Θ. For (D_n)_n∈ℕ as defined above, there exists a sequence of positive definite matrices (Σ_Z, n)_n∈ℕ such that lim sup_n→∞{σ_min(Σ_Z, n)^-1 + σ_max(Σ_Z, n)} < ∞ and the following holds for any ϵ > 0 lim_n→∞ℙ(||D_n^1/2S_ZZD_n^1/2 - Σ_Z, n|| > ϵ) = 0 and lim_n→∞d_BL(√(n)Σ_Z,n^-1/2D_n^1/2S_ZϵΣ^-1/2, V) = 0 where vec(V)∼𝒩(0, I). Throughout the proof we write matrices as 3× 3 block matrices such that the top-left block is r_1× r_1, the middle block is (r_2-r_1)× (r_2-r_1), and the bottom-left block is (d-r_2)×(d-r_2). We use a superscript to denote the block index, e.g., S_ZZ^13 denotes the top-right block of S_ZZ. Let Z̃_t = ∑_s=1^t (1-n^-β)^t-sϵ_s and ψ_t = ∑_s=1^t (1-n^-β)^t-sX_s-1 so that Z_t = Z̃_t + (Γ_n - I)ψ_t. Let Λ_n be the diagonal matrix such that Λ_n, ii = 1 - |Γ_n, ii| for 1≤ i ≤ d. Then, since Γ_n and I-Λ_n commute, with C denoting some generic constant not depending on t or n, 𝔼||(I-Λ_n)ψ_t||^2 =∑_i,j=1^t(1-n^-β)^2t-i-j((I-Λ_n)𝔼(X_i-1X_j-1^T)(I-Λ_n)) ≤ C∑_i, j=1^t (1-n^-β)^2t-i-j||Γ^i∨ j - i∧ j(I-Λ_n)|| ≤ C∑_i=0^t-1(1-n^-β)^i∑_j=0^i-1||((1-n^-β)Γ)^j(I-Λ_n)|| where a∨ b = max{a, b} and a∧ b = min{a, b} and the first inequality follows from Lemma <ref> and the Cauchy-Schwartz inequality. This inequality yields a result equivalent to equation (40) in <cit.>. In particular, we deduce that sup_1≤ t ≤ n𝔼||(Γ_n - I)ψ_t||^2 = o(n) from which it follows that S_Zϵ = S_Z̃ϵ + o_p(1). We first prove (<ref>). For ease of notation, we write S_n = D_n^1/2S_ZZD_n^1/2. Since D_n^11=n^-β I_r_1, essentially the same proof as that of Lemma 3.1.(iii) in <cit.> using (<ref>) shows that S_n^11 = n^-β S^11_Z̃Z̃ + o_p(1) = 1/2Σ^11 + o_p(1), where the latter equality follows from Theorem <ref> and the fact that n^-β𝔼(S_Z̃Z̃) →Σ/2 for n→∞. Similarly, the proof of Lemma 3.5.(ii) in <cit.> can be adapted to show that S_n^33 = (D_n^33)^1/2 S^33_XX(D_n^33)^1/2 + o_p(1) = Σ^33_X, n + o_p(1), where the latter equality follows from Theorem <ref> and Lemma <ref> and Σ^33_X, n is defined as in Lemma <ref> but emphasizing the dependence on n. For the middle block, using the recursive relations Z_t = (1-n^-β)Z_t-1 + Δ X_t and Δ X_t = (Γ_n - I)X_t-1 + ϵ_t, we can write (1 - (1-n^-β)^2)S^22_ZZ = S^22_Δ X Z + S^22_Z Δ X + S^22_Δ X Δ X + o_p(1). It follows from Theorem <ref>, that S^22_Δ X Δ X = Σ^22 + o_p(1). For the other two terms, we use (<ref>) and write S^22_ZΔ X= S^22_Z̃ϵ + S^22_ZX(Γ_n^22 - I)^T + o_p(1). Using the recursive relations and (<ref>) once more yields (I - (1-n^-β)Γ^22_n)S^22_XZ = S^22_Xϵ + S^22_ϵZ̃ + S^22_ϵϵ + S^22_XX(Γ^22_n - I) + o_p(1) It follows from Theorem <ref> that the first two terms tend to 0 in probability for n→∞ and Σ^22_ϵϵ = Σ^22 + o_p(1). By Assumption <ref>, for n large enough and i≤ r_2, 1 - |Γ_n, ii| = |1 - |Γ_n, ii|| ≤|1 - Γ_n, ii| ≤ 1 - |Γ_n, ii|, i.e., |1 - Γ_n, ii| = 1 - |Γ_n, ii| and, in particular, Γ_n, ii∈ℝ and κ∈ℝ. Thus, if we define K∈ℝ^(r_2 - r_1)×(r_2 - r_1) diagonal with K_ii = κ_i for i≤ r_2 - r and K_ii = κ_i^-1 otherwise, we get n^β(I - Γ^22_n) → K n^β(Λ_n^22 - I) → K n^β(I - (1-n^-β)Γ_n^22) → K + I for n→∞. Lemma <ref> then yields (Γ_n^22-I)S_XZ^22=(K+I)^-1(KΣ^22 + K^1/2Σ_X, n^22K^1/2) + o_p(1). and (I-Λ^22_n)^1/2⊗ (I-Λ^22_n)^1/2(I-Γ_n^22⊗Γ_n^22) → (K⊗ K)^1/2(I⊗ K + K⊗ I)^-1 for n→∞ so that (keeping in mind that K_ii>0 for all i) Σ^22_X, n→ K^1/2∫_0^∞e^-sKΣ^22e^-sK ds K^1/2 = K^1/2Ω^22K^1/2. Then, using the relation Σ^22 - Ω^22K = KΩ^22, the limiting expression simplifies to (Γ_n^22 - I)S_XZ^22 = (K+I)^-1K^2Σ^22. Finally, since n^β(1-(1-n^-β)^2) = 2 + o(1) and D_n^22 = n^-βI_r_2-r_1, we find S_n^22 = 1/2(Σ^22 + (K+I)^-1K^2Ω^22 + Ω^22K^2(K+I)^-1) + o_p(1) = 1/2((K+I)^-1KΩ^22 + Ω^22K(K+I)^-1) + o_p(1) = 1/2(I+K)^-1(2KΩ^22K + Σ)(I+K)^-1 + o_p(1). We have yet to characterize the asymptotic behaviour of the off-diagonal blocks. First, note that by (23) in <cit.> and (<ref>), we get S_ZZ^32 - S_XZ^32 = -n^-β(S_ψψ^32(Γ_n^22 - I) - S_ψZ̃^32) so that (<ref>) and Theorem <ref> yield S_n^32 - (D_n^33)^1/2S_XZ^32(D_n^22)^1/2 = o_p(1). Similar to above, we have (I - (1-n^-β)Γ_n^33)S_XZ^32 = S^32_XX(Γ_n^22 - I) + o_p(1). But then, because n^-β(I - Γ_n^33)=o(1), we find that n^-β/2(I-Λ_n^33)^1/2(I - (1 - n^-β)Γ_n^33)^-1 = o(1) and, by Lemma <ref> and Theorem <ref>, S_XX^32(Γ_n^22 - I) = o_p(1). Thus, S_n^32 = o_p(1). A similar argument show that S_n^31 = o_p(1) so that the only block left is S_n^21. As a consequence of (<ref>), we have n^-β||S_ZZ^12 - S_Z̃Z^12|| = n^-β||(Γ_n^11-I)S_ψψ^12(Γ_n^22-I) + (Γ_n^11-I)S_ψZ̃^12|| and arguments like the one employed in the proof of Lemma 3.1 in <cit.> in combination with (<ref>) show that the right hand side is o_p(1). Using the recursive relations for Z_t, Z̃_t, and Δ X_t in combination with (<ref>) and Theorem <ref>, we have (1-(1-n^-β)^2)S_Z̃Z^12 =S^12_ϵ Z + S_Z̃Δ X^12 + S_ϵΔ X^12 + o_p(1) = S_Z̃X^12(Γ_n^22 - I) + S^12_ϵϵ + o_p(1). An application of Lemma <ref> and Theorem <ref> yields S_Z̃X^12 = Ω^12_n K^1/2 + o_p(1) where Ω^12_n = Σ^12(I - (1-n^-β)Γ_n^22)n^-β/2(I - Λ_n^22)^-1/2→Σ^12K^1/2(I + K)^-1 for n→∞. In conclusion, since D_n^22 = n^-βI_r_2-r_1 and D_n^33 = n^-βI_r_1, we get S_n^12 = 1/2(Σ^12 + Σ^12K(I+K)^-1) + o_p(1) = 1/2Σ^12 + o_p(1). Collecting all the limiting expressions, we define K̅ = [ I_r_1 0; 0 K ], Ω = ∫_0^∞ e^-sK̅[ Σ^11 Σ^12; Σ^21 Σ^22 ] e^-sK̅ds and observe that Σ^11 = ((I+K̅)^-1(2K̅ΩK̅ + Σ)(I+K̅)^-1)^11 Σ^12 = ((I+K̅)^-1(2K̅ΩK̅ + Σ)(I+K̅)^-1)^12. so that S_n = Σ_Z, n + o_p(1) with Σ_Z, n = 1/2[ (I+K̅)^-1(2K̅ΩK̅ + Σ)(I+K̅)^-1 0; 0 Σ^33_X, n ]. To see that Σ_Z, n is asymptotically invertible and bounded simply note that K̅_ii∈ [0, 1] for all 1≤ i ≤ r_2 and Σ is positive definite. Therefore, the top left block of Σ_Z, n is some fixed positive definite matrix for all n∈ℕ. The result then follows from Lemma <ref>. Once (<ref>) has been established, the proof of (<ref>) is completely analogous to the proof of (<ref>). Fix some β∈(1/2, 1) and a sequence (θ_n)_n∈ℕ⊂Θ so that θ_0, n=(Γ_0, n, Σ_0, n, ·) = (F_θ_nΓ_n F_θ_n^-1, F_θ_nΣ_n F_θ_n^-1, ·) satisfies <ref>. Now, by Lemma <ref> and <ref>, there exists a sub-sequence (θ_n_k)_k∈ℕ such that t̂^2_IV, Γ_n_k = nS_ϵ ZS_ZZ^-1S_Zϵ = nS_ϵ ZF_θ_n_k^T(F_θ_n_kS_ZZF_θ_n_k^T)^-1F_θ_n_kS_Zϵ→_w χ^2_d^2 for k→∞. This proves that t̂^2_IV, Γ_n→_w χ^2_d^2 for any sequence (θ_n)_n∈ℕ⊂Θ from which uniform convergence follows by Lemma 1 in <cit.>. To prove uniform asymptotic level of the corresponding confidence region, note that the limiting distribution is continuous with respect to Lebesgue measure so that a similar argument as before works in this case. § SIMULATIONS §.§ Confidence intervals For each n∈{50, 75, 100} and d∈{3, 4, 5} the simulation experiment is repeated 1000 times. In each repetition, for i,j=1,..., d, we draw U_ij∼Unif([0, 1]) and set Γ= U^-1Λ_n U where Λ_n∈ℝ^d× d is diagonal with Λ_n, 11 = 1 and Λ_n, ii = 1 - (1/n)^1/(i-1) for i=2,..., d. We then sample ϵ_t∼𝒩(0, Σ) i.i.d. for t=1,...,n with Σ = 1/2(I + 11^T) where 1 = (1,...,1)^T∈ℝ^d and let X_t = Γ X_t-1 + ϵ_t for  t=1,..., n, X_0 = 0. X_t is a sample from a VAR(1) process with θ = (Γ, Σ, ·). We then compute CI_b, CI_IV, and CI_LA for this sample and record the length of each confidence interval and whether it contains Γ_11. §.§ Predictive regression testing For both simulation experiments we fix d=4 and α=0.1. This implies that Γ̃∈ℝ^3× 3. The two regimes correspond to two different choices of Γ̃: * Mixed Regime: In this setting Γ̃ is chosen as above, that is, with roots of differing proximity to unity and with random eigenvectors sampled anew for every simulation run. * Non-stationary Regime: In this setting we set Γ̃=I so that X̃_t is a random walk. In both regimes the errors are i.i.d. Gaussian with covariance matrix Σ as given above. To obtain Figure <ref>, we do the following: For each n∈{10, 20,..., 200}, we draw two samples X_t from the VAR(1) processes given by the two choices of Γ̃ and under the null H_0:γ=0. We then compute the three tests on both samples recording whether the null was rejected or not. This is repeated 1000 times and the rejection rate is the proportion of times the null was rejected across all simulations. For Figure <ref>, we do essentially the same thing except that we now fix n=100 and perform the experiment across different choices of γ≠ 0. In particular, we run the experiment for γ = δ1, δ∈{0.005, 0.01,..., 0.1} and record the proportion of times the null was rejected across all 1000 simulations. §.§ EAM We present a slightly generalized version of the EAM-algorithm. EAM stands for Evaluation-Approximation-Maximization and the algorithm can more or less be split into three steps. It is an algorithm for solving problems of the following form sup f(x) s.t. g(x) ≤ c(x) over x∈𝒳⊂ℝ^p where f, g, and c are fixed scalar functions sufficiently smooth and satisfying certain requirements. In <cit.> it is required that f(x)=v^Tx for some v∈ℝ^p. Here we only require that f(x) is convex and twice continuously differentiable on 𝒳. We assume that c is costly to evaluate. Without going into too much detail, the algorithm proceeds as follows: * Initialization: Randomly sample initial points x^(1),..., x^(k) from 𝒳 and evaluate c(x^(i)) for i=1,..., k. Set L=k. * Iterate the following three steps until convergence: * E-step: Evaluate c(x^(L)) and pick current optimum y^*, L = max{f(x^(i)) : g(x^(i)) ≤ c(x^(i)), i=1,...,L } * A-step: Approximate x↦ c(x) by a Gaussian process regression model, with mean μ, constant variance σ^2, and covariance kernel K_β(x - x') = exp(-∑_i=1^p |x_i - x'_i|^2/β_i), fitted on (c(x^(i)), x^(i)), i=1,..., L. Fitting the model yields the mean function c_L(x) and the variance function s_L(x) as well as the fitted parameters μ̂_L, σ̂_L, β̂_L. * M-step: With probability 1-ϵ, let x^(L+1) = _x∈𝒳 EI_L(x) and with probability ϵ draw x^(L+1) randomly from 𝒳. Set L = L + 1. EI_L(x) is the expected improvement function and it is given by EI_L(x)=(f(x) - y^*, L)_+(1 - Φ(g(x) - c_L(x)/σ̂_L s_L(x))) where ·_+ = max{·, 0} and Φ is the standard normal CDF. Note that the optimization problem in the M-step can be reformulated as a constrained optimization problem with smooth objective function and smooth constraints for which all derivatives are known and it can therefore be solved with standard solvers. A key observation is that we only evaluate c once per iteration. In practice this results in far fewer evaluations of c when compared to, say, grid methods. This algorithm was intended to compute confidence intervals that arise as projections of confidence regions exactly as is the case for CI_b. In this case we would simply take x = Γ and let f(Γ)=Γ_11, g(Γ)=t̂^2_Γ, and c(Γ)=q̃_n, Γ(α). It does not really matter that Γ is a matrix since we can just vectorize it and redefine all the functions correspondingly. This would give us the upper bound of CI_b and the lower bound can be found by taking f(Γ)=-Γ_11. Similarly, the EAM-algorithm can be used to compute ϕ_b. This is done by letting x=Γ̃ and then f(Γ̃)=t̂^2_0|Γ̃ with g and c as before, but for Γ̃ instead of Γ. Note that in both cases f and g are polynomials of vec(x) and are therefore smooth with known derivatives of all orders.[All code used for the simulations can be found at <https://github.com/cholberg/unif_inf_var>. Our implementation of the EAM-algorithm is based on <cit.>.] § SUPPLEMENTARY RESULTS Here we give some supplementary results that are useful throughout. We assume for the entirity of this section that Assumptions <ref> and <ref> are true with the restriction F_θ = I. The first lemma collects some convergence results related to the normalizing matrix, H. We have * lim inf_n→∞inf_θ∈Θσ_min(H) > 0. * sup_R_n, dσ_min(H)^-1 = O(n^-η). * sup_R_n, 0sup_1≤ k≤ j≤ d(1-|λ_i_j|)|H_kj| = O(1). * sup_R_n, 0||Γ^t|| ≤ C (1-log(n)/n)^t for a constant C≥ 0 not depending on θ or n. * sup_R_n, 0||∑_t=0^n-2Γ^t Σ (Γ^t)^T|| = O(n/log n) * sup_R_n, 0||∑_t=0^n-2 tΓ^t Σ (Γ^t)^T|| = O(n/log n) We start with the proof of <ref>. We have H = 1/n∑_t=1^n ∑_s=1^t-1Γ^t-1-sΣ(Γ^t-1-s)^T. Since every matrix in the summand is positive semidefinite, we get inf_θ∈Θσ_min(H) ≥inf_θ∈Θ1/n∑_t=1^n-1σ_min(Σ) = n-1/ninf_θ∈Θσ_min(Σ) > n-1/nc where c = inf_θ∈Θσ_min(Σ)>0 by Assumption <ref>. For the proof of <ref>, note that, for any n large enough and θ∈ R_n, d, Γ is diagonal and therefore we have from (<ref>) σ_min(H) ≥σ_min(Σ)/n∑_t=1^n-1∑_s=0^t-1|λ_min(Γ)|^2s ≥σ_min(Σ)/n∑_t=1^n-11 - (1-n^-η)^2t/1 - (1 - n^-η)^2 = σ_min(Σ)/n(1-(1-n^-η)^2)(n - 1 - ∑_t=1^n-1(1 - n^-η)^2t) = σ_min(Σ)/n(1-(1-n^-η)^2)(n - 1 - (1 - n^-η)^2n/1 - (1-n^-η)^2). For n_0 large enough, we get 1/n(n - 1 - (1 - n^-η)^2n/1 - (1-n^-η)^2) ≥1/2, ∀ n≥ n_0 and, thus, with C = 2sup_θ∈Θσ_min(Σ)^-1 < ∞, we have that, for all n≥ n_0, sup_θ∈ R_n, dσ_min(H)^-1≤ C(1 - (1 - n^-η)^2). Since n^η(1 - (1 - n^-η)^2)→ 2 for n→∞ and all η > 0, this proves <ref>. Turning to the proof of <ref>, we immediately see that the statement is true when |λ_1| ≤ 1 - α so we can assume without loss of generality that all eigenvalues are strictly greater than 1 - α and, thus, that Γ is diagonal. We then get, for 1≤ k≤ j≤ d, |H_kj| ≤|Σ_kj|/n∑_t=1^n-1∑_s=0^t-1|λ_i_kλ_i_j|^s ≤|Σ_kj|/n∑_t=1^n-1∑_s=0^t-1|λ_i_j|^s = |Σ_kj|/n(1-|λ_i_j|)(n - 1-|λ_i_j|^n/1 - |λ_i_j|) where the second inequality follows from the fact that |λ_i|≤ 1 for all 1≤ i ≤ N. But then <ref> follows from the fact that 1-|λ_1|≥log(n)/n for any θ∈ R_n, 0. We now prove <ref>. For any θ∈Θ, it holds that ||Γ^n|| ≤ N max_1≤ i ≤ N||J_m_i(λ_i)^n||. Now, since lim_n→∞sup_|λ_i|≤ 1 - α ||J_m_i(λ_i)^n|| = 0 and J_m_i(λ_i) is diagonal for |λ_i| > 1 - α, there exists a C≥ 0 not depending on θ and n and such that ||Γ^t||≤ C |λ_max(Γ)|^t for all θ∈Θ. The result then follows from sup_θ∈ R_n, 0|λ_max(Γ)|≤ 1 - log(n)/n. For the proof of <ref>, part <ref> and the fact that Σ is uniformly bounded gives us sup_θ∈ R_n, 0||∑_t=0^n-2Γ^t Σ(Γ^t)^T|| ≤ C ∑_t=0^n-2sup_θ∈ R_n, 0 ||Γ^t||^2 ≤ C ∑_t=0^n-2(1 - log n/n)^2t ≤ C ∑_t=0^∞(1 - log n/n)^2t= C/(1-(1-log(n)/n)^2) = O(n/log n). The proof of part <ref> is almost the same. Indeed, by the same chain of inequalities, we find that sup_θ∈ R_n, 0||∑_t=0^n-2 tΓ^t Σ(Γ^t)^T|| ≤C(1-log(n)/n)^2/(1 - (1-log(n)/n)^2)^2 = O(n/log n). We have lim_n→∞sup_θ∈ R_n, 0||H^-1/2(∑_t=0^t-2Γ^t Σ(Γ^t)^T) H^-1/2 - I|| = 0. Note that it suffices to proof that lim_n→∞sup_θ∈ R_n, 0||(∑_t=0^t-2Γ^t Σ(Γ^t)^T)^-1/2 H (∑_t=0^t-2Γ^t Σ(Γ^t)^T)^-1/2 - I|| = 0. We have H = 1/n∑_t=1^n ∑_s=1^t-1Γ^t-1-sΣ(Γ^t-1-s)^T = 1/n∑_t=0^n-2(n - 1 - t)Γ^t Σ(Γ^t)^T = n - 1/n∑_t=0^n-2Γ^t Σ(Γ^t)^T - 1/n∑_t=1^n-2t Γ^t Σ(Γ^t)^T. and, as a result, (∑_t=0^t-2Γ^t Σ(Γ^t)^T)^-1/2 H (∑_t=0^t-2Γ^t Σ(Γ^t)^T)^-1/2 = n-1/nI + M where M = (∑_t=0^t-2Γ^t Σ(Γ^t)^T)^-1/21/n∑_t=1^n-2t Γ^t Σ(Γ^t)^T (∑_t=0^t-2Γ^t Σ(Γ^t)^T)^-1/2. All that is left to show is therefore that M goes to 0 uniformly over R_n, 0 for n going to infinity. Similar arguments as in the proof of part <ref> of Lemma <ref> yield lim inf_n→∞inf_θ∈Θσ_min(∑_t=0^t-2Γ^t Σ(Γ^t)^T) > 0 so that sup_θ∈Θ||(∑_t=0^t-2Γ^t Σ(Γ^t)^T)^-1/2|| < ∞. By part <ref> of Lemma <ref>, we then get sup_θ∈ R_n, 0||M|| = o(1). Define the diagonal matrix Λ∈ℝ^d× d such that Λ_kk = |λ_i_k|. The following gives some useful relations between I-Λ and H. Let Λ∈ℝ^d× d be as defined above. Then, sup_θ∈Θsup_t≥ 1||𝔼((I-Λ)^1/2X_t, θX_t, θ^T(I-Λ)^1/2)|| < ∞ and, furthermore, lim_n→∞sup_θ∈ R_n, 0||(I-Λ)^1/2H(I - Λ)^1/2 - Σ_X|| = 0 where vec(Σ_X)=(I-Λ)^1/2⊗(I-Λ)^1/2(I - Γ⊗Γ)^-1vec(Σ) with lim sup_n→∞sup_θ∈ R_n, 0{σ_min(Σ_X)^-1 + σ_max(Σ_X)} < ∞. For any θ∈Θ with all diagonal entries of Γ having magnitude greater than 1-α and strictly smaller than 1, we have ||𝔼((I-Λ)^1/2X_t, θX_t, θ^T(I-Λ)^1/2)|| = ||(I-Λ)^1/2∑_s=0^t-1Γ^sΣ(Γ^s)^T(I-Λ)^1/2|| ≤ ||Σ||∑_s=1^t-1||(I-Λ)Γ^2s|| ≤ ||Σ||||(I-Λ)(I-Γ^2)^-1||||(I-Γ^2t)|| ≤ C ||(I-Λ)(I-Γ)^-1|| ≤ C d where C = sup_θ∈Θsup_t≥ 1||Σ||||I + Γ||||(I - Γ^2t)|| < ∞ and ||(I-Λ)(I-Γ)^-1||≤ d follows from Assumption <ref>. If Γ is not diagonal the non-diagonal blocks have eigenvalues with magnitude bounded by 1-δ and hence a similar bound can be achieved. If Γ has a diagonal entry with magnitude 1, the corresponding diagonal of I-Λ will be 0 and thus the bound above applies for this Γ with the corresponding blocks set to 0. This proves (<ref>). For the proof of (<ref>), simply note that Lemma <ref> and (<ref>) imply that lim_n→∞sup_θ∈ R_n, 0||(I-Λ)^1/2(H - 𝔼(X_n-1,θX_n-1, θ^T))(I - Λ)^1/2|| = 0 and sup_θ∈ R_n, 0||(I-Λ)^1/2𝔼(X_n-1,θX_n-1, θ^T)(I - Λ)^1/2 - Σ_X|| = sup_θ∈ R_n, 0||(I-Λ)^1/2∑_s=n-1^∞Γ^sΣ(Γ^s)^T(I-Λ)^1/2|| → 0 for n→∞. It remains to check that Σ_X is uniformly bounded and invertible in the limit. Since lim sup_n sup_θ∈ R_n, 0σ_max(Σ_X) < ∞ follows immediately from (<ref>), we only need to show the latter. Let θ∈ R_n, 0 be such that Γ is diagonal. Then, σ_min(Σ_X) ≥σ_min(Σ)∑_t=1^∞σ_min((I-Λ)Γ^2t) ≥σ_min(Σ)∑_t=0^∞min_1≤ k≤ d(1 - |λ_i_k|)|λ_i_k|^2t ≥σ_min(Σ)∑_t=0^∞log n/n(1 - log n/n)^2t = σ_min(Σ)log n/n(1 - (1 - log n/n)^2)^-1 ≥σ_min(Σ)/2. If Γ is non-diagonal, the same bound holds since adding ones on the super-diagonal does not decrease the minimum singular value. Because Σ is uniformly invertible over Θ, the proof is then complete. The next lemma says that checking whether A_nB_n converges to the identity matrix is the same as checking whether A_n B_n^2 A_n converges to the identity matrix where A_n and B_n are positive semidefinite matrices of conforming dimension. Let ℐ be some index set and consider two families of sequences of positive semidefinite d× d matrices, (A_n, i)_n∈ℕ, i∈ℐ and (B_n, i)_n∈ℕ, i∈ℐ. Then, if lim_n→∞sup_i∈ℐ||A_n, iB_n, i^2 A_n, i - I|| = 0, it also holds that lim_n→∞sup_i∈ℐ||A_n, iB_n, i - I || = 0. First, note that lim inf_n→∞inf_i∈ℐλ_min(A_n, iB_n, i) = c > 0. Since A_n, iB_n, i is similar to the positive definite matrix A_n, i^1/2B_n, iA_n, i^1/2, the contrary would imply the existence of a sequence (i_n)_n∈ℕ⊂ℐ such that σ_min(A_n, i_nB_n, i_n)→ 0 and, thus, σ_min(A_n, i_nB_n, i_n^2 A_n, i_n)→ 0 for n→∞ which, of course, is a contradiction. Now, let A_n, iB_n, i = U_n, iP_n, i be a polar decomposition, i.e., U_n, i is orthogonal and P_n, i positive semidefinite. Then, P_n, i^2 = A_n, iB_n, i^2 A_n, i which implies that lim_n→∞sup_i∈ℐ|| P_n, i - I|| = 0. It therefore suffices to show that U_n, i converges to the identity matrix uniformly over ℐ. Since U_n, i is orthogonal, we get ||U_n, i - I||^2 = 2d - 2(U_n, i) ≤ 2dsup_1≤ j≤ d|1 - λ_j(U_n, i)|. Let U_n, i = V_n, iD_n, iV_n, i^* be an eigendecomposition with D_n, i diagonal and V_n, i unitary. Define the Hermitian matrix H_n, i=V_n, i(P_n, i - I)V_n, i^*. Denote by d_n, i^j the j'th diagonal of D_n, i and by h_n, i^jk the jk'th element of H_n, i. Since H_n, i→ 0 uniformly over ℐ, for fixed ϵ > 0, we can pick n_0∈ℕ such that sup_i∈ℐsup_1≤ j, k≤ d|h_n, i^jk| < ϵ and inf_i∈ℐλ_min(A_n, iB_n, i) > c/2 for all n≥ n_0. We define the complex disk D_r(x)={y∈ℂ: |y-x|≤ r} for any x∈ℂ and r > 0. The matrix D_n, i + D_n, iH_n, i is simlar to A_n, iB_n, i so, by the Gershgorin circle theorem, for 1≤ k ≤ d, there exists 1≤ j ≤ d such that λ_k(A_n, iB_n, i)∈ D_R_j(d_n, i^j + d_n, i^j h_n, i^jj) where R_j = ∑_k≠ j|h_n, i^jk|. Recall that |d_n, i^j| =1. Using the fact that λ_k(A_n, iB_n, i)> c/2 is real and that sup_i∈ℐλ_k(A_n, iB_n, i)^2 ≤sup_i∈ℐσ_max(A_n, iB_n, i^2 A_n, i) → 1 for n→∞, we may therefore assume that n_0 is large enough so that sup_i∈ℐ|λ_k(A_n, iB_n, i) - 1| ≤sup_i∈ℐ|d_n, i^j - λ_k(A_n, iB_n, i)| + ϵ for all n ≥ n_0 which implies sup_i∈ℐ|d_n, i^j - 1| ≤ 2sup_i∈ℐ|d_n, i^j - λ_j(A_n, iB_n, i)| + ϵ≤ 2sup_i∈ℐR_j + ϵ + ϵ≤ 2dϵ. Thus, sup_i∈ℐ|| U_n, i - I|| ≤ 4d^2ϵ for all n≥ n_0. This completes the proof.
http://arxiv.org/abs/2306.07435v1
20230612213933
Randomized least-squares with minimal oversampling and interpolation in general spaces
[ "Abdellah Chkifa", "Matthieu Dolbeault" ]
math.NA
[ "math.NA", "cs.NA", "41A65" ]
theoremTheorem proposition[theorem]Proposition lemma[theorem]Lemma remark[theorem]Remark corollary[theorem]Corollary (
http://arxiv.org/abs/2306.02124v1
20230603143045
The mixing-spacetime symmetry in the Floquet-Bloch band theory
[ "Pei Wang" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "cond-mat.mes-hall", "cond-mat.str-el" ]
G↑↓ψ̂B̂ i e Tr#1 5pt #1Department of Physics, Zhejiang Normal University, Jinhua 321004, [email protected] We discover a class of spacetime symmetries unique to time-periodic systems, which we term "mixing symmetry" due to its combination of space and time coordinates in the symmetry transformation. We systematically enumerate the symmetry groups, and classify the corresponding Floquet-Bloch band theories by utilizing the winding number of quasi-energy. Moreover, we provide a comprehensive scheme for the experimental realization of these symmetries. The particle propagator exhibits an intriguing pattern that remains invariant even under transformations mixing space and time coordinates. We anticipate that this distinct feature can be observed in current cold atom experiments. The mixing-spacetime symmetry in the Floquet-Bloch band theory Pei Wang July 31, 2023 ============================================================== Introduction.— The study of Floquet-Bloch bands has emerged as a central topic in the field of nonequilibrium driven many-body dynamics <cit.>. Recent advancements in precise control and probing techniques have allowed for the realization of Floquet-Bloch bands in diverse platforms, including photonic waveguides <cit.>, solid materials <cit.>, and cold atom systems <cit.>. By employing periodic driving, these systems offer a unique opportunity to explore models that are challenging to realize in static setups <cit.>. Moreover, periodic driving enables the emergence of new states of matter that lack a static analog, leading to captivating phenomena in condensed matter physics, such as symmetry breaking <cit.>, localization <cit.>, and topological effects <cit.>. Symmetry plays a fundamental role in the study of band theory, exerting profound effects on various aspects of band structures. Spatial symmetries such as rotation, mirror reflection, and space inversion have long been recognized for their ability to protect band crossings or generate degeneracies <cit.>. Time reversal, particle-hole, and chiral symmetry have been utilized in the renowned tenfold classification of insulators and superconductors <cit.>. This classification has been applied to provide a periodic table for the topological phases <cit.>, and more recently, it has been extended to the topological classification of Floquet-Bloch bands <cit.>. Moreover, researchers have recognized the critical interplay between space group symmetries and topology, culminating in the comprehensive topological classification of band structures for all 230 crystal symmetry groups <cit.>. Notably, in the realm of Floquet systems, the presence of intertwined spatial and temporal translations, including nonsymmorphic symmetries such as glide time-reversal or time glide reflection, can preserve spectral degeneracy and give rise to novel out-of-equilibrium phases <cit.>. But previous studies have overlooked a class of symmetries that is unique to time periodic systems and absent in static ones. These symmetries are referred to as mixing symmetries in this paper. Let us consider the coordinates in 1+1-dimensional spacetime as (t,x). A linear coordinate transformation can be represented as (t',x')^T=A (t,x)^T. If the matrix A contains non-zero off-diagonal elements, it is called a mixing transformation because it combines the space and time coordinates. One well-known example of a mixing symmetry is the Lorentz symmetry, which holds significant importance in quantum field theory. In the context of condensed matter physics, the Schrödinger equation treats space and time differently, making continuous mixing symmetry impossible. Nevertheless, this does not rule out the possibility of discrete mixing symmetries <cit.>. Recently, it has been discovered that the spacetime crystals that exhibit a discrete Lorentz symmetry can be realized in ultracold atomic gases confined to an optical lattice <cit.>. But the models are constructed on finite-sized lattices and do not exhibit continuous Floquet-Bloch bands in the thermodynamic limit. In this paper, we present the first evidence of the existence of continuous Floquet-Bloch band theory that incorporates mixing symmetry. We thoroughly identify and classify the mixing groups in 1+1 dimensions. The resulting Floquet-Bloch theories are categorized based on both symmetry groups and the winding number of quasi-energy in the Brillouin zone (see Tab. <ref> for a summary). Unlike previously studied symmetries, the operator of mixing symmetry does not commute or anti-commute with the Hamiltonian. Therefore, we rely on group representation theory for constructing models. The band theory with mixing symmetry exhibits a quasi-energy-momentum relation that remains invariant under mixing transformations. Consequently, the particle propagator in real spacetime exhibits invariance when the spacetime coordinates undergo the transformation A, which is a distinctive characteristic of mixing symmetry. We discuss the possible realization of mixing symmetry in cold atoms on an optical lattice. The precise control achieved at the single-site level in experiments allows for programmable Hamiltonians with locally adjustable potential energies on each lattice site, facilitated by microelectromechanical systems mirrors <cit.>. We show that the Floquet-Bloch band with mixing symmetry can be implemented using a quadratic quantum Fourier transform (QQFT) protocol <cit.> on a driven optical lattice that features only onsite potential and nearest-neighbor hopping. The mixing symmetry can be observed by locating a Bose-Einstein condensate on the lattice and monitoring the atom density. Method.— When studying a quantum model, the usual approach involves writing down the Hamiltonian in real spacetime and then extracting the underlying symmetry from it. However, our objective is to construct a model with specific symmetry. We begin by providing a complete list of the mixing symmetry groups. Next, we establish the unitary representation of each group within the quasi-momentum-energy space. In this representation, the symmetry manifests as a constraint on the dispersion relation (DR), which is the function E(k) describing the quasi-energy E as a function of quasi-momentum k. For continuous Floquet-Bloch bands, we discover a fundamental equation governing the topology of the DR, which serves as a basis for band classification. By finding an E(k) that satisfies both the symmetry and topology conditions, we obtain the Floquet Hamiltonian Ĥ_F. Finally, we demonstrate how to realize Ĥ_F using a time-periodic Hamiltonian Ĥ(t) with locality in real spacetime. Our approach is inspired by the principles of quantum field theory, which relies on the unitary representation of the Poincaré group <cit.>. Mixing groups.— We are considering a 1+1-dimensional spacetime where spatial rotation or mirror reflection is absent, allowing us to concentrate on the study of mixing symmetry. There are two noncollinear translational vectors. Without loss of generality, we assign one vector (𝐞_x) to the spatial direction (x axis), and the other vector (𝐞_t) to the temporal direction (t axis), as shown in Fig. <ref>. This choice can always be made by employing a coordinate transformation that rotates 𝐞_t and 𝐞_x into the t and x axes, respectively. It is important to note that we exclusively consider symmorphic groups in this paper. For nonsymmorphic groups, nonsymmorphic symmetries may become significant when 𝐞_t and 𝐞_x are not orthogonal to each other <cit.>. To simplify the representation, we choose the lattice constants as the units of time and length, resulting in e_t = (1,0) and e_x=(0,1). Suppose that the 2-by-2 matrix A represents a mixing transformation. In this study, we focus on cyclic transformations, which are the ones that satisfy A^M = 1 for some positive integer M (called the order). By imposing the cyclic condition, we significantly reduce the number of symmetry groups, enabling us to exhaustively examine their representations. An arbitrary symmetry transformation can be expressed as a combination of A and translation. We denote this combined transformation as P(j, m, n), which acts on the coordinates as follows: ([ t'; x' ])= P(j,m,n) ([ t; x ]) = A^j ([ t; x ]) + ([ m; n ]) , where j,m and n are integers. P(j,m,n) denotes j times of mixing transformation followed by a translation of m units in time and n units in space. A symmetry group is a set of Ps that meet the group axioms. The closure under multiplication requires that the spacetime lattice {(m, n)^T | m,n ∈ℤ.} must keep invariant under A. Together with the existence of inverse element, we infer that the order of A can only be 2,3,4 or 6 <cit.>. The symmetry group can be expressed as 𝒫 = { P(j,m,n) | j=0, 1, ⋯, M-1; m,n∈ℤ. } with M=2,3,4 or 6. For given M, the group 𝒫_M is uniquely determined by A. The possible As in 𝒫_2, 𝒫_3, 𝒫_4 or 𝒫_6 are given in the supplementary materials. A few examples can help us understand the mixing group. The form of A in 𝒫_2 or 𝒫_4 is shown in Tab. <ref>, where a,b,c are integers satisfying bc=-a^2 ± 1, respectively. If a=0 and b=c=1, then A is the exchange of t and x (dubbed A_e) and belongs to 𝒫_2. If a=0, b=1 and c=-1, then A represents a rotation in the t-x plane by 90^∘ (dubbed A_r) and belongs to 𝒫_4. Figure <ref>a schematically illustrates the operations of A_e and A_r. The transformation A conserves the area of the parallelogram formed by two noncollinear vectors, because det(A)= ± 1. But A does not necessarily conserve the Euclidean length of a vector (e.g., consider the case a=2,b=-3 and c=1). Therefore, A can be not only rotation, reflection or inversion in the t-x plane, but also nonorthogonal transformations. Notice that A is distinguished from the discrete Lorentz transformation <cit.>, as the latter does not have a finite order. Floquet-Bloch band theory.— Each quantum theory is a unitary representation of its corresponding symmetry group. In our case, we aim to construct the unitary representations of 𝒫_M, and we follow a similar approach as described in Ref. [Wang21]. To denote the unitary operator of P(j,m,n), we use Û(j,m,n), which follows the same multiplication rule as P. The translation operators Û(0,m,n) commute with each other and share common eigenstates. In the Floquet-Bloch band theory, the eigenstates of translations are typically represented as |k,α⟩, where k ∈ [ -π, π) denotes the quasi-momentum, α is the band index, and the corresponding quasi-energy is denoted as E_α(k) with E_α(k) ∈ [ -π, π). When the operator Û(0,m,n) acts on |k,α⟩, it results in e^i E_α(k) m - i k n|k,α⟩. The pair (k,E_α(k)) represents a point in the Floquet-Bloch-Brillouin zone (FBBZ), which is topologically equivalent to a torus (see Fig. <ref>b,c). The DR of each continuous Floquet-Bloch band, i.e., the set of (k,E_α(k)) points, forms a loop on the torus. Since any element in 𝒫_M can be factorized into P(j,m,n)=P(0,m,n)P(j,0,0), the representation of 𝒫_M can be determined by examining the action of the mixing transformation operator Û(j,0,0) on the basis states |k,α⟩. Note that the single-particle Hilbert space is spanned by |k,α⟩. To determine the representation, it is sufficient to investigate the action of Û(1,0,0) since Û(j,0,0) = Û(1,0,0)^j. For this purpose, we utilize the multiplication rule: P(0,m',n')P(1,0,0) = P(1,0,0) P(0,m,n) <cit.>, or equivalently, Û(0,m',n') Û(1,0,0) = Û(1,0,0) Û(0,m,n), where (m',n')^T= A (m,n)^T. Acting both sides of Eq. (<ref>) on |k,α⟩, we find that Û(1,0,0) |k,α⟩ is also an eigenstate of translation operators, denoted by |k',α'⟩ =Û(1,0,0) |k,α⟩ without loss of generality. And Eq. (<ref>) determines a relation between k and k' <cit.>, which reads ([ k'; E_α'(k') ]) = A̅([ k; E_α(k) ]) (mod 2π), where A̅= det(A) A. Especially, we find A̅ = -A and A̅ = A for the symmetry classes 𝒫_2 and 𝒫_4, respectively. The modulo operation in Eq. (<ref>) ensures that ( k', E_α'(k')) falls within the FBBZ. According to definition, A̅ is invertible, and then it is a one-to-one continuous map between FBBZ and itself. In other words, A̅ acts as a homeomorphism on the FBBZ. Equation (<ref>) reveals that each point (k,E_α(k)) within the DRs is mapped by A̅ to another point within the DRs. A̅ establishes a one-to-one correspondence between the set of points within the DRs and itself. In a spacetime crystal with N continuous bands, each with its corresponding DR as a loop on the FBBZ torus, A̅ acts as a homeomorphism. Consequently, the image of a loop (DR) under A̅ is guaranteed to be another loop (DR). Thus, A̅ maps each DR loop to another DR loop, effectively acting as a permutation of the N bands. Topology of dispersion relation.— Equation (<ref>) is the sufficient and necessary condition for a unitary representation of 𝒫_M. Constructing a representation involves in finding the A̅-invariant DRs (solution of Eq. (<ref>)). For general A̅, these DRs can be highly nontrivial. For instance, if A is the exchange A_e, then A̅_e exchanges the quasi-momentum and quasi-energy. Our familiar DRs, such as quadratic or trigonometric functions, are not A̅-invariant. The nontriviality of A̅-invariant DRs arises from the fact that their loops exhibit nontrivial topology. The topology of a loop on a torus is characterized by a pair of integers, which corresponds to the fundamental group of the torus. As the DR is a continuous function of k within the range of [-π,π), a DR loop must wind around the torus exactly once in the k-direction. The topology of a DR loop is denoted as (1, w), where w represents the number of times the DR winds around the torus in the positive E-direction while completing one revolution in the positive k-direction. It is important to highlight that w has long been recognized as the average particle displacement over one period. In each cycle, w units of charge are pumped through the system <cit.>. If A̅ maps band α to α', their DRs' winding numbers (w_α and w_α') are connected to each other according to <cit.>±([ 1; w_α' ]) = A̅([ 1; w_α ]). Equation (<ref>) is our key result, which constrains the topology of an A̅-invariant DR. It has no solution for A̅ in 𝒫_3 or 𝒫_6 <cit.>, indicating that these symmetry classes have no representation with continuous bands. By substituting the expression of A into Eq. (<ref>), we obtain the band classification for 𝒫_2 and 𝒫_4. In 𝒫_2, bands are classified as singlets or doublets. A singlet remains invariant under A̅, while a doublet consists of two bands mapped by A̅ into each other, sharing the same winding number. For 𝒫_4, bands are classified as doublets with odd-function DRs of k and quadruplets (quartets of four bands). There are no singlet bands since w_α'=w_α constradicts Eq. (<ref>). Additionally, the two bands in a doublet have different winding numbers. Table <ref> summarizes the classification of Floquet-Bloch bands with mixing symmetry. Except for A with a=±1 (unconventional space inversion or time reversal), the DR's winding number must be nonzero <cit.>. Figure <ref> presents examples of DRs that satisfy Eq. (<ref>). Figure <ref>(a) shows a doublet pair of bands, mapped into each other by A̅_e in 𝒫_2. Both bands have a winding number of +1. Figure <ref>(b) shows a doublet pair of bands, mapped into each other by A̅_r in 𝒫_4. The two bands have winding numbers of +1 and -1, respectively. The simplest DRs with mixing symmetries (solution of Eq. (<ref>)) are linear ones: E(k)=wk, where w= ± 1, ± 2,⋯ represents the winding number. From Eqs. (<ref>) and (<ref>), we can fully determine the mixing symmetries of any linear Floquet-Bloch band <cit.>. In the 𝒫_2 symmetry class, a linear band is a singlet, which keeps invariant under the map A̅ with elements satisfying a=-b w± 1 and c=-bw^2 ± 2w. On the other hand, in the 𝒫_4 symmetry class, two bands E(k)=wk and E'(k)=w'k can form a doublet, where w'=w ± 1 or w'=w± 2 <cit.>. By utilizing the A̅-invariant E_α(k), we can readily establish the many-body quantum theory by introducing the creation operator ĉ^†_k,α and the annihilation operator ĉ_k,α and expressing the symmetry operators in terms of them <cit.>. Specifically, the time translation operator is Û(0,1,0)=e^i Ĥ_F, where Ĥ_F is the effective Floquet Hamiltonian: Ĥ_F = ∑_k,α E_α(k) ĉ^†_k,αĉ_k,α. Note that the symmetry condition of DRs is independent of whether the particles are bosons or fermions. The operators ĉ_k,α,ĉ_k,α^† are either commutative or anti-commutative, depending on the species of particles. A few comments are necessary. First, we ignore the interaction between particles in the model (<ref>). Constructing an interacting theory is significantly more challenging and falls beyond the scope of the current paper, as the mixing symmetry imposes constraints not only on the DR but also on the particle interactions. Second, for an exhaustive enumeration of quantum theories, we should also consider the possibility of Û(1,0,0) being an anti-unitary operator, which is discussed in the supplementary materials. Realization with local Ĥ(t).— We aim at realize a given Floquet Hamiltonian Ĥ_F, or equivalently the energy band E(k), in a cold atom system on an optical lattice. To achieve experimental feasibility, it is crucial that the time-periodic Hamiltonian Ĥ(t) possesses locality in real spacetime. However, this poses a challenge due to the nonzero winding numbers of the DRs, which is a characteristic feature of mixing symmetry. Let us consider a specific example: the linear DRs E(k)=wk. Upon Fourier transformation, the Floquet Hamiltonian Ĥ_F contains infinitely-long-range hopping terms in real space, making them currently inaccessible using existing technology. In fact, if Ĥ(t) exhibits locality (i.e., short-range hopping) and simultaneously maintains space translational symmetry at each time t, then the DRs of Ĥ_F must possess zero winding <cit.>. Therefore, in order to have an A̅-invariant DR, Ĥ(t) must break instantaneous translational symmetry. To design Ĥ(t) for a given DR, we employ the recently developed QQFT protocol <cit.>, which gives rise to highly flexible Hamiltonian engineering so that the DRs become completely programmable and the long-range tunnelings in Ĥ_F become accessible to optical lattice experiments. In one period, denoted as [0,1), the time-dependent Hamiltonian is expressed as Ĥ(t)= ∑_p=1^D I_p(t) Ĥ_p . Here, I_p(t) is the indicator function, which is defined as 1 for t∈ [(p-1)/D,p/D) and zero elsewhere. The parameter D represents the depth of the Hamiltonian sequence. Each Ĥ_p contains only onsite potentials and nearest-neighbor hoppings. It can generally be written as Ĥ_p = ∑_x(g^(p)_xψ̂^†_xψ̂_x+1 + u^(p)_xψ̂^†_xψ̂_x + h.c.), where ψ̂^†_x and ψ̂_x are the creation and annihilation operators at site x, respectively, and g and u denote the hopping strength and onsite potentials, respectively. The unitary evolution over one period is given by e^-iĤ_F = e^- i Ĥ_D/D⋯ e^- i Ĥ_2/D e^- i Ĥ_1/D. For a lattice model with L sites, the depth D scales as L log L for large L <cit.>, in the QQFT protocol. As the system size increases, the effort required for simulation grows super-linearly. In recent developments in cold atom technology, spatially resolved control of the atom-confining potential has been achieved, enabling the realization of a sequence of local Hamiltonians like Eq. (<ref>). It has been shown that systems with sizes up to several tens of sites are accessible in present experiments <cit.>. Figure <ref>(a) illustrates the sequence of Ĥ_p operations that generate E(k)=wk on a chain of length L=8. Within each period, a total of 39 operations are performed, including 32 swaps between neighboring sites, 6 local Fourier transformations, and one evolution of the onsite potential. For more detailed information, please refer to the supplemental material. Mixing symmetry in the wave function.— To observe the mixing symmetry, one can utilize the fact that the mixing symmetry manifests itself in the particle propagator in real spacetime. For a particle initially located at position x=0 and time t=0, its wave function at a later time (multiples of the period) satisfies: Ψ_α(t,x) = Ψ_α'( t',x'), where (t',x')^T = A(t,x)^T and t,x are arbitrary integers. α' is the map of band-α under A̅. For α'=α (a singlet band in the 𝒫_2 class), Eq. (<ref>) imposes a strong constraint on the wave function. For α' ≠α, Eq. (<ref>) provides a connection between the wave functions in different bands. For a concrete example, let us see a linear band with E(k)=wk, in which the particle moves at a constant speed, just like a classical particle. The wave function is calculated to be Ψ_α(t,x)=δ_x,w t. In previous discussions, we already show that such a band exhibits the 𝒫_2 symmetry when the elements of A are a = -b w± 1 and c=-bw^2 ± 2 w. It is easy to verify that δ_x,w t does remain invariant as (t,x)^T transforms under A. Using the QQFT protocol, we perfectly repeat the evolution of wave function on a lattice of length L=2^l. Figure <ref>(b) displays the DR as w=2, and Fig. <ref>(c) displays the corresponding wave function in the QQFT simulation. The probability distribution of particles, i.e. |Ψ(t,x)|^2, obviously meets the same symmetry as shown in Eq. (<ref>). In experiments, instead of a single particle, one can use the Bose-Einstein condensate (BEC) for observation, and then, | Ψ(t,x)|^2 represents the density of atoms. The density distribution forms a symmetric pattern which remains invariant under A, which will be a smoking gun signal of mixing symmetry. Discussion.— This paper presents an innovative discovery of Floquet-Bloch band theories that exhibit a unique mixing symmetry, which intertwines the space and time coordinates. We provide a comprehensive classification of Floquet-Bloch bands based on the cyclic mixing transformations of finite order. Notably, only the groups 𝒫_2 and 𝒫_4 possess continuous representations, where the mixing symmetry imposes constraints on the dispersion relation of each band. Furthermore, we reveal that the winding number of the dispersion relation on the Floquet-Bloch-Brillouin torus must adhere to a symmetry condition. To achieve a non-zero winding number, it is essential for the time-dependent Hamiltonian of the theory to break the instantaneous translation symmetry, a feat attainable through the implementation of QQFT on an optical lattice. Remarkably, the mixing symmetry manifests in the atom density, which becomes experimentally measurable, demonstrating its impact on the spacetime distribution. This discovery unveils a broader symmetry family that has been previously ignored, as the mixing symmetry transcends pure spatial or temporal characteristics and instead establishes correlations between space and time. Its exploration enhances our comprehension of symmetry in crystals. Looking ahead, intriguing open questions include the investigation of noncyclic mixing symmetry and the exploration of mixing-symmetry protected topological states of matter. Acknowledgement.— The work is supported by National Natural Science Foundation of China (Grants Nos. 11835011, 11774315), and the Junior Associates program of the Abdus Salam International Center for Theoretical Physics. We thank X. Wang for useful discussions. 35 natexlab#1#1 bibnamefont#1#1 bibfnamefont#1#1 citenamefont#1#1 url<#>1 urlprefixURL Oka09 T. Oka and H. Aoki, Phys. Rev. B 79, 081406(R) (2009). Kitagawa10 T. Kitagawa, E. Berg, M. Rudner, and E. Demler, Phys. Rev. B 82, 235114 (2010). Lindner11 N. H. Lindner, G. Refael, and V. Galitski, Nat. Phys. 7, 490 (2011). Cooper19 N. R. Cooper, J. Dalibard, and I. B. Spielman, Rev. Mod. Phys. 91, 015005 (2019). Rudner20 M. S. Rudner and N. H. Lindner, Nat. Rev. Phys. 2, 229 (2020). Rechtsman13 M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, S. Nolte, M. Segev, and A. Szameit, Nature 496, 196 (2013). Wang13 Y. H. Wang, H. Steinberg, P. Jarillo-Herrero, and N. Gedik, Science 342, 453 (2013). Jotzu14 G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Nature 515, 237 (2014). Sorensen05 A. S. Sørensen, E. Demler, and M. D. Lukin. Phys. Rev. Lett. 94, 086803 (2005). Eckardt05 A. Eckardt, C. Weiss, and M. Holthaus, Phys. Rev. Lett. 95, 260404 (2005). Goldman14 N. Goldman and J. Dalibard, Phys. Rev. X 4, 031027 (2014). Oka19 T. Oka and S. Kitamura, Annu. Rev. Condens. Matter Phys. 10, 387 (2019). Else16 D. V. Else, B. Bauer, and C. Nayak, Phys. Rev. Lett. 117, 090402 (2016). Khemani16 V. Khemani, A. Lazarides, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. 116, 250401 (2016). Yao17 N. Y. Yao, A. C. Potter, I.-D. Potirniche, and A. Vishwanath, Phys. Rev. Lett. 118, 030401 (2017). Ponte15 P. Ponte, Z. Papić, F. Huveneers, and D. A. Abanin, Phys. Rev. Lett. 114, 140401 (2015). Lazarides15 A. Lazarides, A. Das, and R. Moessner, Phys. Rev. Lett. 115, 030402 (2015). Bordia17 P. Bordia, H. Lüschen, U. Schneider, M. Knap, and I. Bloch, Nat. Phys. 13, 460 (2017). Rudner13 M. S. Rudner, N. H. Lindner, E. Berg, and M. Levin, Phys. Rev. X 3, 031005 (2013). Ashcroft76 N. W. Ashcroft and N. D. Mermin, Solid state physics (Tomson Learning Inc., London, UK, 1976). Altland97 A. Altland and M. R. Zirnbauer, Phys. Rev. B 55, 1142 (1997). Schnyder08 A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Phys. Rev. B 78, 195125 (2008). Kitaev09 A. Kitaev, AIP Conf. Proc. 1134, 22 (2009). Nathan15 F. Nathan and M. S Rudner, New J. Phys. 17, 125014 (2015). Potter16 A. C. Potter, T. Morimoto, and A. Vishwanath, Phys. Rev. X 6, 041001 (2016). Else16b D. V. Else and C. Nayak, Phys. Rev. B 93, 201103(R) (2016). Roy17 R. Roy and F. Harper, Phys. Rev. B 96, 155118 (2017). Bradlyn17 B. Bradlyn, L. Elcoro, J. Cano, M. G. Vergniory, Z. Wang, C. Felser, M. I. Aroyo, and B. A. Bernevig, Nature 547, 298 (2017). Po17 H. C. Po, A. Vishwanath, and H. Watanabe, Nat. Commun. 8, 50 (2017). Kruthoff17 J. Kruthoff, J. de Boer, J. van Wezel, C. L. Kane, and R.-J. Slager, Phys. Rev. X 7, 041069 (2017). Morimoto17 T. Morimoto, H. C. Po, and A. Vishwanath, Phys. Rev. B 95, 195155 (2017). Xu18 S. Xu and C. Wu, Phys. Rev. Lett. 120, 096401 (2018). Peng19 Y. Peng and G. Refael, Phys. Rev. Lett. 123, 016806 (2019). Mochizuki20 K. Mochizuki, T. Bessho, M. Sato, and H. Obuse, Phys. Rev. B 102, 035418 (2020). Wang18 P. Wang, New J. Phys. 20, 023042 (2018). Wang20 X. Li, J. Chai, H. Zhu, and P. Wang, J. Phys.: Condens. Matter 32, 145402 (2020). Wang21 P. Wang, J. Phys. A: Math. Theor. 54, 115003 (2021). Wang22 P. Wang, Z. Huang, X. Qiu, and X. Li, Phys. Rev. B 106, 134313 (2022). 2016_Weiss_Science Y. Wang, A. Kumar, T.-Y. Wu, and D. S. Weiss, Science 352, 1562 (2016). browaeys2020many A. Browaeys and T. Lahaye, Nat. Phys. 16, 132 (2020). OurSI See Supplementary Materials. Weinberg S. Weinberg, The Quantum Theory of Fields (Cambridge University Press, Cambridge, England, 1995). Qiu20 X. Qiu, J. Zou, X. Qi, and X. Li, npj Quantum Inf. 6, 87 (2020). Supplementary Materials § MIXING GROUPS According to definition, the mixing symmetry group has two important subgroups. One is the cyclic group that contains the mixing transformations, i.e., 𝒜={1,A,A^2,⋯, A^M-1} with M being the order. The other consists of the translations, reading 𝒯= {(m,n) | m,n ∈ℤ.}. Usually, the group that has 𝒜 and 𝒯 as subgroups is not unique. In this paper, we only consider the symmorphic group, which is the direct product of 𝒜 and 𝒯. The group element is written as P(j,m,n), which represents the mixing transformation A^j followed by the translation of vector (m,n). It is easy to see that, 𝒫={ P(j,m,n)} is a group if and only if the spacetime lattice 𝒯 keeps invariant under A. Because A is invertible (A^-1=A^M-1), 𝒯 keeps invariant under A if and only if m' and n', defined by (m',n')^T = A(m,n)^T, are integers for arbitrary m,n∈ℤ. Furthermore, this condition can be simplified into A(1,0)^T and A(0,1)^T being integer pairs. We generally express the matrix A and its inverse as A=([ a_11 a_12; a_21 a_22 ]) and A^-1 = 1/det(A)([ a_22 -a_12; -a_21 a_11 ]), respectively. Then, the condition that A(1,0)^T and A(0,1)^T are integer pairs translates into a_11, a_12, a_21, a_22 being all integers. But A^j (1,0)^T and A^j (1,0)^T must be also integer pairs for j=2,3,⋯, M-1. The case of j=M-1, or equivalently j=-1, is especially important, from which we derive that a_11/det(A), a_12/det(A), a_21/det(A), a_22/det(A) are integers. For a_ij and a_ij/det(A) to be both integers, we require det(A) = ± 1. To see it, one can use proof by contradiction (the assumption det(A)=± 2, ± 3, ⋯ leads to contradiction). To find all the cyclic As, we study the eigenvalues of A, i.e. a pair of complex numbers expressed as λ_± = a_11+a_22/2 ±√((a_11+a_22/2 )^2- det(A)). The cyclic condition (A^M=1) indicates | λ_±| ≡ 1, which is possible only if a_11+a_22 = 0, ± 1 ,± 2. As a_11+a_22 = 0 and det(A)=-1, a straightforward calculation shows A^2=1. Such As can be written in a more compact form as A=([ a b; c -a ]), where a,b,c are arbitrary integers satisfying bc = -a^2+1. As a_11+a_22 = 0 and det(A)=+1, we find A^4=1, and A has the same expression as Eq. (<ref>) but with bc = -a^2-1. As a_11+a_22 = ± 1, only det(A)=1 is consistent with | λ_±| ≡ 1 but det(A)=-1 is not, and we find A^6=1 or A^3=1, respectively. As a_11+a_22 = ± 2, the calculation shows that there does not exist a finite M so that A^M=1, except for A=± 1, which is trivial and then ignored. To summarize, the values of M are 2,3,4 or 6, and the corresponding symmetry groups are denoted by 𝒫_2,𝒫_3, 𝒫_4 or 𝒫_6, respectively. For a given M, 𝒫_M is a class of groups, with different groups having different A. In 𝒫_2, A is the matrix (<ref>) with bc=-a^2+1. In 𝒫_4, A is the matrix (<ref>) with bc=-a^2-1. In 𝒫_3, A is the matrix (<ref>) with the components being arbitrary integers that satisfy a_11+a_22 = - 1 and a_11a_22-a_12a_21=1. In 𝒫_6, A is the matrix (<ref>) with the components being arbitrary integers that satisfy a_11+a_22 = + 1 and a_11a_22-a_12a_21=1. § UNITARY AND ANTI-UNITARY REPRESENTATIONS We use |k,α⟩ to denote the single-particle eigenstate of the translation operators Û(0,m,n) with m,n∈ℤ. According to the Floquet-Bloch band theory, without loss of generality, the corresponding eigenvalue can be expressed as e^imE_α(k)-ikn, where k and E_α(k) are the quasi-momentum and quasi-energy, respectively, and α is the band index. Let us calculate Û(1,0,0)|k,α⟩. From the definition of P(j,m,n), it is easy see P(0,m',n')P(1,0,0) = P(1,0,0)P(0,m,n) with (m',n')^T= A(m,n)^T. Û(j,m,n) is the representation of P(j,m,n), then they satisfy the same multiplication rule. We obtain Û(0,m',n')Û(1,0,0) |k,α⟩= Û(1,0,0)Û(0,m,n) |k,α⟩ = e^imE_α(k)-iknÛ(1,0,0) |k,α⟩. Equation (<ref>) tells us that Û(1,0,0) |k,α⟩ is the eigenstate of Û(0,m',n') with the eigenvalue being e^imE_α(k)-ikn. But m' and n' can be arbitrary integers, because (m',n')^T= A(m,n)^T and A is invertible. Û(1,0,0) |k,α⟩ is then the common eigenstate of the translation operators, denoted by |k',α'⟩ without loss of generality. Using the notations k' and α', we calculate the left-hand side of Eq. (<ref>) and then obtain e^im'E_α '(k')-ik'n' = e^imE_α(k)-ikn. Using the fact that det(A)=± 1 and the expression of A^-1 in Eq. (<ref>), we quickly find ([ k'; E_α'(k') ]) = A̅([ k; E_α(k) ]) (mod 2π) with A̅= det(A) · A = ± A. In the above derivation, we assume that Û(1,0,0), i.e. the representation of A, is a unitary operator. To make our discussion complete, we also need to consider the possibility of Û(1,0,0) being an anti-unitary operator. In this case, the multiplication rule keeps the same, but Eq. (<ref>) changes into Û(0,m',n')Û(1,0,0) |k,α⟩= e^-imE_α(k)+iknÛ(1,0,0) |k,α⟩. Due to the reason mentioned above, we still assume Û(1,0,0) |k,α⟩=|k',α'⟩. Then Eq. (<ref>) becomes e^im'E_α '(k')-ik'n' = e^-imE_α(k)+ikn. Equation (<ref>) keeps the same but with A̅= - det(A) · A. Comparing the anti-unitary representation with the unitary representation, we find that the dispersion relation satisfies the same equation with only the sign of A̅ changing. On the other hand, if we do the change A→ -A in the unitary representation, the sign of A̅ also changes, since det(A)= det(-A). Moreover, if A is a cyclic matrix, so is -A. Therefore, for each anti-unitary representation, there exists a unitary representation that has exactly the same A̅, and then the dispersion relation, i.e. the solution of Eq. (<ref>), is also the same. The consideration of anti-unitary representation leads to nothing new in the dispersion relation. § TOPOLOGY OF DISPERSION RELATION We assume that the Floquet-Bloch band is continuous, or in other words, E_α(k) is a continuous function of k everywhere in the Floquet-Bloch-Brillouin zone (FBBZ). The dispersion relation (DR) of each band is then a loop on the FBBZ torus. The transformation A̅ (mod 2π) defined by Eq. (<ref>) maps a point in the FBBZ to another point in the FBBZ. Furthermore, it is a one-to-one map. Otherwise, suppose (k_1,E_1)≠(k_2,E_2) are mapped into the same (k',E'), then we have A̅(k_1-k_2,E_1-E_2)^T = 2π( m,n )^T with m,n being some integers. But the matrix A̅=± A or its inverse always map an integer pair into another integer pair, and then (k_1-k_2,E_1-E_2) = 2π( m',n' ) with m',n' being integers. This is impossible except for k_1 = k_2 and E_1=E_2, because (k_1,E_1) and (k_2,E_2) are both in the FBBZ. The one-to-one map A̅ is by definition continuous, so is its inverse. A̅ is then a homeomorphism. As a consequence, an arbitrary loop on the FBBZ torus must be mapped by A̅ into another loop. In the main text, we show that the spacetime crystal has the mixing symmetry if and only if the single-particle DRs are A̅-invariant. And if the DRs are A̅-invariant, then the DR of a band α, i.e. a loop, must be mapped into the DR of another band α' (it is possible that α=α'). Note that, from the pure mathematical point of view, it is also possible that the image of a DR loop is a non-DR loop (e.g., a loop on which k keeps a constant but E travels around the torus once). But in that case, the DRs are not A̅-invariant, and then the corresponding spacetime crystal has no mixing symmetry, which is uninteresting to us. Next, we study the topologies of the DR loops of α and α'. Using the knowledge of the fundamental group of torus, we describe the topology of a loop by two integers, which are the numbers of times the loop winds around the torus in the positive k- and E-directions, respectively. A DR loop winds around the torus once and only once in the k-direction, otherwise, there would exist some k∈ [-π,π) at which E(k) has no definition or has multiple values, which contradicts with the fact that E(k) is a function of k defined in the domain [-π,π). Therefore, the topology of the α-band DR is given by the pair (1,w_α), in which w_α is the number of times the loop winds in the positive-E direction as it winds once in the positive-k direction. An easy way of calculating w_α is by depicting E_α(k) in the extended quasi-energy zone, in which the range of E is extended to (-∞,∞) instead of being limited in [-π,π). In the extended-zone scheme, we can force E_α(k) to be continuous in the absence of the modulo operation, E_α(k) then becomes a curve in the k-E plane with k∈ [-π,π) and E∈(-∞,∞). The continuity of E_α(k) (mod 2π) requires (E_α(π) - E_α(-π)) being an integer times of 2π, and this integer is exactly w_α: E_α(π) - E_α(-π) = 2π w_α. Now, we study the image of {(k,E_α(k))} under the matrix A̅, in the extended-zone scheme. Without the modulo operation, A̅ is an invertible one-to-one map in the k-E plane, moreover, it is a linear map. Therefore, when (k,E_α(k)) starts from the left end (-π,E_α(-π)), and goes towards the right end (π,E_α(π)), its image (k',E_α'(k')) draws a curve in the plane. The end points of the image curve are (k'_0,E_α'(k'_0))^T=A̅(-π,E_α(-π))^T and (k'_1,E_α'(k'_1))^T = A̅(π,E_α(π))^T, respectively. Then, the winding number of α' evaluates w_α' =(E_α'(k'_1) - E_α'(k'_0))/( k'_1- k'_0). An important property of the image curve is that | k'_1-k'_0| must be 2π. The proof is as follows. First, the range of k' must be integer times of 2π, because the image is a complete DR loop (of band α') after the modulo operation. Second, the range of k' cannot be 2π n with n>1. Otherwise, as (k,E_α(k)) travels around the α-DR loop once, (k',E_α'(k')) already travels around the α'-DR loop n times, which contradicts with the fact that A̅ (mod 2π) is a one-to-one map on the torus. Based on the above arguments, we derive ±([ 1; w_α' ]) = A̅([ 1; w_α ]), where "±" corresponds to k'_1-k'_0 = ± 2π, respectively. § MIXING SYMMETRIES OF LINEAR E(K) We determine the mixing symmetries of a linear DR, given by E(k)=wk with w=± 1,± 2,⋯, by making the following observation. If the topology condition ±(1,w')^T= A̅(1,w)^T is satisfied, we can multiply both sides by k to obtain (k',E')^T = A̅(k,E)^T, where k'=± k and E'=w' k'. Therefore, the topology condition is sufficient for one linear band to be mapped by A̅ into another linear band. Let us first consider the 𝒫_2 symmetry class. Since w=w' under the map A̅ (see Tab. I of the main text), a linear E(k) is always mapped into itself and remains a singlet band in the 𝒫_2 class. Using the equation ±(1,w)^T= A̅(1,w)^T and the expression of A̅, we immediately find a = -b w± 1 and c=-bw^2 ± 2 w. For a given w, there exist an infinite number of mixing matrices (with different b) in the 𝒫_2 class: A= ([ -bw ± 1 b; -bw^2 ± 2w bw∓ 1 ]). The linear band E(k)=wk always exhibits 𝒫_2 symmetries. Next, we consider the 𝒫_4 symmetry class. Since E(k)=wk is an odd function of k, the linear band must be one branch of a doublet (Tab. I of the main text). Suppose the DR of its paired band is E'(k') =w'k'. Using ±(1,w')^T= A̅(1,w)^T and the expression of A̅, we find a=-bw± 1, c=-bw^2± 2w -2/b, and w'=w∓ 2/b. c and w' must be integers, therefore, b can only take the values ± 1, ± 2. For a given w, there exist 8 mixing matrices in the 𝒫_4 symmetry class: A= ([ -w ± 1 1; -w^2 ± 2 w - 2 w∓ 1 ]), ([ w ± 1 -1; w^2 ± 2 w + 2 -w∓ 1 ]), ([ -2w ± 1 2; -2w^2 ± 2 w - 1 2w∓ 1 ]), ([ 2w ± 1 -2; 2w^2 ± 2 w +1 -2w∓ 1 ]). The corresponding w' is given by w'=w∓ 2, w± 2, w∓ 1, w± 1, respectively. The above analysis exhausts all the mixing symmetries of a linear band. § CONSTRUCTION OF Ĥ(T) Our target is to simulate Ĥ_F = ∑_k E(k) ĉ^†_kĉ_k that has mixing symmetry by a periodic Hamiltonian Ĥ(t) with locality. First, we will prove that, if Ĥ(t) has both locality and space translation symmetry at each t, then the winding number of E(k) is zero. For simplicity, we consider a lattice model, in which a set of sites are spatially located at the coordinates j= 0, ± 1, ± 2, ⋯, respectively. In condensed matter community, the lattice models are widely employed in the study of particles moving in a periodic potential, because it is more difficult to directly deal with the differential operators in the continuous space. Without loss of generality, we define Ĥ(t) =∑_j∑_Δ j=-R^R f(Δ j,t) ψ̂^†_jψ̂_j+Δ j , where f(Δ j,t)=f^*(-Δ j,t) is the hopping strength, and ψ̂^† and ψ̂ are the onsite creation and annihilation operators, respectively. The assumption that Ĥ(t) has space translation symmetry at each moment, is hidden in the fact that f(t) is independent of j. The locality of Ĥ(t) manifests itself as the existence of a distance cutoff for hopping. The largest distance over which there are nonzero hopping terms is set to be R. After a Fourier transform, Eq. (<ref>) changes into Ĥ(t) =∑_k E(k,t) ψ̂^†_kψ̂_k, where E(k,t) = ∑_Δ j=-R^R f (Δ j,t)e^ikΔ j. E(k,t) is a sum of finite number of terms, with each term being a trigonometric function of k. If we depict these trigonometric functions on the FBBZ torus, they all have zero winding number, and then their sum, i.e. E(k,t), must also have zero winding number. The Floquet Hamiltonian Ĥ_F can be obtained by integrating Ĥ(t) over one period. Because Ĥ(t) at different t commutes with each other. Then we obtain E(k) = ∫^T_0 dt E(k,t) /T where T=1 is the period. Since E(k,t) at each t has zero winding, E(k) must also have zero winding. The DRs of a spacetime crystal with mixing symmetry usually have nonzero winding. And due to the above arguments, if we ask Ĥ(t) to be local and we want to simulate a Ĥ_F with nonzero-winding DRs, we need to break the instantaneous translation symmetry in Ĥ(t). In previous theoretical or experimental studies, people often focus on the atom-confining potential that keeps the instantaneous translation symmetry. This explains why the mixing symmetry has not been observed accidentally. The recently developed digital-micromirror-device and sub-wavelength techniques have realized programmable instantaneous-translation-symmetry-breaking potentials in the cold atomic gases. This provides the foundation for experimentally realizing Ĥ(t). Because we already know the DRs of Ĥ_F, the quadratic quantum Fourier transform (QQFT) protocol is especially useful for designing Ĥ(t) <cit.>. Here we briefly review the idea of QQFT. The Floquet Hamiltonian is defined by the fact that e^-iĤ_F is the evolution operator of quantum state over one time period. The QQFT protocol gives a sequence of local Hamiltonians, denoted by Ĥ_1,Ĥ_2,⋯Ĥ_D, which are consecutively engineered so that the evolution operator can be factorized as e^-iĤ_F = e^-iĤ_D/D⋯ e^-iĤ_2/D e^-iĤ_1/D, where D is the depth of the Hamiltonian sequence and 1/D is the lifetime of each Hamiltonian. To obtain the Ĥ_ps, we utilize the fact that Ĥ_F=∑_k E(k) ĉ^†_kĉ_k is quadratic. On a lattice of size L, we perform the Fourier transform ĉ^†_k=∑_je^ikj/√(L)ψ̂^†_j with ψ̂^†_j being the onsite creation operator, and then reexpress the Floquet Hamiltonian as Ĥ_F= Ψ̂^†ℋΨ̂, where Ψ̂ is the array of ψ̂_js and ℋ is a Hermitian matrix with the elements being ℋ_j,j' = ∑_k E(k)e^ik(j-j')/L . To proceed, we exploit a formula of quadratic-exponent operators, which can be easily derived from the Baker-Campbell-Hausdorff formula. For arbitrary Hermitian matrices ℋ_1, ℋ_1, ⋯ℋ_d and a single Hermitian matrix ℋ that satisfy e^-i ℋ_d⋯ e^-i ℋ_2 e^-i ℋ_1 = e^-i ℋ, we always have e^-i Ψ̂^†ℋ_d Ψ̂⋯ e^-i Ψ̂^†ℋ_2 Ψ̂ e^-i Ψ̂^†ℋ_1 Ψ̂ = e^-i Ψ̂^†ℋΨ̂. Equation (<ref>) simply says that the factorization of an evolution operator with quadratic Hamiltonian (such as e^-iĤ_F) is equivalent to the factorization of the corresponding unitary matrix e^-iℋ. To make Ĥ_p = Ψ̂^†ℋ_p Ψ̂ a local Hamiltonian, we need to ask the L-by-L matrix ℋ_p to be local. In the QQFT protocol, each ℋ_p contains only the diagonal elements (onsite potentials) and the off-diagonal elements ℋ_j,j+1 (hopping between nearest-neighbor sites). Observing Eq. (<ref>), we immediately find the next factorization: e^-i ℋ = e^-i e^i ℱℰ e^ -i ℱ = e^i ℱ e^-i ℰ e^ -i ℱ, where ℰ is a diagonal matrix with the diagonal elements being E(k), and ℱ is defined by (e^i ℱ)_j,j'=1/√(L) e^i2π jj'/L. In Eq. (<ref>), ℰ is already diagonal and then satisfies the locality condition. Furthermore, e^i ℱ is recognized to be the Fourier transformation, which can then be factorized into a sequence of local unitary matrices by using the algorithm of quantum Fourier transform (see Ref. [Wang22] for the detail). The factorization of e^i ℱ depends only upon the value of L. The analytical expressions of ℋ_ps have been obtained, as L is an integer power of 2, i.e. L=2^l. The sequence depth of e^i ℱ scales as Lln L. As an example, we give the sequence of Hamiltonians that generate the required dispersion relation on a one-dimensional lattice of length L=2^3=8. For simplicity, we label the lattice sites as j=0, 1, ⋯, 7. In this case, the unitary evolution over a single period can be factorized into e^-iℋ= R^(2) A^(2) R^(2)†R^(1) A^(1) R^(2)† A^(0)R^(2)† e^-i ℰ R^(2) A^(0)†R^(2) A^(1)†R^(1)† R^(2) A^(2)†R^(2)†. Here, R^(1) and R^(2) are the permutation matrices, which are realized by using a sequence of swaps, say R^(1) = S^(1,2) S^(5,6) and R^(2) = S^(3,4) S^(4,5) S^(5,6) S^(2,3) S^(3,4) S^(1,2), respectively. S^(j,j+1) is the swap (the Pauli matrix σ_x) between two neighbor sites j and j+1. For the realization of S^(j,j+1), the corresponding Hamiltonian is h_j,j+1=h_j+1,j=-h_j,j=-h_j+1,j+1 = π/2 and h_i,i'=0 for i,i'≠ j,j+1 (it is easy to verify S^(j,j+1)=e^-ih). The Hamiltonian h is definitely a local one, involving only an operation on two neighbor sites. A^(q) with q=0,1,2 is the local Fourier matrix, which couples 2j with 2j+1 sites for j=0,1,2,3. Its nonzero matrix elements are ( [ A^(q)_2j,2j = 1/√(2) A^(q)_2j,2j+1= 1/√(2)e^i2π(j% 2^q)/2^q+1; A^(q)_2j+1,2j =1/√(2) A^(q)_2j+1,2j+1= -1/√(2)e^i2π(j% 2^q)/2^q+1 ]), where % denotes the remainder. The corresponding Hamiltonian, i.e. i ln [A^(q)], has only the couplings between two nearest-neighbor sites. Finally, the Hamiltonian ℰ in Eq. (<ref>) is made of the on-site potentials. For a linear dispersion E(k)=w k, the elements of ℰ can be written as ℰ_i,j=δ_i,j2π/L j w. One can also use the modulo 2π operation to force ℰ_i,j to be in the interval [-π,π). In the construction of the Hamiltonian sequence, we notice that multiple swaps that are commutative with each other can be combined into one without breaking the locality of Hamiltonian. For example, S^(1,2) and S^(5,6) in R^(1) can be realized by using a single Hamiltonian that has the coupling between site-1 and site-2 and at the same time, also the coupling between site-5 and site-6. Such a consideration reduces the depth of the Hamiltonian sequence. In the case of L=8, we find the depth to be D=39. The sequence consists of 32 swaps, six A^(q) and one e^-i ℰ. § MIXING SYMMETRY OF THE SINGLE-PARTICLE PROPAGATOR In the main text, we derived from the multiplication rule that |k',α'⟩=Û(1,0,0)|k,α⟩, which illustrates the transformation of a single-particle state under Û(1,0,0). In the language of many-body physics, it is more convenient to define Û(1,0,0) based on its action on the creation or annihilation operators. This can be expressed as ĉ^†_k'α' = Û(1,0,0) ĉ^†_kαÛ^† (1,0,0). The field operators in real space are obtained through Fourier transformation of ĉ^†_kα, given by ψ̂^†_xα = ∑_k e^-ikx/√(L)ĉ^†_kα, where L is the system size. The time evolution of field operators is defined as ψ̂^†_x α(t) = e^iĤ_F tψ̂^†_x α e^-iĤ_F t for integer t (integer multiples of the period). Utilizing Eq. (<ref>), we can derive the following expression: Û(1,0,0) ψ̂^†_x α(t) Û^† (1,0,0) = ψ̂^†_x' α'(t'), where (t',x')^T = A(t,x)^T, and t,x,t',x' are all integers. The transformation Û(1,0,0) induces changes in both the spatial and temporal coordinates of the field operators, which are determined by the matrix A. The propagator of particles in band-α is defined as G_α(t_1 x_1,t_2x_2)=-i θ(t_1-t_2) ⟨[ψ̂_x_1α(t_1), ψ̂^†_x_2α(t_2)]_±⟩, where the plus (minus) sign corresponds to fermions (bosons), and θ represents the Heaviside function. The coordinates t_1,x_1,t_2,x_2 are all integers. The angle brackets ⟨⟩ denote the expectation value with respect to the vacuum state. Due to discrete translational symmetry, G_α depends only on the difference Δ t=t_1-t_2 and Δ x=x_1-x_2 for integer coordinates. Using Eq. (<ref>), we immediately find: G_α(Δ t,Δ x) = G_α'(Δ t',Δ x'), where (Δ t',Δ x')^T=A (Δ t,Δ x)^T. This equation explains how the mixing symmetry manifests in the particle propagator. For α'=α (a singlet band in the 𝒫_2 class), the propagator must remain invariant after a linear operation A on the spacetime coordinates, imposing a strong constraint on the propagator. For α'≠α, the propagator of band-α after the coordinate transformation becomes the propagator of band-α'. Thus, Eq. (<ref>) establishes a connection between propagators of different bands. In experiments, what can be measured is the wave function, or more precisely, the absolute magnitude of the wave function. The wave function is directly linked to the propagator. If we initially locate a particle at position x=0 at time t=0, its wave function at a later time satisfies, according to Eq. (<ref>) and (<ref>), Ψ_α(t,x) = Ψ_α'( t',x'), where ( t',x')^T = A ( t,x)^T and t,x are arbitrary integers. An alternative way to prove this result is by using Ψ_α (t,x) = ∑_k e^ikx-itE_α(k)/L and Eq. (<ref>).
http://arxiv.org/abs/2307.00366v1
20230701153130
Enhancing the EEG Speech Match Mismatch Tasks With Word Boundaries
[ "Akshara Soman", "Vidhi Sinha", "Sriram Ganapathy" ]
eess.AS
[ "eess.AS" ]
The high-quality single-cloud reddening curve sample R. Siebenmorgen1, J. Smoker2,3, J. Krełowski4, Karl Gordon5, and Rolf Chini6,7,8 Received: August 1, 2022/ Accepted: July 1, 2023 ========================================================================================= Recent studies have shown that the underlying neural mechanisms of human speech comprehension can be analyzed using a match-mismatch classification of the speech stimulus and the neural response. However, such studies have been conducted for fixed-duration segments without accounting for the discrete processing of speech in the brain. In this work, we establish that word boundary information plays a significant role in sentence processing by relating EEG to its speech input. We process the speech and the EEG signals using a network of convolution layers. Then, a word boundary-based average pooling is performed on the representations, and the inter-word context is incorporated using a recurrent layer. The experiments show that the modelling accuracy can be significantly improved (match-mismatch classification accuracy) to 93% on a publicly available speech-EEG data set, while previous efforts achieved an accuracy of 65-75% for this task. Index Terms: Speech-EEG match mis-match task, auditory neuroscience, word segmentation, speech comprehension. § INTRODUCTION Humans have the unique ability to communicate through speech. While speech comprehension is mastered from a young age, many neural processes enabling this seamless activity are unknown. One of the simplest ways of furthering the understanding of speech comprehension is through the recording of neural responses using electroencephalography (EEG). The EEG is a non-invasive neural imaging technique that measures electrical activity in the brain by placing electrodes on the scalp <cit.>. It has been demonstrated that the EEG signal recorded during a speech listening task contains information about the stimulus <cit.>. One can investigate how the brain comprehends continuous speech by developing models that relate the speech with the EEG signal using machine learning techniques <cit.>. The early attempts explored linear models for relating continuous natural speech to EEG responses <cit.>. They can be categorized into three different types - forward models, backward models, or hybrid models. The forward models predict EEG from speech stimuli, while the backward models reconstruct speech from EEG responses. In many studies, the correlation between the predicted and ground truth signal is considered as a measure of neural tracking <cit.>. However, linear models may be ill-equipped to capture the non-linear nature of the auditory system. Recently, deep neural networks have been employed to compare and analyze speech stimuli and EEG responses. Several studies have shown promising results with deep learning models for EEG-speech decoding <cit.>. These advancements in speech decoding from the brain will also be beneficial for the development of brain-computer interfaces(BCIs). In many of the computational approaches, the speech envelope has been the most commonly used feature <cit.>. Other features such as spectrograms <cit.>, phonemes <cit.>, linguistic features <cit.>, and phono-tactics <cit.> have also been explored with linear forward/backward models. Lesenfants et al.  <cit.> demonstrated that combining phonetic and spectrogram features improves the EEG-based speech reception threshold (SRT) prediction. While forward/backward models and correlation tasks were previously explored, the match mismatch tasks have been recently investigated as an alternative task <cit.>. Here, the task is to identify whether a portion of the brain response (EEG) is related to the speech stimulus that evoked it. In the previous studies using the match mismatch task, the auditory stimulus and speech of a fixed duration (5s) are processed through a series of convolutional and recurrent layers <cit.>. In this work, we argue that the prior works on speech-EEG match mismatch tasks are incomplete without considering the fragmented nature of speech comprehension. While speech and EEG signals are continuous, the neural tracking of speech signals is impacted by the linguistic markers of speech <cit.>. The most striking of this evidence comes from models of word surprisal <cit.> with N400 response evoked for unpredictable words <cit.>. In the simplest form, we hypothesize that the task of relating continuous speech with EEG must also include word-level segmentation information. We propose a deep learning model to perform match mismatch classification tasks on variable length inputs using word boundary information. The model consists of convolutive feature encoders of both the speech and EEG inputs. Further, the word segmentation information, obtained by force-aligning the speech with the text data using a speech recognition system, is incorporated in the feature outputs through a word-level pooling operation. The pooled representations are further modelled with recurrent long short-term memory (LSTM) layers to model the inter-word context. The final output from the LSTM network for the speech and EEG streams is used in the match mismatch classification task. The major contributions of this paper are: * Proposing a match mismatch classification model that can incorporate word boundary information. * Proposing a loss function based on Manhattan distance for the match mismatch task. * Experimental illustration of the effectiveness of the model, where the classification performance is significantly improved over the prior works. * A detailed set of ablation experiments to elicit the impact of word boundary information in speech EEG matching task. § METHODS §.§ Dataset We experiment with a publicly available speech-EEG data set[https://doi.org/10.5061/dryad.070jc] released by Broderick et al. <cit.>. It contains electroencephalographic (EEG) data recorded from 19 subjects as they listened to the narrative speech. The subjects listened to a professional audio-book narration of a well-known work of fiction read by a single male speaker. The data consists of 20 trials of roughly the same length, with each trial containing 180s of audio. The trials preserved the chronology of the storyline without repetitions or breaks. The sentence start and end time, and the word-level segmentation of the speech recordings are provided. The word segmentation is obtained using a speech recognition-based aligner <cit.>. The EEG data were acquired using the 128-channel BioSemi system at a sampling rate of 512Hz, while the audio data was played at 16kHz. Overall, the speech-EEG data amounted to a duration of 19 hours. §.§ EEG Preprocessing The CNSP Workshop 2021 guidelines[https://cnspworkshop.net/resources.html] served as the basis for the EEG pre-processing pipeline. It is implemented using the EEGLAB software <cit.>.The EEG signal is band-pass filtered between 0.5-32 Hz. Then it is down-sampled to 64Hz. After removing noisy channels (determined using the channel level statistics), the EEG channels are re-referenced to the mastoids. The data from each channel is also normalized by computing the z-score. The EEG pre-processing code and the codes used for further analysis discussed in this paper publicly available[https://github.com/iiscleap/EEGspeech-MatchMismatch]. §.§ Acoustic Feature - Mel Spectrogram The mel spectrogram of the speech signal is used as the stimulus feature. The mel spectrogram is computed for each sentence. A mel filter bank with 28 filters distributed in the mel-scale ranging from 0-8kHz frequency is used. The input audio is pre-emphasized with a factor of 0.97 before windowing. In order to obtain speech features at a sampling frequency of 64Hz, the spectrogram computation uses a Hamming window function of the width 31.25ms with half overlap. §.§ Match-mismatch classification task The accuracy of a match-mismatch classification task is employed in this study as a measure of the neural tracking of speech. fig:mmtask illustrates this paradigm in detail. The classification model is trained to relate the speech segment to its corresponding EEG response. In this study, the segment is chosen to be a sentence. We also compare with prior works <cit.>, which perform this task at the sentence level. The time-synchronized stimulus of the EEG response segment is the matched speech. Another sentence from the same trial of data collection is chosen as the mismatched speech. Selecting mismatched samples from the same trial makes the classification task challenging enough to encourage the model to learn the stimulus-response relationships. This sampling approach also avoids the chances of memorizing the speech features along with its label. The matched EEG response for these speech sentences is also included in the mini-batch training to ensure that memorisation is disallowed. §.§ Model architecture We employed different modelling paradigms to analyze the encoding of acoustic and semantic features in EEG signals. §.§.§ Baseline Model Recently, Monesi et al. <cit.> showed that convolutional neural network (CNN) and long short-term memory (LSTM) based architectures outperform linear models for modelling the relationship between EEG and speech. This work employed a match mismatch classification task on fixed duration windows of speech and their corresponding EEG data. The work also demonstrated that mel spectrogram features of the speech stimulus provide the best neural tracking performance compared to other representations like speech envelope, word embedding, voice activity and phoneme identity <cit.>. They have performed the match mismatch task of 5s duration segments with 90% overlap between successive frames. The prior works <cit.> use an angular distance between EEG and speech representations, average pooling over time, and a sigmoid operation. The model is trained with binary cross entropy loss <cit.>. We use this approach as the baseline setup for the proposed framework. §.§.§ Proposed match mismatch Model The speech signal representation 𝐒 is the mel-spectrogram of dimension 28 × T, where T denotes the duration of a speech sentence at 64Hz. Similarly, the EEG data for the same sentence is denoted as 𝐄, and it is of dimension 128 × T. Both the speech and the EEG features are processed through a parallel neural pipeline, as depicted in fig:speech_nw, without any weight sharing. This sub-network consists of a series of convolutional layers and LSTM layers. The convolutional layers implement 1-D and 2-D convolutions with 1 × 8 and 16 × 9 kernel sizes, respectively. The 1-D and 2-D layers have 8 and 16 kernels, respectively. Further, the 2-D CNN layers also introduce a stride of (1,3) to further down-sample the feature maps. The word boundary information available in the dataset is converted to the equivalent sampling rate (both EEG and audio representations at 64/3 Hz). The audio and EEG feature maps are average pooled at the word level using the word boundary information. As a result, for a given sentence, the EEG and speech branches generate vector representations sampled at the word level. An LSTM layer models the inter-word context from these representations. This layer is included in both the stimulus (speech) and response (EEG) pathways. The last hidden state of the LSTM layer, of dimension 32, is used as the embedding for the stimulus/response, denoted as R_s/R_e respectively. We propose the Manhattan distance between the stimulus and response embeddings. The similarity score is computed as, d(𝐄, 𝐒) = exp (- || R_e - R_s ||_1) The similarity score for the matched pair (𝐄, 𝐒^+) and mismatched pair (𝐄, 𝐒^-) are computed. The model, with a dropout factor of 0.2, is trained using a binary cross-entropy loss, with [d(𝐄, 𝐒^+), d(𝐄, 𝐒^-)] mapped to [1, 0] targets. §.§.§ Training and Evaluation Setup The dataset contained recordings from 19 subjects. All the experiments reported in this work perform subject-independent evaluation (the subjects used in training are not part of the evaluation). Further, we report the average results of 3-fold validation, with classification accuracy as the metric. The experiments are run with a batch size of 32. The models are trained using Adam optimizer with a learning rate of 0.001 and weight decay parameter of 0.0001. The models are learned with a binary cross-entropy loss. § RESULTS AND DISCUSSION §.§ Baseline model on fixed duration segments. The baseline implementation for comparison is the work reported in Monesi et al. <cit.>. This architecture is an LSTM model that operates on fixed-duration audio EEG data. All experiments are run for 20 epochs of training. The result of the model with fixed duration frames is given in tab:fixed. In order to increase the amount of training data, we also use 90% overlap between segments. §.§ Baseline model at sentence level The baseline model architecture is implemented for fixed-duration segments in training and testing. In order to operate at the sentence level, we have modified the dot product operation as element-wise multiplication followed by an average pooling. This score is passed through the sigmoid function, and the model is learned on sentence-level audio-EEG pairs. For the mismatch condition, a random speech spectrogram is paired with the EEG to generate the score. These results are reported in Table <ref>. §.§ Proposed model with sentence level processing The results with the proposed model are also reported in Table <ref>. We compare three different similarity scoring approaches, i) Angular (Cosine) similarity, ii) Negative L2 distance (Euclidean) and iii) proposed Manhattan similarity (Eq. <ref>). As seen in the results, the Euclidean and Manhattan similarity improves over the cosine similarity. The proposed EEG-speech match-mismatch classifier model reports an average accuracy of 93.97%, which is statistically significantly higher than the baseline model's sentence-level performance (Wilcoxon signed-rank test, p<1e-4). The epoch-wise accuracy for test fold-1 is also illustrated in fig:result_main. §.§ Mismatch sample selection for sentence processing Previous match-mismatch EEG-speech studies <cit.> dealt with fixed-duration speech and EEG segments. Cheveigne et al. <cit.> used an unrelated random segment as a mismatched sample, while studies like <cit.> employ a neighbouring segment as the mismatched sample. The sampling of the mismatched segments from the same trial ensures that the distribution of the matched and mismatched segments is similar. We explore a similar strategy for sentence-level analysis by selecting the neighbouring sentence in the same trial as the mismatched sample. Table <ref> shows how the mismatch selection strategy affects the classification accuracy. The average accuracy has a slight degradation when the next sentence is used as the mismatch sample. §.§ Importance of accurate word boundaries We conducted several ablation tests to understand the impact of the word boundary information. The model is fed with random word boundaries in the first set of experiments. Each sentence is assumed to contain a fixed number of words and their boundaries are chosen at random. The results are reported in Figure <ref>. The accuracy improves gradually when the number of word boundaries is increased, even though they are random. The accuracy of the experiment using 8 words in a sentence is 64%, which is significantly lower than the model's performance with accurate boundary information (Wilcoxon signed-rank test, p<0.0001). The final experiment shown in fig:result_randomW assumes a random number of words in each sentence with random boundaries, and it provided an accuracy of 60%. In the second set of experiments, we provide accurate word boundary information but skip the word boundary information at every n-th word. These results are reported in Table <ref>. For example, Skip-3 in this table corresponds to removing the word boundary inputs at every 3-rd entry. The pooling is done with the rest of the available word boundaries for these experiments. As seen in Table <ref>, the results with a higher value of n (of skip-n experiments), approach the setting without any removal (accuracy of 93.97%). It is also noteworthy that, even with the Skip-2 setting (word boundary information available for every alternate word), the performance is 82.3%, significantly better than the baseline model. This study also demonstrates that accurate word boundary information significantly impacts the match mismatch classification, which further illustrates that the EEG signal encodes the word level tracking of speech. § CONCLUSIONS In this paper, we have attempted to validate the hypothesis that speech comprehension in the brain is segmented at the word-level in the EEG responses to continuous speech. For this task, we developed a deep neural network model consisting of convolutional encoders, word-level aggregators and recurrent layers. A novel loss function for this task based on Manhattan similarity is also proposed. The proposed model validated the hypothesis by improving the accuracy of match-mismatch classification of speech and EEG responses at the sentence level. The incorporation of word boundary information yields statistically significant improvements compared to the baseline model, demonstrating the importance of this information in the neural tracking of speech. Moreover, the proposed model handles variable length inputs. Overall, this model can have potential applications in various domains, including speech recognition, brain-computer interfaces, and cognitive neuroscience. Future research could explore this model's extension to incorporate multi-modal inputs in the form of textual data in addition to the speech spectrogram. IEEEtran
http://arxiv.org/abs/2306.07009v1
20230612102430
$1/f$ noise from the trapping-detrapping process of individual charge carriers
[ "Aleksejus Kononovicius", "Bronislovas Kaulakys" ]
math.PR
[ "math.PR", "cond-mat.stat-mech" ]
1/f noise from the trapping–detrapping process of individual charge carriers Aleksejus Kononoviciusemail: mailto:[email protected]@tfai.vu.lt; website: <http://kononovicius.lt>, Bronislovas Kaulakys Institute of Theoretical Physics and Astronomy, Vilnius University =================================================================================================================================================================== We consider a signal generated by a single charge carrier drifting through the homogeneous condensed matter. We assume that the trapping centers are distributed uniformly across the material so that the trapping process is a homogeneous Poisson process. We assume that the detrapping rate of an individual trapping center is random and uniformly distributed. We show that under these assumptions, and if the trapping rate is low in comparison to the maximum detrapping rate, 1/f noise in the form of Hooge's relation is obtained. Hooge's parameter is shown to be a ratio between the characteristic trapping rate and the maximum detrapping rate. § INTRODUCTION Many materials, devices, and systems exhibit different kinds of fluctuations or noise <cit.>. Most widely known and well understood are the white noise and the Brownian noise. White noise is characterized by absence of temporal correlations, and flat power spectral density of S(f)∼1/f^0 form. Examples of the white noise include thermal and shot noise. Thermal noise is known to arise from the random motion of the charge carriers. It occurs at any finite temperature regardless of whether the current flows. Shot noise, on the other hand, is a result of the discrete nature of the charge carriers and the Poisson statistics of waiting times before each individual detection of the charge carrier. The Brownian noise is a temporal integral of the white noise, and thus exhibits no correlations between the increments of the signal, it is characterized by a power spectral density of S(f)∼1/f^2 form. The nature of the 1/f noise (also referred to as flicker noise or pink noise), characterized by power spectral density of S(f)∼1/f form, remains open to discussion despite almost 100 years since the first reports <cit.>. This kind of noise is of particular interest as it is observed across various physical <cit.>, and non–physical <cit.> systems. As far as the 1/f noise cannot be obtained by the simple procedure of integration, differentiation, or simple transformation of some common signals, and the general mechanism generating such signals has not yet been identified, there is not generally accepted solution to this 1/f noise problem. The oldest explanation for 1/f noise involves the superposition of Lorentzian spectra <cit.>. 1/f noise as a sum of Lorentzians also arises from the random telegraph signals <cit.>, and from the Brownian motion with wide–range distribution of relaxations <cit.>. These approaches are often limited to the specific systems being modeled, or require quite restrictive assumptions to be satisfied <cit.>. In the recent decades, series of models for the 1/f noise based on the specific, autoregressive AR(1), point process <cit.>, and the agent–based model <cit.>, yielding nonlinear stochastic differential equation <cit.> was proposed (see <cit.> for a recent review). Another more recent trend relies on scaling properties and non–linear transformations of signals <cit.>. These models on the other hand prove to be rather more abstract, and therefore more similar to the long–range memory models found in the mathematical literature, such as fractional Brownian motion <cit.> or ARCH models <cit.>. These and other similar models of 1/f noise are hardly applicable to the description and explanation of the mostly observable 1/f noise in the condensed matter. On the other hand, for a homogeneous conducting material Hooge proposed an empirical relation for the 1/f noise dependence on the parameters of the material <cit.>, S(f)=I̅^2α_H/Nf. Where I̅ stands for the average current flowing through the cross–section of the conducting material, N is the number of charge carriers, and α_H is the titular Hooge parameter. There were numerous attempts to derive or event explain the structure of the Hooge's relation <cit.>. In <cit.> Hooge's parameter was derived from an autoregressive point process model. More recent derivations of the Hooge's parameter based on the Poisson generation–recombination process modulated by the random telegraph noise were conducted in <cit.>. These and similar models cannot be directly applied to describe and explain the widespread 1/f noise in the condensed matter. Here we propose a model of 1/f noise in the condensed matter based on the trapping and the detrapping of individual charge carriers. As far as the square of the average current I̅^2 is proportional to the squared number of charge carriers N^2, Hooge's relation implies that the intensity of 1/f noise is proportional to the number of charge carriers N. Therefore, as the first approximation we can consider the noise originating from the flow of individual charge carriers. It is known that the drift, and the diffusion, of the charge carries does not yield 1/f noise <cit.>. Therefore, we consider the drift of the charge carriers interrupted by their entrapment in the trapping centers. We show that, if the detrapping rates of individual trapping centers are heterogeneous and uniformly distributed, 1/f noise is obtained. In this model, the signal generated by a single charge carrier is similar to the signal from non–overlapping rectangular pulses <cit.>. Using the results of Ref. <cit.> we derive Hooge's relation, and show that Hooge's parameter is a ratio between the characteristic trapping rate and the maximum detrapping rate. This paper is organized as follows. In Section <ref> we introduce a general physical model for 1/f noise in the condensed matter based on the trapping–detrapping process of a single charge carrier. In Section <ref> we discuss the implications of finite experiments and simulations. Namely, we show that the power spectral density produced by a single charge carrier may exhibit spurious low–frequency cutoff. This cutoff disappears, if the current generated by a large number of charge carriers is considered. Finally, Hooge's empirical relation and Hooge's parameter value for the proposed model is derived in Section <ref>. The main results are summarized in Section <ref>. § GENERAL MODEL FOR 1/F NOISE IN A HOMOGENEOUS CONDENSED MATTER Here we consider trapping–detrapping noise generated by a single charge carrier (e.g., electron). We consider only transitions between the conduction band and the trapping centers as producing fluctuations in the electric current. Under these conditions, the electric current generated by a single charge carrier is a sequence of non–overlapping rectangular pulses. The pulses are observed when the charge carrier drifts through the conduction band, thus generating the electric current. The pulses are separated by the gaps which correspond to the moments when the charge carrier remains trapped by any of the trapping centers. A sample signal generated by a single charge carrier is shown in Fig. <ref>. In Fig. <ref> and further in the paper τ_i will stand for i-th gap duration (detrapping time), θ_i will stand for i-th pulse duration (trapping time), and a will stand for the height of the pulses. The height of the pulses a has a fixed predetermined value as it represents the electric current generated by a drift of a single charge carrier. Gap and pulse durations are stochastic variables sampled from the specified gap and pulse duration distributions. The power spectral density of a signal generated by a single charge carrier I_1(t) (subscript 1 is added to emphasize that a single charge carrier is considered) composed of non–overlapping pulses with profiles A_k(t) is given by S_1(f)=lim_T→∞⟨2/T|∫_0^TI_1(t)e^-2π ft t|^2⟩ =lim_T→∞⟨2/T|∑_ke^-2π ft_kℱ{ A_k(t-t_k)}|^2⟩ , where T is the observation time, t_k is the start time of k-the pulse (corresponds to the time of k-th detrapping from the trapping center), and ℱ{ A_k(t-t_k)} stands for the Fourier transform of the k-th pulse profile A_k(t-t_k). In the specific case considered here the pulse profiles differ only in their duration θ_k. If the pulse and gap durations are independent, then the power spectral density of the signal is determined purely by the height of the pulses a and the pulse and gap duration distributions (let p_θ(θ) and p_τ(τ) be their respective probability density functions). In this case, the general formula for the power spectral density is given by <cit.> S_1(f)=a^2ν̅/π^2f^2[(1-χ_θ(f))(1-χ_τ(f))/1-χ_θ(f)χ_τ(f)]. In the above χ_τ(f) =⟨ e^2π fτ⟩ =∫_0^∞e^2π fτp_τ(τ)τ, χ_θ(f) =⟨ e^2π fθ⟩ =∫_0^∞e^2π fθp_θ(θ)θ, are the characteristic functions of the gap and pulse duration distributions respectively, and ν̅ is the mean number of pulses per unit time. For the ergodic processes, and given a long observation time, the value of ν̅ is trivially derived from the mean gap and mean burst durations, i.e., ν̅=1/⟨θ⟩ +⟨τ⟩. For the nonergodic processes, or if the observation time is comparatively short, the value of ν̅ can be approximated by calculating the means from the truncated distributions, or it may be defined purely empirically, i.e., ν̅=K/T (here K is the number of observed pulses, and T is the total observation time). Typically when trapping–detrapping processes are considered <cit.> it is assumed that both τ_i and θ_i are sampled from the exponential distributions with rates γ_τ and γ_θ respectively. Characteristic function of the exponential distribution, probability density function of which is given by p(τ)=γexp(-γτ), with event rate γ, is given by χ(f)=∫_0^∞γ e^2π fτ-γττ=γ/γ-2π f. Inserting Eq. (<ref>) as the characteristic function for pulse and gap duration distributions into Eq. (<ref>) yields Lorentzian power spectral density <cit.>. Notably, there were prior works which have assumed that τ_i, θ_i, or both are sampled from the distributions with power–law tails <cit.>. Here, let us assume that the detrapping times in the individual trapping centers are sampled from an exponential distribution with a unique detrapping rate γ_τ^(i). This would correspond to the individual trapping centers having different potential depths or trapping to different quantum states of same trapping center. As well as a result of the redistribution through the states with small bounding energy as an outcome of the interaction with phonons, electrons, radiation, etc. If γ_τ^(i) is uniformly distributed in the interval from γ_min to γ_max, then the probability density function of the detrapping time distribution is given by p(τ)=1/γ_max-γ_min∫_γ_min^γ_maxγ_τexp(-γ_ττ)γ_τ=(1+γ_minτ)exp(-γ_minτ)-(1+γ_maxτ)exp(-γ_maxτ)/(γ_max-γ_min)τ^2. This probability density function saturates for the short detrapping times, τ≪1/γ_max. For the longer detrapping times, τ≫1/γ_min, it decays as an exponential function. In the intermediate value range, 1/γ_max≪τ≪1/γ_min, this probability density function has the τ^-2 asymptotic behavior, which is already known to lead to 1/f noise <cit.>. The benefit of this formulation is that it allows to see how the τ^-2 asymptotic behavior can emerge in a homogeneous condensed matter. Experimentally τ^-2 asymptotic behavior is observable in quantum dots, nanocrystal, nanorod, and other semiconductor materials <cit.>, while the detrapping times can range from picoseconds to several months. The asymptotic behavior of Eq. (<ref>) can be examined in Fig. <ref> where it is represented by a red curve. Fig. <ref> also highlights contributions of some of the individual trapping centers, detrapping time distributions of which are plotted as dashed black curves. Unlike the simple power–law distribution, this gap duration distribution does not require the introduction of any arbitrary cutoffs. The parameters of this gap duration distribution have explicit physical meaning. Furthermore, the statistical moments are well–defined and have compact analytical forms. The mean of the distribution is given by: ⟨τ⟩ =1/γ_max-γ_minln(γ_max/γ_min), while the higher order moments are given by ⟨τ^q⟩ =q!/γ_max-γ_min×γ_min^1-q-γ_max^1-q/q-1. The characteristic function of the gap duration distribution can be obtained either by inserting Eq. (<ref>) into Eq. (<ref>), or by averaging over the characteristic functions of the exponential distribution, Eq. (<ref>). Latter approach yields the expression quicker: χ_τ(f)=1/γ_max-γ_min∫_γ_min^γ_maxγ_τ/γ_τ-2π fγ_τ=1+2π f/γ_max-γ_minln(γ_max-2π f/γ_min-2π f). If the interval of the possible detrapping rates is broad γ_min≪γ_max, then for γ_min≪2π f≪γ_max the characteristic function can be approximated by χ_τ(f)≈1+2π f/γ_maxln(1+γ_max/2π f)≈1-2π f/γ_max[π/2+ln(2π f/γ_max)]. Inserting Eq. (<ref>) into Eq. (<ref>) we have: S_1(f)=2a^2ν̅/πγ_maxf[(1-χ_θ(f))[π/2+ln(2π f/γ_max)]/1-χ_θ(f){ 1-2π f/γ_max[π/2+ln(2π f/γ_max)]}]. Assuming that 2π f/γ_max[π/2+ln(2π f/γ_max)]≪1, which is supported by an earlier assumption that 2π f≪γ_max, allows to simplify the above to S_1(f)≈a^2ν̅/γ_maxf. This approximation should hold well for γ_min≪2π f≪γ_max, and should not depend on the explicit form of χ_θ(f) unless χ_θ(f)≈1 for at least some of the frequencies in the range. Let us examine a specific case when the trapping centers are uniformly distributed within the material, and therefore the trapping process can be assumed to be a homogeneous Poisson process. Inserting the characteristic function of the exponential distribution, Eq. (<ref>), as the characteristic function of the pulse duration distribution into Eq. (<ref>) yields S_1(f)=4a^2ν̅/γ_θ^2[1/1-χ_τ(f)-2π f/γ_θ]. Then inserting the characteristic function of the proposed detrapping time distribution, Eq. (<ref>), into Eq. (<ref>) yields S_1(f)=a^2ν̅γ_max/γ_θ^2f×1/(π/2)^2+[γ_max/γ_θ-ln(2π f/γ_max)]^2. If the maximum detrapping rate is large in comparison to the trapping rate, i.e. γ_max/γ_θ≫π/2 and γ_max/γ_θ≫-ln(2π f/γ_max), then we recover Eq. (<ref>). In Fig. <ref> the power spectral density of a simulated signal with comparatively large detrapping rates is shown as a red curve. § LOW–FREQUENCY CUTOFF IN FINITE EXPERIMENTS The obtained approximation, Eq. (<ref>), holds for infinite observation time limit (single infinitely long signal) or infinite experiment limit (infinitely many signals with finitely long observation time). If either of the limits doesn't hold, then the range of frequencies over which the pure 1/f noise is observed becomes narrower. In finite experiments the process will not reach steady state, and therefore the cutoff frequencies will depend not on the model parameter values γ_min and γ_max, but on the smallest and the largest γ_τ^(i) values actually observed during the experiment. The difference between γ_max and the largest γ_τ^(i) is negligible, because the pure 1/f noise will be observed only if γ_max is a relatively large number. On the other hand the relative difference between γ_min and smallest γ_τ^(i) might not be negligible. Let us estimate the expected value of the smallest γ_τ^(i) in a finite experiment. In the model introduced in the previous section γ_τ^(i) is sampled from the uniform distribution with [γ_min,γ_max] range of possible values. It is known that, for x_i sampled from the uniform distribution with [0,1] range of possible values, the smallest x_i observed in the sample of size K is distributed according to the Beta distribution with the shape parameters α=1 and β=K <cit.>. Thus the expected value of the smallest x_i is given by ⟨min{ x_i} _K⟩ =α/α+β=1/K+1. Rescaling the range of possible values to [γ_min,γ_max] yields γ_min^(eff)=⟨min{γ_τ^(i)} _K⟩ =γ_max-γ_min/K+1+γ_min. As K corresponds to the number of pulses in the signal, we have that K=ν̅T=T/⟨θ⟩ +⟨τ⟩ and γ_min^(eff)=(γ_max-γ_min)⟨θ⟩ +⟨τ⟩/⟨θ⟩ +⟨τ⟩ +T+γ_min. In the above ⟨θ⟩ is effectively a model parameter as it is trivially given by ⟨θ⟩ =1/γ_θ, while ⟨τ⟩ is a derived quantity which has a more complicated dependence on the model parameters γ_min and γ_max (see Eq. (<ref>)). If the range of possible γ_τ^(i) values is broad, i.e., γ_max≫γ_min, we have γ_min^(eff)≈γ_maxγ_max⟨θ⟩ +lnγ_max/γ_min/γ_max(⟨θ⟩ +T)+lnγ_max/γ_min+γ_min. The above applies to the ergodic case with γ_min≫1/T. In the nonergodic case, for γ_min≲1/T, it would impossible to distinguish between the cases corresponding to the different γ_min values. Therefore, for the nonergodic case, γ_min can be replaced by 1/T yielding γ_min^(eff)≈γ_maxγ_max⟨θ⟩ +ln(γ_maxT)/γ_max(⟨θ⟩ +T)+ln(γ_maxT)+1/T≈1+γ_max⟨θ⟩ +ln(γ_maxT)/T. For relatively long pulse durations, ⟨θ⟩≫ln(γ_maxT)/γ_max, we have that γ_min^(eff)≈1+γ_max⟨θ⟩/T≈γ_max/γ_θT. From the above, it follows that low–frequency cutoff is always present in singular experiments with one charge carrier, and with finite observation time T. The cutoff will be observed at a frequency close to γ_min^(eff). As can be seen in Fig. <ref>, the cutoff moves to the lower frequencies as T increases, the power spectral density is flat for the lowest observable natural frequencies, 1/T<f≲γ_max/γ_θT. If multiple independent experiments (let R be the number of experiments) with finite observation time T are performed and the resulting spectral densities are averaged, then the total number of pulses increases by a factor R yielding γ_min^(eff)=(γ_max-γ_min)⟨θ⟩ +⟨τ⟩/⟨θ⟩ +⟨τ⟩ +RT+γ_min≈γ_max⟨θ⟩/RT+1/T=R+γ_max⟨θ⟩/RT. For R≫γ_max⟨θ⟩, no low–frequency cutoff will be noticeable. As shown in Fig. <ref>, low–frequency cutoff disappears as the experiments are repeated and the obtained power spectral densities are averaged. We have derived Eq. (<ref>) considering the current generated by a single charge carrier. In many experiments the number of charge carriers N will be large, N≫1. Consequently, from the Wiener–Khinchin theorem <cit.> it follows that performing independent experiments is equivalent to observing independent charge carriers. Therefore for N≫γ_max⟨θ⟩ no low–frequency cutoff will be noticeable. Though in this case, the power spectral densities of the signals generated by single charge carriers add up instead of averaging out, yielding a minor generalization of Eq. (<ref>) S_N(f)≈Na^2ν̅/γ_maxf. In the above ν̅ is strictly the mean number of pulses per unit time generated by a single charge carrier. As can be seen in Fig. <ref> (a), the signal generated by multiple independent charge carriers is no longer composed of non–overlapping pulses, although it retains discrete nature as individual charges drift freely or are trapped by the trapping centers. The amplitude and the slope of the power spectral density are well predicted by Eq. (<ref>) (as seen in Fig. <ref> (c)). The distribution of the signal's amplitude would be expected to follow the Binomial distribution with sample size N and success probability (probability that the charge carrier is free) p_F=⟨θ⟩/⟨θ⟩ +⟨τ⟩≈1-⟨τ⟩/⟨θ⟩. The fit by the Binomial distribution shown in Fig. <ref> (b) is not perfect, because the nonergodic case is simulated and ⟨τ⟩ is ill–defined, but predicts the overall shape of the probability distribution rather well. For γ_min≫1/T the fit would be much better. Notably, with larger N and under noisy observation, the Binomial distribution predicted by the model will quickly become indistinguishable from the Gaussian distribution. While in some cases 1/f noise is known to behave as a non–Gaussian process, most often it is found to exhibit Gaussian fluctuations <cit.>. § DERIVATION OF HOOGE'S EMPIRICAL RELATION AND HOOGE'S PARAMETER By comparing Hooge's relation Eq. (<ref>) to Eq. (<ref>) we see that α_H=N^2a^2ν̅/γ_maxI̅^2. As the height of the pulses a corresponds to the current generated by a single charge carrier we have a=qv_c/L, where q stands for the charge held by the carrier, v_c is the free drift velocity, and L is the length of the conducting material. Expression for a can be rewritten in terms of the average current flowing through the cross–section of the conducting material σ_M I̅=σ_Mnqv_d, where n stands for the density of the charge carriers (i.e., n=N/Lσ_M), and v_d is the average velocity of the charge carriers. The average velocity is related to the free drift velocity via the fraction of time the charge carrier spends drifting v_d=⟨θ⟩/⟨θ⟩ +⟨τ⟩v_c=ν̅⟨θ⟩ v_c. Consequently we have a=I̅/Nν̅⟨θ⟩. Inserting Eq. (<ref>) into Eq. (<ref>) yields the expression of the Hooge's parameter in terms of the characteristic trapping rate and the maximum detrapping rate: α_H=1/ν̅⟨θ⟩ ^2γ_max≈γ_θ/γ_max=⟨τ_min⟩/⟨θ⟩. In the above ⟨τ_min⟩ =1/γ_max is the expected gap duration generated when a charge carrier is trapped by the shallowest trapping center. The purer materials (i.e., ones with lower trapping center density n_c) will have lower α_H values, as the trapping rate is given by γ_θ=⟨σ_cv_c⟩ n_c (here σ_c is the trapping cross–section). Consequently the approximations for the power spectral density generated by the proposed model, Eqs. (<ref>) and (<ref>), can be rewritten in the same form as Hooge's empirical relation. Inserting Eq. (<ref>) into Eq. (<ref>) yields: S_N(f)=I̅^2γ_θ/γ_maxNf. § CONCLUSIONS We have proposed a general physical model of 1/f noise based on the trapping–detrapping process in a homogeneous condensed matter. Unlike in the previous works, we have assumed that the detrapping rate of each trapping center is random, and sampled from the uniform distribution. This assumption leads to a power–law distribution of the detrapping times Eq. (<ref>), which arises from a superposition of exponential detrapping time distributions representing the individual trapping centers with their own detrapping rates (see Fig. <ref>). Under this assumption, regardless of the details of the trapping process, as long as the trapping process is slow in comparison to the detrapping process, pure 1/f noise in a form of Hooge's empirical relation is obtained Eq. (<ref>). Under the assumptions of the proposed model, Hooge's parameter is just a ratio between the rate parameters of the trapping and the detrapping processes Eq. (<ref>). In Section <ref>, we have noted that as long as a finite signal generated by a single charge carrier is considered, the power spectral density may exhibit spurious low–frequency cutoff. The cutoff width is of the same order of magnitude as γ_max/γ_θ. This cutoff disappears when the power spectral density is averaged over a large number of experiments of finitely long observation time, or when the power spectral density is generated by a large number of independent charge carriers over finitely long observation time. In the latter case the distribution of the signal's amplitude follows Binomial distribution, which under noisy observations will quickly become indistinguishable from the Gaussian distribution. Future extensions of the approach presented here could include a more detailed analysis of multiple charge carrier dynamics, and allowing the detrapping rates to come from a discrete uniform distribution with a reasonably small number of possible detrapping rate values. All of the code used to perform the reported numerical simulations is available at <https://github.com/akononovicius/flicker-trap-detrap-individual-charge>. § AUTHOR CONTRIBUTIONS Aleksejus Kononovicius: Software, Validation, Writing – Original Draft, Writing – Review & Editing, Visualization. Bronislovas Kaulakys: Conceptualization, Methodology, Writing – Original Draft, Writing – Review & Editing. 10 url<#>1urlprefixURL href#1#2#2 #1#1 Kogan1996CUP S. Kogan, Electronic noise and fluctuations in solids, Cambridge University Press, 1996. https://doi.org/10.1017/CBO9780511551666 doi:10.1017/CBO9780511551666. Lowen2005Wiley S. B. Lowen, M. C. Teich, Fractal-Based Point Processes, Wiley, 2005. https://doi.org/10.1002/0471754722 doi:10.1002/0471754722. VanKampen2007NorthHolland N. G. van Kampen, Stochastic process in physics and chemistry, North Holland, Amsterdam, 2007. Johnson1925PR J. B. Johnson, The Schottky effect in low frequency circuits, Physical Review 26 (1925) 71–85. https://doi.org/10.1103/PhysRev.26.71 doi:10.1103/PhysRev.26.71. Schottky1926PR W. Schottky, Small-shot effect and flicker effect, Physical Review 28 (1) (1926) 74–103. https://doi.org/10.1103/PhysRev.28.74 doi:10.1103/PhysRev.28.74. Voss1976PRL R. F. Voss, J. Clarke, 1/f noise from systems in thermal qquilibrium, Physical Review Letters 36 (1) (1976) 42–45. https://doi.org/10.1103/PhysRevLett.36.42 doi:10.1103/PhysRevLett.36.42. Dutta1981RMP P. Dutta, P. M. Horn, Low-frequency fluctuations in solids: 1/f noise, Reviews of Modern Physics 53 (1981) 497–516. https://doi.org/10.1103/RevModPhys.53.497 doi:10.1103/RevModPhys.53.497. Weissman1988RevModPhys M. B. Weissman, 1/f noise and other slow, nonexponential kinetics in condensed matter, Reviews of Modern Physics 60 (2) (1988) 537–571. https://doi.org/10.1103/RevModPhys.60.537 doi:10.1103/RevModPhys.60.537. Hooge1981RepProgPhys F. N. Hooge, T. G. M. Kleinpenning, L. K. J. Vandamme, Experimental studies on 1/f noise, Reports on Progress in Physics 44 (5) (1981) 479–532. https://doi.org/10.1088/0034-4885/44/5/001 doi:10.1088/0034-4885/44/5/001. Hooge1994IEEE F. N. Hooge, 1/f noise sources, IEEE Transactions on Electron Devices 41 (11) (1994) 1926–1935. https://doi.org/10.1109/16.333808 doi:10.1109/16.333808. Mitin2002 V. Mitin, L. Reggiani, L. Varani, Generation-recombination noise in semiconductors, in: Noise and Fluctuation Controls in Electronic Devices, Noise and Fluctuation Controls in Electronic Devices, American Scientific Publishers, 2002. Wong2003MR H. Wong, Low-frequency noise study in electron devices: Review and update, Microelectronics Reliability 43 (4) (2003) 585–599. https://doi.org/10.1016/S0026-2714(02)00347-5 doi:10.1016/S0026-2714(02)00347-5. Fleetwood2015TNS D. M. Fleetwood, 1/f noise and defects in microelectronic materials and devices, IEEE Transactions on Nuclear Science 62 (4) (2015) 1462–1486. https://doi.org/10.1109/tns.2015.2405852 doi:10.1109/tns.2015.2405852. Careri2000PRE G. Careri, G. Consolini, Dielectric 1/f noise of proton glass on a hydrated protein surface, Physical Review E 62 (3) (2000) 4454–4456. https://doi.org/10.1103/PhysRevE.62.4454 doi:10.1103/PhysRevE.62.4454. Siwy2002PRL Z. Siwy, A. Fuliński, Origin of 1/f noise in membrane channel currents, Physical Review Letters 89 (15) (2002) 158101. https://doi.org/10.1103/PhysRevLett.89.158101 doi:10.1103/PhysRevLett.89.158101. Balandin2013NN A. A. Balandin, Low-frequency 1/f noise in graphene devices, Nature Nanotechnology 8 (2013) 549–555. https://doi.org/10.1038/nnano.2013.144 doi:10.1038/nnano.2013.144. Krisponeit2013PRB J.-O. Krisponeit, C. Kalkert, B. Damaschke, V. Moshnyaga, K. Samwer, Time-resolved resistive switching on manganite surfaces: Creep and 1/f^α noise signatures indicate pinning of nanoscale domains, Physical Review B 87 (12) (2013) 121103(R). https://doi.org/10.1103/PhysRevB.87.121103 doi:10.1103/PhysRevB.87.121103. Sadegh2014NJP S. Sadegh, E. Barkai, D. Krapf, 1/f noise for intermittent quantum dots exhibits non-stationarity and critical exponents, New Journal of Physics 16 (2014) 113054. https://doi.org/10.1088/1367-2630/16/11/113054 doi:10.1088/1367-2630/16/11/113054. Fox2021Nature Z. R. Fox, E. Barkai, D. Krapf, Aging power spectrum of membrane protein transport and other subordinated random walks, Nature Communications 12 (2021). https://doi.org/10.1038/s41467-021-26465-8 doi:10.1038/s41467-021-26465-8. Wirth2021IEEE G. Wirth, M. B. da Silva, T. H. Both, Unified compact modeling of charge trapping in 1/f noise, RTN and BTI, in: 2021 5th IEEE Electron Devices Technology and Manufacturing Conference (EDTM), IEEE, 2021, pp. 1–3. https://doi.org/10.1109/edtm50988.2021.9421005 doi:10.1109/edtm50988.2021.9421005. Voss1975Nature R. F. Voss, J. Clarke, 1/f noise in music and speech, Nature 258 (1975) 317–318. https://doi.org/10.1038/258317a0 doi:10.1038/258317a0. Kobayashi1982BioMed M. Kobayashi, T. Musha, 1/f fluctuation of heartbeat period, IEEE Transactions on Biomedical Engineering 29 (1982) 456–457. https://doi.org/10.1109/TBME.1982.324972 doi:10.1109/TBME.1982.324972. Gilden1995Science D. L. Gilden, T. Thornton, M. W. Mallon, 1/ f noise in human cognition, Science 267 (5205) (1995) 1837–1839. https://doi.org/10.1126/science.7892611 doi:10.1126/science.7892611. Wagenmakers2004PsychonBullRev E.-J. Wagenmakers, S. Farrell, R. Ratcliff, Estimation and interpretation of 1/f^α noise in human cognition, Psychonomic Bulletin & Review 11 (2004) 579–615. https://doi.org/10.3758/bf03196615 doi:10.3758/bf03196615. Li2005PRE W. Li, D. Holste, Universal 1/f noise, crossovers of scaling exponents, and chromosome-specific patterns of guanine-cytosine content in DNA sequences of the human genome, Physical Review E 71 (4) (2005) 041910. https://doi.org/10.1103/PhysRevE.71.041910 doi:10.1103/PhysRevE.71.041910. Podobnik2008PhysA B. Podobnik, D. Horvatic, A. L. Ng, H. E. Stanley, P. C. Ivanov, Modeling long-range cross-correlations in two-component ARFIMA and FIARCH processes, Physica A 387 (15) (2008) 3954–3959. https://doi.org/10.1016/j.physa.2008.01.062 doi:10.1016/j.physa.2008.01.062. Levitin2012PNAS D. J. Levitin, P. Chordia, V. Menon, Musical rhythm spectra from Bach to Joplin obey a 1/f power law, Proceedings of the National Academy of Sciences of the United States of America 109 (2012) 3716–3720. https://doi.org/10.1073/pnas.1113828109 doi:10.1073/pnas.1113828109. Bernamont1937ProcPhysSoc J. Bernamont, Fluctuations in the resistance of thin films, Proceedings of the Physical Society 49 (4S) (1937) 138–139. https://doi.org/10.1088/0959-5309/49/4S/316 doi:10.1088/0959-5309/49/4S/316. Surdin1939 M. Surdin, Fluctuations de courant thermionique et le Flicker effect, Journal de Physique et le Radium 10 (4) (1939) 188–189. https://doi.org/10.1051/jphysrad:01939001004018800 doi:10.1051/jphysrad:01939001004018800. Surdin1951 M. Surdin, Une théorie des fluctuations électriques dans les semi-conducteurs, Journal de Physique et le Radium 12 (8) (1951) 777–783. https://doi.org/10.1051/jphysrad:01951001208077700 doi:10.1051/jphysrad:01951001208077700. McWhorter1957 A. L. McWhorter, R. H. Kingston, Semiconductor surface physics, in: Proceedings of the Conference on Physics of Semiconductor Surface Physics, Vol. 207, University of Pennsylvania, Philadelphia, 1957. VanDerZiel1979AEEP A. V. D. Ziel, Flicker noise in electronic devices, in: Advances in Electronics and Electron Physics, Elsevier, 1979, pp. 225–297. https://doi.org/10.1016/s0065-2539(08)60768-4 doi:10.1016/s0065-2539(08)60768-4. Palenskis2015SRep V. Palenskis, K. Maknys, Nature of low-frequency noise in homogeneous semiconductors, Scientific Reports 5 (1) (2015). https://doi.org/10.1038/srep18305 doi:10.1038/srep18305. Kaulakys2005PhysRevE B. Kaulakys, V. Gontis, M. Alaburda, Point process model of 1/f noise vs a sum of Lorentzians, Physical Review E 71 (2005) 051105. https://doi.org/10.1103/PhysRevE.71.051105 doi:10.1103/PhysRevE.71.051105. Kaulakys1999PLA B. Kaulakys, Autoregressive model of 1/f noise, Physics Letters A 257 (1999) 37–42. https://doi.org/10.1016/S0375-9601(99)00284-4 doi:10.1016/S0375-9601(99)00284-4. Ruseckas2011EPL J. Ruseckas, B. Kaulakys, V. Gontis, Herding model and 1/f noise, EPL 96 (2011) 60007. https://doi.org/10.1209/0295-5075/96/60007 doi:10.1209/0295-5075/96/60007. Kononovicius2012PhysA A. Kononovicius, V. Gontis, Agent based reasoning for the non-linear stochastic models of long-range memory, Physica A 391 (2012) 1309–1314. https://doi.org/10.1016/j.physa.2011.08.061 doi:10.1016/j.physa.2011.08.061. Kaulakys2009JStatMech B. Kaulakys, M. Alaburda, Modeling scaled processes and 1/f^β noise using non-linear stochastic differential equations, Journal of Statistical Mechanics 2009 (02) (2009) P02051. https://doi.org/10.1088/1742-5468/2009/02/p02051 doi:10.1088/1742-5468/2009/02/p02051. Kazakevicius2021Entropy R. Kazakevicius, A. Kononovicius, B. Kaulakys, V. Gontis, Understanding the nature of the long–range memory phenomenon in socioeconomic systems, Entropy 23 (2021) 1125. https://doi.org/10.3390/e23091125 doi:10.3390/e23091125. Ruseckas2014JStatMech J. Ruseckas, B. Kaulakys, Scaling properties of signals as origin of 1/f noise, Journal of Statistical Mechanics 2014 (6) (2014) P06005. https://doi.org/10.1088/1742-5468/2014/06/p06005 doi:10.1088/1742-5468/2014/06/p06005. Kaulakys2015MPLB B. Kaulakys, M. Alaburda, J. Ruseckas, 1/f noise from the nonlinear transformations of the variables, Modern Physics Letters B 29 (2015) 1550223. https://doi.org/10.1142/S0217984915502231 doi:10.1142/S0217984915502231. Eliazar2021JPA I. Eliazar, Selfsimilar diffusions, Journal of Physics A: Mathematical and Theoretical 54 (2021) 35LT01. https://doi.org/10.1088/1751-8121/ac1771 doi:10.1088/1751-8121/ac1771. Kazakevicius2023PRE R. Kazakevičius, A. Kononovicius, Anomalous diffusion and long-range memory in the scaled voter model, Physical Review E 107 (2023) 024106. https://doi.org/10.1103/PhysRevE.107.024106 doi:10.1103/PhysRevE.107.024106. Mandelbrot1968SIAMR B. Mandelbrot, J. W. Van Ness, Fractional Brownian motions, fractional noises and applications, SIAM Review 10 (1968) 422–437. https://doi.org/10.1137/1010093 doi:10.1137/1010093. Mandelbrot2013Springer B. B. Mandelbrot, Multifractals and 1/f noise: Wild self-affinity in physics (1963–1976), Springer, 2013. Beran2017Routledge J. Beran, Statistics for long-memory processes, Routledge, 2017. https://doi.org/10.1201/9780203738481 doi:10.1201/9780203738481. Engle1982Econometrica R. Engle, Autoregresive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50 (1982) 987–1008. https://doi.org/10.2307/1912773 doi:10.2307/1912773. Bollerslev1986Econometrics T. Bollerslev, Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics 31 (1986) 307–327. https://doi.org/10.1016/0304-4076(86)90063-1 doi:10.1016/0304-4076(86)90063-1. Giraitis2009 L. Giraitis, R. Leipus, D. Surgailis, ARCH(∞) models and long memory, in: T. G. Anderson, R. A. Davis, J. Kreis, T. Mikosh (Eds.), Handbook of Financial Time Series, Springer Verlag, Berlin, 2009, pp. 71–84. https://doi.org/10.1007/978-3-540-71297-8_3 doi:10.1007/978-3-540-71297-8_3. Bollerslev2023JEconom T. Bollerslev, The story of GARCH: A personal odyssey, Journal of Econometrics 234 (2023) 96–100. https://doi.org/10.1016/j.jeconom.2023.01.015 doi:10.1016/j.jeconom.2023.01.015. Hooge1969PLA F. N. Hooge, 1/f noise is no surface effect, Physics Letters A 29 (3) (1969) 139–140. https://doi.org/10.1016/0375-9601(69)90076-0 doi:10.1016/0375-9601(69)90076-0. Hooge1972Phys F. N. Hooge, Discussion of recent experiments on 1/f noise, Physica 60 (1) (1972) 130–144. https://doi.org/10.1016/0031-8914(72)90226-1 doi:10.1016/0031-8914(72)90226-1. Hooge1990PhysB F. Hooge, The relation between 1/f noise and number of electrons, Physica B: Condensed Matter 162 (3) (1990) 344–352. https://doi.org/10.1016/0921-4526(90)90030-X doi:10.1016/0921-4526(90)90030-X. Dmitriev2009JAppPhys A. P. Dmitriev, M. E. Levinshtein, S. L. Rumyantsev, On the hooge relation in semiconductors and metals, Journal of Applied Physics 106 (2) (2009) 024514. https://doi.org/10.1063/1.3186620 doi:10.1063/1.3186620. Vandamme2013ICNF L. K. J. Vandamme, How useful is Hooge's empirical relation, in: 22nd International Conference on Noise and Fluctuations (ICNF), IEEE, 2013, pp. 1–6. https://doi.org/10.1109/ICNF.2013.6578875 doi:10.1109/ICNF.2013.6578875. Gruneis2019PLA F. Gruneis, An alternative form of Hooge's relation for 1/f noise in semiconductor materials, Physics Letters A 383 (13) (2019) 1401–1409. https://doi.org/10.1016/j.physleta.2019.02.009 doi:10.1016/j.physleta.2019.02.009. Gruneis2022PhysA F. Gruneis, 1/f noise under drift and thermal agitation in semiconductor materials, Physica A 593 (2022) 126917. https://doi.org/10.1016/j.physa.2022.126917 doi:10.1016/j.physa.2022.126917. Kononovicius2023PRE A. Kononovicius, B. Kaulakys, 1/f noise from the sequence of nonoverlapping rectangular pulses, Physical Review E 107 (2023) 034117. https://doi.org/10.1103/PhysRevE.107.034117 doi:10.1103/PhysRevE.107.034117. Margolin2006JStatPhys G. Margolin, E. Barkai, Nonergodicity of a time series obeying Levy statistics, Journal of Statistical Physics 122 (1) (2006) 137–167. https://doi.org/10.1007/s10955-005-8076-9 doi:10.1007/s10955-005-8076-9. Lukovic2008JChemPhys M. Lukovic, P. Grigolini, Power spectra for both interrupted and perennial aging processes, The Journal of Chemical Physics 129 (18) (2008) 184102. https://doi.org/10.1063/1.3006051 doi:10.1063/1.3006051. Niemann2013PRL M. Niemann, H. Kantz, E. Barkai, Fluctuations of 1/f noise and the low-frequency cutoff paradox, Physical Review Letters 110 (14) (2013) 140603. https://doi.org/10.1103/PhysRevLett.110.140603 doi:10.1103/PhysRevLett.110.140603. Leibovich2016PRE N. Leibovich, A. Dechant, E. Lutz, E. Barkai, Aging Wiener-Khinchin theorem and critical exponents of 1/f^β noise, Physical Review E 94 (2016) 052130. https://doi.org/10.1103/PhysRevE.94.052130 doi:10.1103/PhysRevE.94.052130. Frantsuzov2008NatPhys P. Frantsuzov, M. Kuno, B. Jankó, R. A. Marcus, Universal emission intermittency in quantum dots, nanorods and nanowires, Nature Physics 4 (7) (2008) 519–522. https://doi.org/10.1038/nphys1001 doi:10.1038/nphys1001. Cordones2013CSR A. A. Cordones, S. R. Leone, Mechanisms for charge trapping in single semiconductor nanocrystals probed by fluorescence blinking, Chemical Society Reviews 42 (8) (2013) 3209. https://doi.org/10.1039/c2cs35452g doi:10.1039/c2cs35452g. Nenashev2018PRB A. V. Nenashev, V. V. Valkovskii, J. O. Oelerich, A. V. Dvurechenskii, O. Semeniuk, A. Reznik, F. Gebhard, S. D. Baranovskii, Release of carriers from traps enhanced by hopping, Physical Review B 98 (15) (2018) 155207. https://doi.org/10.1103/PhysRevB.98.155207 doi:10.1103/PhysRevB.98.155207. Haneef2020JMCC H. F. Haneef, A. M. Zeidell, O. D. Jurchescu, Charge carrier traps in organic semiconductors: A review on the underlying physics and impact on electronic devices, Journal of Materials Chemistry C 8 (3) (2020) 759–787. https://doi.org/10.1039/c9tc05695e doi:10.1039/c9tc05695e. Gentle2009Springer J. E. Gentle, Mathematical and statistical preliminaries, in: Statistics and Computing, Springer New York, 2009, pp. 5–79. https://doi.org/10.1007/978-0-387-98144-4_1 doi:10.1007/978-0-387-98144-4_1. Melkonyan2010PhysB S. V. Melkonyan, Non-Gaussian conductivity fluctuations in semiconductors, Physica B: Condensed Matter 405 (1) (2010) 379–385. https://doi.org/10.1016/j.physb.2009.08.096 doi:10.1016/j.physb.2009.08.096. Ruseckas2016JStatMech J. Ruseckas, R. Kazakevicius, B. Kaulakys, Coupled nonlinear stochastic differential equations generating arbitrary distributed observable with 1/f noise, Journal of Statistical Mechanics 2016 (4) (2016) 043209. https://doi.org/10.1088/1742-5468/2016/04/043209 doi:10.1088/1742-5468/2016/04/043209.
http://arxiv.org/abs/2306.05569v1
20230608213932
Disinformation 2.0 in the Age of AI: A Cybersecurity Perspective
[ "Wojciech Mazurczyk", "Dongwon Lee", "Andreas Vlachos" ]
cs.CY
[ "cs.CY", "cs.CR" ]
[email protected] Warsaw University of Technology Nowowiejska 15/19 Warsaw Mazovia Poland 00-665 [email protected] The Pennsylvania State University USA [email protected] University of Cambridge Cambridge UK Disinformation 2.0 in the Age of AI: A Cybersecurity Perspective Andreas Vlachos ================================================================ § INTRODUCTION According to a report from Lloyd's Register Foundation[Lloyd's Register Foundation, “The Lloyd's Register Foundation World Risk Poll”, 2019, URL: https://wrp.lrfoundation.org.uk/2019-world-risk-poll/fake-news-is-the-number-one-worry-for-internet-users-worldwide/], at present, cybercrime is one of the biggest concerns of Internet users worldwide, with disinformation[In this work, among related concepts and terms, we use the term, disinformation, to refer to “false information created with malicious intention," per <cit.>.] ranking highest among such risks (57% of internet users across all parts of the world, socio-economic groups, and all ages). Yet, for years, there has been a discussion in the security community about whether disinformation should be considered a cyber threat or not <cit.>. However, recent worldwide phenomena (e.g., the increase in the frequency and sophistication of cyberattacks, the 2016 US election interference, and Russian invasion in Ukraine, the COVID-19 pandemic, etc.) have made disinformation one of the most potent cybersecurity threats for businesses, government agencies, the media, and whole society as a whole. In addition, recent breakthroughs in AI have further enabled the creation of highly realistic deepfake content at scale. As such, we strongly believe that disinformation should be rightfully considered as a cyber threat, and therefore developing effective countermeasures is critically necessary. § THE EVOLUTION OF DISINFORMATION The way that disinformation is evolving is not exactly new, as other “classical” cyber threats followed a similar path. First, disinformation has been around for centuries and the internet is just the latest means of communication to be abused to spread it. We have been already witnessing something like this in the past - there were different types of crimes like scams, extortions, and thefts (e.g., financial or intellectual property ones), and we now see their cyber versions, which are, in fact, less risky for attackers yet more effective than their classical forms. Thus, the use of the Internet (mainly via social media) made it possible to boost the scale and range at which these attacks can be launched, but the essence of the attack itself remains the same. Similarly, disinformation, as the virtual version of propaganda, can affect many more people in a much shorter time than in the case of the non-digital version (e.g., traditional newspapers, TV news). Moreover, advances in AI allowed the creation of deepfakes in various types of digital media (i.e., images, video, speech) and text, and the introduced modifications are tough to spot, distinguish, and explain <cit.>. This greatly enhances the resulting disinformation' potential reach and believability (Fig. <ref>). Note that disinformation is not necessarily expected to provide direct revenue, as in the case of other cyber threats. However, such cases already have happened, e.g., by arranging disinformation to manipulate stock price[NBCnews, “SEC Cracks Down on Fake Stock News”, 2017, URL: https://www.nbcnews.com/business/markets/sec-cracks-down-fake-stock-news-n745141] or making money for disseminating them[Heather C. Hughes, Israel Waismel-Manor, 'The Macedonian Fake News Industry and the 2016 US Election', URL: https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/macedonian-fake-news-industry-and-the-2016-us-election/79F67A4F23148D230F120A3BD7E3384F]. § DISINFORMATION 2.0 In the conventional paradigm of disinformation 1.0, on the attack side, we have disinformation creators who fabricate false information and post it to websites or social media for monetary incentives or political agenda. Similarly, we have more sophisticated (often state-backed) actors who aim to spread disinformation more broadly and deeply on social media. On the other hand, on the defense side, platforms have used human operators as well as computational methods to ensure the integrity of information, such as disinformation detectors to filter out questionable content, and socialbot detectors to curb the spread of disinformation. By and large, so far, creators (semi-)manually have created and disseminated disinformation content without using sophisticated AI techniques. When fake contents were detected and filtered out by defense mechanisms, creators would simply attempt to re-disseminate disinformation using different accounts or domains. However, with the recent advances in AI, we envision that this paradigm is likely to change substantially, yielding so-called disinformation 2.0, where disinformation would become more targeted and personalized, its content become very difficult to distinguish from real news, and its creation and dissemination become more accelerated by AI. Disinformation 2.0 will further increase distrust in the news that humans encounter in real and digital life, which is a major problem already[Digital News Report 2022, URL: https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2022]. Typically, in cyberspace, an attacker's aim is to find weak spots in the targeted system and exploit/compromise them using technical and/or non-technical means. If we treat disinformation as a cyberattack, then several scenarios of disinformation 2.0 in the age of AI are plausible as follows, as also illustrated in Figure <ref>: * Adversaries generate more convincing disinformation, using AI-based techniques (e.g., ChatGPT for texts or DALL-E for images), * Adversaries subtly perturb existing true contents using AI methods to create convincing disinformation while better evading perturbation detection mechanisms <cit.>. * Adversaries use AI methods to attack disinformation detection eco-systems, promoting disinformation and demoting real news <cit.>, * Adversaries strategize the spread of disinformation using AI methods, maximizing its influence on the network while evading socialbot detection eco-systems <cit.>. When the creation and dissemination of disinformation are greatly enhanced using advanced AI methods in one of these scenarios, the resulting disinformation 2.0 becomes much harder to detect, persuasive, and impactful than the previous disinformation 1.0. § DISINFORMATION AND CYBERSECURITY – WHERE DOES IT FIT? Social engineering is a well-known group of methods that an attacker can use for collecting information to deceive people by misusing their trust or convincing them to behave in the desired manner (e.g., share confidential information or perform activities useful for the attacker, such as installing malicious software). This is probably the oldest family of techniques used to perform cyber reconnaissance. Moreover, it is extremely effective as it exploits the weakest link in security–i.e., humans. It can also significantly decrease the time needed to gather information and often requires minimal or no technical skills. Typically, social engineering is classified into two groups: (1) human-based, requiring direct or in-person interaction, and (2) technology-based, where the physical presence of an attacker is not needed. One of notable examples of technology-based social engineering is phishing, in which an attacker tries to mislead the victim by impersonating a trustworthy entity by using a neatly-crafted email or impersonating a legitimate website. Considering the above, disinformation, in its essence, bears many resemblances to phishing, as summarized in Table <ref>. Note also that in cybersecurity, attackers utilize users to compromise a technical system. However, the aim of disinformation is often different such that adversaries want to influence (long-term) human decision-making. This differentiates disinformation from social engineering whose the aim is relatively short-term. Thus, disinformation adversaries use cyber technology to compromise Internet users (not devices, mechanisms, protocols, and tools). § COUNTERMEASURES Detecting disinformation 1.0 has been heavily researched in recent years (e.g., <cit.>), and many solutions have reported high detection accuracies (e.g., <cit.>). However, there are still several remaining issues, including early detection, multi-lingual/multi-platform detection, better explainability, socio-technical issues <cit.>. As with every cyber threat, completely eliminating disinformation is unlikely (as achieving complete security is never possible). However, we need to diminish the impact of disinformation on Internet users, as we did with threats like spam emails. Several years ago, spam emails were considered one of the major threats, but now their scale and relevancy are not high as they were before and have fallen to a reasonable level <cit.>. This has been achieved due to two decades of research advances, during which many sophisticated techniques enabled to significantly limit the volume of spam emails in Internet users' inboxes. Currently, a major disadvantage is that dedicated defenses against disinformation are being individually researched, developed, deployed, and evaluated, which is not as effective in diminishing the threat of disinformation. For this challenge, we argue to use some lessons learned from cybersecurity, where typically multiple “layers of defense” are envisioned. On the Internet, as security risks occur at various levels, it is necessary to set up security mechanisms that provide multiple layers of defense against these risks (e.g., by considering threats at the system, network, application, and transmission levels). Using such a layered approach to provide network security makes it possible for an attacker who penetrates one layer of defense to be stopped by a subsequent layer. Therefore, addressing the disinformation problem is also likely to require a layered approach where both human-oriented (e.g., raising awareness, training, etc.) and technical measures are applied at the news creation, transmission, and consumption layers. § A PROPOSED APPROACH Disinformation 2.0 techniques urge to devise new countermeasures. To achieve this, a layered approach could address such a threat in a holistic manner. That is why, we propose to distinguish four layers at which disinformation impact can be diminished, i.e. Fig. <ref>: * Social Network-level layer is organized by the social network operator, who can equip it with several solutions like (AI-based and/or human operator-aided) disinformation detection, spread control mechanisms, and detection of bots. Additional methods include, for example, verification of sources providing the news (e.g., using reputation-based or digital watermarking-based solutions), whitelisting and blacklisting of news sources, and fixing the information suggestion algorithms to avoid creating filter bubbles. * ISP-level layer is organized at ISP (Internet Service Provider), which is responsible for detecting, filtering, and blocking well-known and verified domains of disinformation (this is already done, e.g., for phishing emails, suspicious links, or blacklisted domains). In such a scenario, the ISP can be considered a sort of proxy between the users and the servers of the social network, located somewhere on the Internet. * Device-level layer is organized on the user's machine, typically within a browser or mobile apps, as this is how the user interacts with various websites and social networks. The security mechanisms deployed on this level should include automatic (e.g., AI-based) disinformation detection solutions, in-browser news source verification techniques (e.g., digital watermarking), using whitelists and blacklists for news sources, automatic cross-verification of suspicious news at several trusted sources, and means to enforce user's responsible news sharing (e.g., when the news is marked as potentially fake by automatic detection mechanisms, it would notify the user when he/she is trying to spread it further). * User-level layer is an essential part of the holistic approach to addressing disinformation, incorporating all manual actions that can be performed by users. This includes engaging with prebunking <cit.> which raises the ability of the users to detect disinformation. Furthermore, given the rise and value of citizen journalism, it is important to empower users to perform disinformation detection using technical means that are commonly accessible to professional journalists in newsrooms. Note that in order to be effective, all security mechanisms and user actions, need to be applied simultaneously and collaborate with each other by exchanging information. Moreover, employed mechanisms should be as diverse as possible, i.e., they preferably should base their detection approaches on different aspects of disinformation. It is also worth emphasizing that currently, not all of the above-mentioned methods to fight disinformation are in use/exist (e.g., an automatic AI-based cross-verification of the news at different sources). Moreover, in Fig. <ref>, arrows between the layers indicate that each layer can transfer certain information to the other layer. For instance, disinformation detection mechanisms on social networks can tag a video or image as questionable when passing the news to the user's browser/app for further probing by the next layer, or it can filter it out and send down only a proper notification. On the other hand, if disinformation is discovered on the user's device level, then this information can be passed to the social network operator and displayed to the user. Obviously, the success of the proposed approach depends on the ensured user data privacy In conclusion, we strongly believe that the advanced AI techniques, despite their benefits to society, greatly enable adversaries to achieve more sophisticated and effective disinformation 2.0 attacks. As such, adopting the lessons learned from cybersecurity research, novel countermeasures are needed, especially a holistic layered approach. Wojciech Mazurczyk acknowledges the funding obtained from the EIG CONCERT-Japan call to the project Detection of disinformation on SocIal MedIa pLAtfoRms “DISSIMILAR” through grant EIG CONCERT-JAPAN/05/2021 (National Centre for Research and Development, Poland). Dongwon Lee acknowledges the funding from NSF awards #1820609, #1934782, #2114824, and #2131144. Andreas Vlachos acknowledges the funding from ERC grant AVERITEC (GA 865958), and the EU H2020 grant MONITIO (GA 965576). ACM-Reference-Format
http://arxiv.org/abs/2306.02629v1
20230605065620
Low-Latency SCL Bit-Flipping Decoding of Polar Codes
[ "Wei Zhang", "Xiaofu Wu" ]
cs.IT
[ "cs.IT", "math.IT" ]
Low-Latency SCL Bit-Flipping Decoding of Polar Codes Wei Zhang and Xiaofu Wu Nanjing University of Posts and Telecommunications, Nanjing 210003, China [email protected]; [email protected] Accepted XXX. Received YYY; in original form ZZZ =================================================================================================================================================== Bit flipping can be used as a postprocessing technique to further improve the performance for successive cancellation list (SCL) decoding of polar codes. However, the number of bit-flipping trials could increase the decoding latency significantly, which is not welcome in practice. In this paper, we propose a low latency SCL bit flipping decoding scheme, which is restricted to just single round of post-processing. The use of multiple votes for a more accurate estimation of path survival probability is proposed to locate the first error event of SCL decoding. Simulations show the sound improvement compared to the existing SCL bit-flipping decoding methods. Polar codes, low-latency decoding, SCL decoding, bit-flipping decoding. n § INTRODUCTION Polar codes in combination with successive cancellation (SC) decoding have been theoretically demonstrated to have capacity-achieving capabilities <cit.> under the binary-input discrete memoryless channel (BI-DMC). However, the performance of SC decoding is unsatisfactory for practical short codes. In order to overcome this deficiency of SC decoding, successive cancellation list (SCL) <cit.> decoding was proposed that preserves multiple candidate paths for decision so as to increase the probability of successful decoding. Its performance could be further improved by using Cyclic-Redundancy-Check-aided SCL (CA-SCL) <cit.> decoding. In recent years, bit-flipping was applied to either SC or SCL decoding for further improving the decoding performance. In <cit.>, the so-called SC Flip decoding was first proposed, where the locations of the potential error-prone information bit positions are identified and a serial of re-decoding attempts are activated whenever its first attempt of decoding fails. In <cit.>, the error-prone information bit positions were identified under the so-called critical set. <cit.> proposed to flip multiple error-prone bits, which may improve the efficiency of bit-flipping. For SCL decoding, a number of bit-flipping strategies <cit.> <cit.> following the concept of critical set were proposed for improving the performance of CA-SCL decoding. In <cit.>, a simple reliability metric was formulated with a full list of path metrics for locating the error prone bits, which shows its power for the purpose of bit-flipping, compared to <cit.> <cit.>. Although varied SCL-Flip decoding schemes have achieved significantly improved performance than the standard SCL decoding. The number of re-decoding trials is rather large, which results in high-latency for the underlying decoder. In this paper, we focus on the low-latency SCL-Flip decoding. By restricting a single re-decoding attempt, it is critical to improve the accuracy of locating the first error position to be flipped. The idea of multiple votes for locating the first error position is thus proposed. This paper is organized as follows, the preliminaries of polar codes are briefly introduced. Low-latency SCL bit-flipping decoding is described in detail in section III. Simulation results are given in section IV. In section V, conclusion is made. § PRELIMINARIES §.§ Polar Codes Consider a (N, K, 𝒜) polar code with block-length N=2^n, information bit length K and information bit index set of 𝒜. Let u_1^N=(u_1,u_2,...,u_N) denote the input vector to be encoded, where u_i is an information bit whenever i∈𝒜 and a frozen bit for i∈𝒜^c. For polar encoding, x_1^N=u_1^N× G_N is employed with G_N=F^⊗ n, where F^⊗ n denotes the n-th Kronecker power of F=[ [ 1 1; 1 0 ]]. §.§ SC and SCL Decoding For SC decoding, it successively evaluates the log-likelihood ratio of each bit u_i based on the received vector y_1^N and its i preceding decision bits û_1^i-1 L^i = logP(y_1^N,û_1^i - 1|u_i = 0)/P(y_1^N,û_1^i - 1|u_i = 1). Then, û_i is decided as 0 if L^i≥ 0 and as 1 if L^i≤ 0. Instead of just keeping a single path, SCL preserves L > 1 paths during the decoding process, which could significantly improve the decoding performance. Let L_l^i denote the log-likelihood ratio of the bit u_i along the l-th path. When the number of paths is greater than the list size as the SCL decoder proceeds from level i to i+1, it retains L best paths according to the updated path metric PM_l^i ={ PM_l^i - 1, if û_i,l = 1/2[1 - sign(L_l^i)] PM_l^i - 1 + |L_l^i|, otherwise. . where sign(x )=1 if x>0 and -1 otherwise. After all nodes in 𝒜 are visited, the paths in the list are examined one-by-one with decreasing metrics. The decoder outputs the first path passing the CRC detection as the estimation sequence. If none of such a path is found, the algorithm declares a decoding failure. §.§ SC-Flip and SCL-Flip Decoding SC-Flip decoding algorithm attempts to flip a decision to get the correct decoding whenever the conventional SC decoding fails <cit.>. It was observed that error propagation occurs frequently in SC decoding, where any single erroneous decision may result in a burst of errors. Hence, it is crucial to find the first error position in SC-Flip decoding. For SCL decoding, the bit-flipping can be again employed whenever a decoding failure is claimed. The so-called SCL-Flip decoding was recently proposed in <cit.>. In <cit.>, a critical set is detected, which is deemed to be error-prone during SCL decoding. Therefore, each bit in this critical set is flipped in the re-decoding attempts. In <cit.>, the bit position for flipping is determined by a newly-introduced confidence metric for the survival paths and simulations show significant performance improvement over the scheme in <cit.>. § LOW-LATENCY SCL BIT-FLIPPING DECODING In <cit.>, it was shown that the decoding failures in the CA-SCL decoding are mainly caused by the elimination of the correct path from L maintaining paths. In general, bit-flipping, as a post-processing technique, could be repeatedly implemented if the previous attempt fails. However, the decoding latency increases linearly with the numbers of decoding attempts. Therefore, this paper focuses on just single attempt of bit-flipping for the purpose of maintaining the low-latency of SCL bit-flipping decoding. §.§ Identification of the First Error Position with Multiple Votes In <cit.>, it was shown that the confidence in the decision for the path competition on u_i, i ∈𝒜∖𝒜_0, can be determined from the ratio between the total probability of the L survival paths to the total probability of the L removed paths, namely, E_i(α ) = log∑_l = 1^L e^ - PM_l^i/( ∑_l = 1^L e^ - PM_l + L^i)^α in the case of α≥ 1. Note that 𝒜_0 denotes the set consisting of log_2 L information indices. However, once the first decoding failure at level i_1 ∈𝒜 occurs during CA-SCL decoding, it is likely to produce error propagation in the subsequent decoding process for i > i_1. In this case, there will be a biased estimate in the confidence of the decision which is less than its correct value for a large index i. To compensate for the biased estimate due to the error propagation, α≥ 1 is introduced in (<ref>). Essentially, an information index i that has a low E_i(α) should have a high priority for re-decoding. Therefore, a bit-flipping index set is constructed in <cit.> by locating the bit indices with the smallest values of E_i(α). Since we are interested in the first error position, this may simply adopt the following rule, namely, î_1 = min_i E_i(α), i∈𝒜∖𝒜_0. However, whenever a decoding failure occurs, there may exist multiple bit errors in the final decoded output û_1^N. This means that Q> 1 error positions {i_1,⋯,i_Q}⊂𝒜 may appear in û_1^N. All of these error positions may have low confidence in E_i_q(α), q∈ [1,Q]. This makes the location of the first error position using (<ref>) unreliable. Therefore, we employ the idea of multiple votes for better locating the first error position of CA-SCL decoding. Consider the CA-SCL decoding with list size of L. For i∈𝒜∖𝒜_0, each decoding path û_1,l^i-1, ∀ l∈[1,L], will extend to two paths where û_i=0 or 1. Firstly, we employ α_1≥ 1 for evaluating the confidence of deciding each position i ∈𝒜. If the first error position occurs at i_1 ∈𝒜, it is more likely that E_i(α_1) ≥ E_i_1(α_1), ∀ i <i_1, i ∈𝒜. Due to the possible error propagation, this may be violated whenever the CA-SCL decoder proceed to the later position at i> i_1. Hence, we require an additional vote for eliminating the possible location with low value of E_i(α_1) but with i>i_1. This is pursued with the use of E_i(α_2), i∈𝒜 with a large value of α_2≠α_1. With the use of α_2 > 1, the error position of i>i_1 may be more accurate since the bias of the deciding confidence (E_i(α_1)) can be well alleviated. With two votes for deciding the first error position, we believe that the estimation accuracy should be improved. In what follows, we provide a detailed algorithm for locating the first error position by employing two weighting factors, α_1 and α_2 in formulating the deciding confidences. Define two confidence sets as follows {ℰ^α_1={E_i(α_1) | i∈𝒜∖𝒜_0}, ℰ^α_2={E_i(α_2) | i∈𝒜∖𝒜_0}. . By sorting the above two sets independently, the index of elements in sorted ascending order can be obtained as {ℐ_1=sort(ℰ^α_1) ℐ_2=sort(ℰ^α_2) . where sort(·) returns the indices of ℰ^α with the ascending order of E_i_1^α≤ E_i_2^α≤...≤ E_i_Q^α. Then, we can locate the first error position by searching over ℱ_m =ℐ_1(1:m)∩ℐ_2(1:m) from m=1 to Q. Whenever ℱ_m ≠∅, it succeeds. The corresponding algorithm is summarized as Algorithm 1. Note that we employ a set of ℱ for the output, which facilitates the generalization to the case of |ℱ|≥ 1. §.§ Low Latency SCL-Flip Decoding As a post-processing technique for SCL decoding, the proposed low-latency SCL-Flip (LL-SCL-Flip) takes just one chance of bit-flipping after the standard SCL decoding. Whenever SCL decoding fails, it initiates a new re-decoding attempt, where SCL decoder restarts its decoding from the error position i_1 with the shift-pruning approach <cit.>. Instead of using L best paths, the re-decoding attempt tries to employ the discarded L paths, since the confidence of deciding u_i_1 is low according to multiple-votes investigated in Algorithm 1. The proposed LL-SCL-Flip is summarized in Algorithm 2. § SIMULATIONS In this section, simulations are performed to show the effectiveness of the proposed LL-SCL-Flip decoding, which is compared with various existing SCL bit-flipping decoding, including the partial SCL-Flip <cit.> and Post-Processing <cit.>. A (128, 64+8) polar code with 8-bit CRC is considered, where its generator polynomial is x^8+x^7+x^6+x^5+x^4+x^3+x^2. All the code bits are BPSK (1→-1, 0→1) modulated and transmitted over an AWGN channel (σ = 1/√(2R)10^-snr/20). §.§ BLER Performance First, we analyze the decoding performance.Fig. 1 shows the performance of the proposed LL-SCL-Flip decoding (L=4) for polar codes (128, 64+8), which is compared with Post-Processing with α=1, α=2, α=3 in the case of just one-chance of re-decoding. For the polar code of (128, 64+8), Fig. 2 shows that LL-SCL-Flip with L=8 outperforms the standard CA-SCL(L=8) by about 0.25 dB and is inferior to the standard CA-SCL(L=16) by about 0.02 dB. Compared to the just one-round of post-processing (α=2), it achieves about 0.12 dB gain. For a larger list size L=16, Fig. 2 shows that LL-SCL-Flip outperforms the standard CA-SCL (L=16) by about 0.21 dB. Compared to Post Processing with one single round (α=2), it achieves about 0.11 dB gain. For the polar code of (256, 128+8), Fig. 3 shows that the proposed LL-SCL-Flip with L=4 is also very effective, which gains the improvement about 0.19 dB over that of the standard CA-SCL, and 0.1 dB over the one-round of post-processing (α=2) in <cit.>. The performance of LL-SCL-Flip is better than the partial SCL-Flip <cit.>. §.§ Average Complexity of Decoding In this paper, we use the average list size to represent the decoding complexity <cit.>, and let F represent the total number of decoding frames and T represent times. For example, the average complexity of the CA-SCL decoding algorithm is (L· T/F). For the bit-flipping decoding algorithm, each additional flip means that the complexity is increased by 1. If the proposed flip decoding algorithm tries to flip additional t times, the complexity is (T+t)· L/F. In Fig. 4, we compare the average decoding complexity of the standard CA-SCL, Post-Processing <cit.> and the proposed LL-SCL-Flip. Fig. 4 shows that the average computational complexity of LL-SCL-Flip is a bit higher than either Post-Processing <cit.>, or standard CA-SCL decoding before Eb/No=2.4dB. They are almost alike after Eb/No=2.4dB. § CONCLUSION This paper considers to reduce the latency of the standard bit-flipping approach for SCL decoding. Instead of using a large number of re-decoding attempts, we focus on the single-round approach for bit-flipping. By employing the idea of multiple votes, this paper proposed an enhanced mechanism for locating the first error position, with which a shift-pruning approach is used in the re-decoding. Simulations show that the proposed LL-SCL-Flip could achieve obvious improvement over the standard SCL-Flip decoding. 1 IEEEtran Ref_1 E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channel”, IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009. Ref_2 I. Tal and A. Vardy, “List Decoding of Polar Codes”, IEEE Trans. Inf. Theory, vol. 61, no. 5, pp. 2213–2226, May 2015. Ref_3 K. Niu and K. Chen, “CRC-Aided Decoding of Polar Codes”, IEEE Commun. Lett., vol. 16, no. 10, pp. 1668–1671, 2012. Ref_4 O. Afisiadis, A. Balatsoukas-Stimming, and A. Burg, “A Low Complexity Improved Successive Cancellation Decoder for Polar Codes ”, in 2014 48th Asilomar Conference on Signals, Systems and Computers, pp. 2116–2120, 2014. Ref_5 Zhang Z, Qin K, Liang Z, “Progressive Bit-Flipping Decoding of Polar Codes over Layered Critical Sets”, GLOBECOM 2017 - 2017 IEEE Global Communications Conference. IEEE, 2017. Ref_6 Sha S, Zhang L, He Y, “An Improved Multiple Bit-Flipping Successive Cancellation Decoding Algorithm for Polar Codes”, 2020 International Conference on Wireless Communications and Signal Processing(WCSP), 2020 Ref_7 Yu Y, Pan Z, Nan L “Successive Cancellation List Bit-flip Decoder for Polar Codes”, 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), 2018. Ref_8 Cheng F, Liu A, Zhang Y “Bit-Flip Algorithm for Successive Cancellation List Decoder of Polar Codes ”, IEEE Access, 2019, PP(99):1-1. Ref_9 Wang C H, Pan Y H, Lin Y H, “Post-Processing for CRC-Aided Successive Cancellation List Decoding of Polar Codes ”, IEEE Commun. Lett. 2020, PP(99):1-1. Ref_10 L. Chandesris, V. Savin, and D. Declercq, “Dynamic-SCFlip decoding of polar codes”, IEEE Trans. Commun. vol. 66, no. 6, pp. 2333–2345, Jun. 2018. Ref_11 M. Rowshan and E. Viterbo “Improved list decoding of polar codes by shifted-pruning”, in 2019 IEEE Information Theory Workshop(ITW), Visby, Sweden, Auf. 2019, pp, 1-5.
http://arxiv.org/abs/2306.10291v1
20230617082337
Proximity-induced spin-orbit coupling in phosphorene on WSe$_2$ monolayer
[ "Marko Milivojević", "Martin Gmitra", "Marcin Kurpas", "Ivan Štich", "Jaroslav Fabian" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
AIP/123-QED Institute of Informatics, Slovak Academy of Sciences, 84507 Bratislava, Slovakia Institute for Theoretical Physics, University of Regensburg, 93053 Regensburg, Germany Faculty of Physics, University of Belgrade, 11001 Belgrade, Serbia Institute of Physics, Pavol Jozef Šafárik University in Košice, 04001 Košice, Slovakia Institute of Experimental Physics, Slovak Academy of Sciences, 04001 Košice, Slovakia Institute of Physics, University of Silesia in Katowice, 41‑500 Chorzów, Poland Institute of Informatics, Slovak Academy of Sciences, 84507 Bratislava, Slovakia Department of Natural Sciences, University of Saints Cyril and Methodius, 917 01 Trnava, Slovakia Institute for Theoretical Physics, University of Regensburg, 93053 Regensburg, Germany We investigate, using first-principles methods and effective-model simulations, the spin-orbit coupling proximity effects in a bilayer heterostructure comprising phosphorene and WSe_2 monolayers. We specifically analyze holes in phosphorene around the Γ point, at which we find a significant increase of the spin-orbit coupling that can be attributed to the strong hybridization of phosphorene with the WSe_2 bands. We also propose an effective spin-orbit model based on the C_1 v symmetry of the studied heterostructure. The corresponding spin-orbit field can be divided into two parts: the in-plane field, present due to the broken nonsymmorphic horizontal glide mirror plane symmetry, and the dominant out-of-plane field triggered by breaking the out-of-plane rotational symmetry of the phosphorene monolayer. Furthermore, we also demonstrate that a heterostructure with 60^∘ twist angle exhibits an opposite out-of-plane spin-orbit field, indicating that the coupling can effectively be tuned by twisting. The studied phosphorene/WSe_2 bilayer is a prototypical low common-symmetry heterostructure in which the proximity effect can be used to engineer the spin texture of the desired material. Proximity-induced spin-orbit coupling in phosphorene on WSe_2 monolayer Jaroslav Fabian ======================================================================= § INTRODUCTION Phosphorene <cit.> is a two-dimensional (2D) material whose sizable direct semiconducting gap and high carrier mobility make it a promising alternative to gapless graphene in the field of electronics. However, weak spin-orbit coupling <cit.> and zero magnetism in phosphorene, limit its use in spintronics applications. Also, phosphorene has space-inversion symmetry and thus exhibits no spin-orbit fields. The simplest way to induce such fields is via the Rashba effect <cit.>, i.e. by applying an electric field in the direction perpendicular to the monolayer plane. This approach is not very effective in phosphorene as the Rashba field ultimately depends on the atomic number <cit.>. It is therefore desired to find alternative ways of inducing sizeable spin-orbit fields in phosphorene. Van der Waals heterostructures offer a rich playground for modifying electronic, spin, optical, and magnetic properties of the target materials <cit.>. In the context of proximity-induced spin-orbit effects in weak SOC materials <cit.>, transition-metal dichalcogenide (TMDC) monolayers (MLs) <cit.> are the obvious material of choice due to the strong spin-orbit coupling of their valence bands <cit.>. The common three-fold symmetry of graphene and TMDC materials has enabled a simple effective description of the proximity-induced interaction between the MLs <cit.>. Such a common symmetry is not present in phosphorene/TMDC heterostructures, in which the rotation-symmetry-broken environment can trigger different spin-orbit coupling terms and, as a consequence, induce new types of spin textures in the desired materials <cit.>. The goal of the present study is to obtain both a quantitative and qualitative understanding of such heterostructures. In particular, we study a heterostructure comprising phosphorene (P) and monolayer WSe_2 employing ab-initio methods and group theory. The giant spin splitting in the valence bands of the WSe_2 monolayer points to the potentially interesting hole spin physics of proximitized phosphorene. Indeed, we find sizeable momentum-dependent spin-orbit fields at the Γ point (both in-plane and out-of-plane) where the strong hybridization between the phosphorene and WSe_2 bands takes place. From symmetry arguments, we derived an effective spin-orbit Hamiltonian that ideally captures the spin physics predicted by the density-functional theory (DFT) calculations. Finally, we show that a 60^∘ twisted heterostructure preserves the in-plane spin-orbit fields but flips the out-of-plane component, suggesting that twist angle can be an effective tool to tailor the proximity spin physics in such heterostructures. This paper is organized as follows. After the introductory section, in Sec. <ref> we analyze the geometry of the P/WSe_2 heterostructure and present the necessary computational details for the calculation of the band structure. In Sec. <ref> band structure analysis of such a heterostructure is presented. Furthermore, based on the C_1 v symmetry of the heterostructure, the effective model for the hole spins around the Γ point is constructed and the fitting parameters that match the DFT data with the model are given. We also analyze the effect of twist on the proximity effect, by assuming the relative twist angle of 60^ o between the phosphorene and WSe_2 monolayer. Finally, in Sec. <ref>, we present our conclusions and provide further outlooks of the presented study. § COMPUTATIONAL AND ATOMIC STRUCTURE DETAILS For lattice parameters of phosphorene ML, we consider a=3.2986Å and b=4.6201Å <cit.> (lattice vectors correspond to a=a e_x, b=b e_y), while the lattice parameter of WSe_2 ML is equal to a_ W=3.286Å <cit.> (lattice vectors are a_1=a_ W e_x, a_2=a_ W(- e_x+√(3) e_y)/2). The commensurate heterostructure was constructed using the CellMatch code <cit.>, containing 20 P atoms and 8 WSe_2 chemical units. While the phosphorene layer remained unstrained, the WSe_2 is strained by 0.51%. In Fig. <ref>, we present side (a) and top (b) view of the atomic structure model of the P/WSe_2 heterostructure, alongside the Brillouin zone with high symmetry points of phosphorene (c) and WSe_2 (d) ML. The studied heterostructure has the vertical mirror plane symmetry that coincides with the yz plane, where the zigzag (armchair) direction of phosphorene corresponds to the x (y) direction of the heterostructure. We perform DFT electronic structure calculations of the P/WSe_2 heterostructure by means of the plane wave QUANTUM ESPRESSO package <cit.>, assuming a vacuum of 20 Å in the z-direction. The Perdew–Burke–Ernzerhof exchange-correlation functional was utilized <cit.>, for the norm-conserving method <cit.>. The positions of atoms were relaxed with the help of the quasi-Newton scheme and scalar-relativistic SG15 Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials <cit.>. The force and energy convergence thresholds for ionic minimization were set to 1×10^-4 Ry/bohr and 10^-7 Ry/bohr, respectively, using the Monkhorst-Pack scheme with 56× 8 k-points mesh. Small Methfessel-Paxton energy level smearing of 1mRy <cit.> was used along with the kinetic energy cut-offs for the wave function and charge density 80 Ry and 320 Ry, respectively. Also, the semiempirical Grimme's DFT-D2 van der Waals corrections were included <cit.>. For the relaxed structure, the average distance between the closest phosphorene and the selenium plane (in the z-direction) is equal to 3.31Å. In the case of noncolinear DFT calculations including spin-orbit coupling, fully relativistic SG15 ONCV pseudopotentials were used. Also, the dipole correction <cit.> was applied to properly determine energy offset due to dipole electric field effects. The energy convergence threshold was set to 10^-8 Ry/bohr, using the same k-points mesh and kinetic energy cutoffs for the wave function and charge density as in the relaxation procedure. Note that the illustration of the band structure unfolded to the Brillouin zone of both monolayers, is done using the DFT Vienna ab-initio simulation package VASP 6.2 <cit.>, using as the input the relaxed structure from QUANTUM ESPRESSO code. § BAND STRUCTURE ANALYSIS In Fig. <ref> we present the band structure of the P/WSe_2 heterostructure unfolded to the XΓY path (a) of the phosphorene and ΓKMΓ path (b) of the WSe_2 Brillouin zone. In order to have a more apparent separation between the bands having different atomic origins, we mark the bands with dominant phosphorus (a) and WSe_2 (b) atomic orbital character with orange and green color, respectively. First, we notice that an overall heterostructure is a semiconductor due to the semiconducting nature of both constituents. The small strain applied to the WSe_2 monolayer does not change its band structure significantly. The most important feature for the spin-orbit proximity study stems from the fact that the top valence band projected to the WSe_2 Brillouin zone has the same characteristics as in the monolayer limit; the giant spin-orbit coupling at the K point and along the ΓKM path is preserved <cit.>. On the other hand, it can be seen that within the phosphorene Brillouin zone, the valence band around the Γ point is mainly composed of phosphorene atomic orbitals. This is consistent with the highly anisotropic energy dispersion relation in the armchair and zigzag direction observed, resembling the well-known asymmetry of the phosphorene effective mass in the vicinity of k=0 point <cit.>. Additionally, close to the Γ point, we notice strong hybridization of phosphorene bands with bands having dominant WSe_2 character. Since the K point of WSe_2 is folded to the XΓ line of the phosphorene Brillouin zone, it is to be expected that the proximity-induced spin-orbit coupling should be more pronounced along the XΓ line than in the ΓY direction. The DFT calculation confirms this conjecture. As we will show below, the obtained hole spin texture of the top valence band of phosphorene can be described using a simple symmetry-adapted spin-orbit Hamiltonian with anisotropic parameters for ΓX and ΓY directions. §.§ Model Hamiltonian To make a simple description of the hole physics in phosphorene within the P/WSe_2 heterostructure we derive a simple spin-orbit coupling model Hamiltonian based on the C_1 v symmetry of the heterostructure. Group symmetry C_1 v={e,σ_ v} has two elements; e represents the identity element, while σ_ v is the vertical mirror symmetry that coincides with the yz-plane. The presence of vertical mirror symmetry is a consequence of the zero twist angle between the MLs. This symmetry can be broken by twisting WSe_2 ML for an angle different than a multiple of 60^ o. Thus, the effective spin-orbit model close to the Γ point can be derived using the constraints posed by the presence of the vertical mirror plane symmetry as well as by the time-reversal symmetry. Using the transformation rule of the momentum and spin operators, (k_x,k_y) (-k_x,k_y) and (σ_x,σ_y,σ_z) (σ_x,-σ_y,-σ_z), respectively, it turns out that the effective, linear in k, spin-orbit coupling Hamiltonian can be written as a sum of polynomials k_xσ_y, k_yσ_x, and k_xσ_z, that are invariant under the system's symmetry σ_ v H_ SO^ eff=λ_1 k_x σ_y+λ_2 k_y σ_x+λ_3 k_x σ_z, with the parameters λ_1, λ_2, and λ_3 that need to be determined. The presence of the k_x σ_y and k_y σ_x terms is a consequence of a broken nonsymmorphic horizontal glide mirror plane symmetry of the phosphorene monolayer, while the emergence of the k_xσ_z spin-orbit fields triggered by breaking the out-of-plane rotational symmetry. In terms of the induced spin texture, the spin-orbit Hamiltonian can be divided into two parts, the in-plane (λ_1k_x σ_y+λ_2k_y σ_x), and out-of-plane (λ_3k_x σ_z) spin-orbit fields. By diagonalizing the Hamiltonian (<ref>), one can obtain the following formulas for the spin splitting and the spin expectation values of the Bloch states: Δ_ so^∓ = ∓√(k_x^2(λ_1^2+λ_3^2)+k_y^2λ_2^2), s_x^∓ = ∓k_yλ_2/2√(k_x^2(λ_1^2+λ_3^2)+k_y^2λ_2^2), s_y^∓ = ∓k_xλ_1/2√(k_x^2(λ_1^2+λ_3^2)+k_y^2λ_2^2), s_z^∓ = ∓k_xλ_3/2√(k_x^2(λ_1^2+λ_3^2)+k_y^2λ_2^2), and use them to determine the spin-orbit coupling parameters by fitting the DFT data. The fitting parameters (see Table <ref>) reproduce well the spin structure of the top valence band close to the Γ point. This is illustrated in FIG. <ref> (a)-(c) where we plot the spin-splitting energy Δ E=Δ_ so^+-Δ_ so^- and spin expectation values close to the Γ point, along the XΓY path. In FIG. <ref>(d)-(f), the angular dependence of spin splitting and spin expectation values is given by assuming the fixed |k| value (0.009 in the units of 1/Å) and varying the angle φ between the k-point vector and the x-direction from 0 to 2π. On the level of the effective model (see Table <ref>), we notice the dominant effect of the out-of-plane spin-orbit field, which is an inherent feature of the group-IV monochalcogenides <cit.> monolayers, representing the ferroelectrics with phosphorene-like atomic structure. However, in these systems, the spin is locked in the z-direction, due to symmetry, whereas in our case the more exotic spin texture is generated. Furthermore, one can compare the strengths of the spin-orbit coupling parameters in the k_x and k_y directions. In the k_x-direction, the effective strength of the spin-orbit field is equal to √(λ_1^2+λ_3^2)=0.019 eVÅ (comparable to the intrinsic spin-orbit coupling strength in ferroelectric SnS monolayer <cit.>), while in the k_y-direction, the strength is equal to 0.009 eVÅ, being roughly two times smaller than in the k_x case. How can the proximity-enhanced spin-orbit coupling influence the electron spin dynamics in phosphorene? We propose to explore spin relaxation, which is readily experimentally accessible. Indeed, in pristine phosphorene, the spin relaxation was found from theory and experiment to be dominated by the Elliott-Yafet mechanism stemming from the intrinsic spin-orbit coupling <cit.>. This competes with the Dyakonov-Perel mechanism, which is weaker due to the weak Rahsba spin-orbit coupling, although for sufficiently large out-of-plane electric fields or z-component of the crystal potential gradient ∇ V(r), it can overtake the Elliott-Yafet effect. For monolayer phosphorene, this would happen for electric fields of E ≈ 5 Vnm^-1, corresponding to the effective strength of the spin-orbit field λ_x ≈ 1.08 meVÅ in the k_x direction and λ_y ≈ 3.34 meVÅ in the k_y direction <cit.>. The values of λ_1, λ_2, and λ_3 exceed those of λ_x and λ_y. We thus predict that the Dyakonov-Perel mechanism dominates the spin relaxation in proximitized phosphorene. From the comparison of spin-orbit coupling parameters, λs', one sees that proximitized phosphorene due to WSe_2 has a pronounced anisotropy of the in-plane spin-orbit fields which is expected to yield marked spin relaxation anisotropy. Assuming the Fermi level at 2 meV below the valence band maximum, the corresponding crystal momenta are k_x = 0.015 Å^-1 and k_y = 0.0004 Å^-1, which give spin-orbit fields Ω_x=λ_2 k_y = 3.6 μeV, Ω_y=λ_1 k_x = 0.18 meV and Ω_z = λ_3 k_x = 0.22 meV. It is clear, that Ω_x will have a minor effect on spin relaxation compared to Ω_y and Ω_z. Neglecting Ω_x, and assuming isotropic momentum lifetime τ_p, the spin relaxation rates for the armchair (arm) and out-of-plane (⊥) directions the rates can be estimated as τ_s, arm^-1∼τ_p λ_3^2 ⟨ k_x^2⟩ and τ_s,⊥^-1∼τ_p λ_1^2 ⟨ k_x^2⟩, respectively, where ⟨⟩ denotes the Fermi contour average <cit.>. Electron spins polarized in the zigzag (zz) direction would relax approximately twice faster, with the rate τ_s, zz^-1∼τ_p⟨ k_x^2⟩ (λ_1^2+λ_3^2 ). Finally, one can argue that the observed spin-orbit coupling in phosphorene does not originate from the proximity-induced interaction with the strong spin-orbit coupling material, WSe_2 ML, but is a consequence of the broken symmetry of the phosphorene monolayer. To test this assumption, we compare the previously calculated spin-orbit coupling parameters (Table <ref>) with the case of the phosphorene ML, by removing the WSe_2 ML from the self-consistent calculation and keeping the coordinates of phosphorene ML obtained within the heterostructure relaxation, being the mechanism responsible for breaking the phosphorene's symmetry. In this case, the fitting of the spin-orbit Hamiltonian (<ref>) to the DFT data gives us the following parameters: λ_1^ P=-0.00065 eV Å, λ_2^ P=0.0014 eV Å, and λ_3^ P≈ 0, confirming the dominant role of the proximity-induced spin-orbit coupling effect. Note that the obtained values obey a similar trend (|λ_1^ P|<|λ_2^ P|; λ_3^ P=0), and are of the same order of magnitude as Rashba spin-orbit parameters of phosphorene in strong electric fields (∝ V/nm) <cit.>. §.§ Twist modification of proximity-induced spin-orbit coupling: an example of 60^ o twist angle Strong proximity-mediated transfer of a spin-orbit coupling from WSe_2 to phosphorene suggests that a relative change of WSe_2 band structure with respect to phosphorene by means of a twist could have a significant impact on the spin texture in phosphorene. We test this assumption by analyzing the P/WSe_2 heterostructure in which the WSe_2 monolayer is twisted for an angle of 60^ o with respect to phosphorene. The WSe_2 ML within the new heterostructure has the same number of atoms and is strained for the equal percentage as in Section <ref>; thus, it was possible to use the same parameters as before to perform the necessary DFT calculations. After fitting the model Hamiltonian (<ref>) to the DFT data, we obtain the following spin-orbit coupling parameters: λ_1=0.010 eV Å, λ_2=0.010 eV Å, and λ_3=0.015 eV Å. When compared to the values obtained in the zero twist-angle case, we can notice that a small change in parameters λ_1 and λ_2 is followed by the sign change of the λ_3 parameter. The sign change of the λ_3, corresponding to the k_x σ_z spin-orbit coupling term, can be directly connected to the fact that, instead of the ΓK branch, the ΓK' branch of WSe_2 is located on the ΓX line of the phosphorene Brillouin zone. Since at the K and K' points, the corresponding energies are equal and connected via time-reversal symmetry Θ, Θ E_|K+⟩=E_|K'-⟩, where |±⟩ corresponds to spin wavefunction with s_z=± 1/2 spin expectation value (we remind that spins in WSe_2 monolayer are locked in the out-of-plane direction), hybridization of phosphorene bands with WSe_2 bands via the spin split branch with s_z=±1/2 spin expectation value will be transferred to the branch s_z=∓1/2 with the opposite spin. The fact that the k_xσ_z term is locked to the valley of WSe_2 MLs suggests that this term is related to the valley-Zeeman spin-orbit coupling induced by the proximity effect in the studied heterostructure. § CONCLUSIONS We analyzed the proximity-induced spin-orbit coupling effects in a heterostructure made of phosphorene and WSe_2 monolayer. Giant spin splitting of WSe_2 valence bands motivated us to focus on the hole spin physics in phosphorene where, due to the broken inversion symmetry, spin splitting of the bands can occur. We discovered a significant proximity-induced spin-orbit coupling in the top valence band of phosphorene, whose origin is attributed to the strong hybridization with the WSe_2 spin split bands close to the Γ point. An effective spin-orbit coupling Hamiltonian model compatible with the C_1 v symmetry of the heterostructure is derived, and the spin-orbit parameters that fit the obtained data from ab-initio calculations to the model Hamiltonian are determined. By comparing the obtained parameters with the spin-orbit coupling values with group-IV monochalcogenide monolayers, representing the ferroelectrics with phosphorene-like atomic structure, we concluded that phosphorene is transformed into weak spin-orbit coupling material. Still, compared to electric field-induced Rashba spin-orbit coupling, the proximity-induced spin-orbit coupling is an order of magnitude larger. Finally, we showed that the twist angle can influence the spin-orbit proximity effect in a studied material. More precisely, for the twist angle of 60^ o, we reported a sign change of the out-of-plane spin-orbit field, followed by a sizable modification of the in-plane spin-orbit texture. The studied heterostructure shows that structures with incompatible symmetries can be used to generate spin textures different from the more commonly studied composites made of graphene and transition metal dichalcogenides, opening a playground for novel materials that can be used either as a target material or as a substrate in van der Waals heterostructures important for spintronics application. M.M. acknowledges the financial support provided by the Ministry of Education, Science, and Technological Development of the Republic of Serbia and DAAD Research Grant 57552336. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Programme SASPRO 2 COFUND Marie Sklodowska-Curie grant agreement No. 945478. M.G. acknowledges financial support provided by Slovak Research and Development Agency provided under Contract No. APVV-SK-CZ-RD-21-0114 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic provided under Grant No. VEGA 1/0105/20 and Slovak Academy of Sciences project IMPULZ IM-2021-42 and project FLAG ERA JTC 2021 2DSOTECH. M.K. acknowledges financial support provided by the National Center for Research and Development (NCBR) under the V4-Japan project BGapEng V4-JAPAN/2/46/BGapEng/2022. I.Š acknowledges financial support by APVV-21-0272, VEGA 2/0070/21, and by H2020 TREX GA No. 952165 project. J.F. acknowledges support from Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 1277 (Project-ID 314695032, project B07), SPP 2244 (Project No. 443416183), and of the European Union Horizon 2020 Research and Innovation Program under Contract No. 881603 (Graphene Flagship) and FLAG-ERA project 2DSOTECH. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). 50 LNZ+14 H. Liu, A. T. Neal, Z. Zhu, Z. Luo, X. Xu, D. Tománek, and P. D. Ye, ACS Nano 8, 4033 (2014). LNH+14 W. Lu, H. Nan, J. Hong, Y. Chen, C. Zhu, Z. Liang, X. Ma, Z. Ni, C. Jin, and Z. Zhang, Nano Res. 7, 853 (2014). LWL+14 L. Liang, J. Wang, W. Lin, B. G. Sumpter, V. Meunier, and M. Pan, Nano Lett. 14, 6400 (2014). ZYX+14 S. Zhang, J. Yang, R. Xu, F. Wang, W. Li, M. Ghufran, Y.-W. Zhang, Z. Yu, G. Zhang, Q. Qin, and Y. Lu, ACS Nano 8, 9590 (2014). CVP+14 A. Castellanos-Gomez, L. Vicarelli, E. Prada, J. O. Island, K. L. Narasimha-Acharya, S. I. Blanter, D. J. Groenendijk, M. Buscema, G. A. Steele, J. V. Alvarez, H. W. Zandbergen, J. J. Palacios, and H. S. J. van der Zan, 2D Mater. 1, 025001 (2014). QKH+14 J. Qiao, X. Kong, Z.-X. Hu, F. Yang, and W. Ji, Nat. Commun. 5, 4475 (2014). LA14 P. Li and I. Appelbaum, Phys. Rev. B 90, 115439 (2014). KGF16 M. Kurpas, M. Gmitra, and J. Fabian, Phys. Rev. B 94, 155423 (2016). JKG+19 P. E. Faria Junior, M. Kurpas, M. Gmitra, and J. Fabian, Phys. Rev. B 100, 115203 (2019). KJG+19 M. Kurpas, P. E. Faria Junior, M. Gmitra, and J. Fabian, Phys. Rev. B 100, 125422 (2019). PKS15 Z. S. Popović, J. M. Kurdestany, and S. Satpathy, Phys. Rev. B 92, 035135 (2015). FR19 S. M. Farzaneh and S. Rakheja, Phys. Rev. B 100, 245429 (2019). SPS14 K. V. Shanavas, Z. S. Popović, and S. Satpathy, Phys. Rev. B 90, 165108 (2014). GG13 A. K. Geim and I. V. Grigorieva, Nature 499, 419–425 (2013). HMM+20 B. Huang, M. A. McGuire, A. F. May, D. Xiao, P. Jarillo-Herrero, and X. Xu, Nat. Mater. 19, 1276–1289 (2020). SFK+21 J. F. Sierra, J. Fabian, R. K. Kawakami, S. Roche, and S. O. Valenzuela, Nat. Nanotechnol. 16, 856–868 (2021). HZL+22 M. Huang, J. Zhou, D. Chen, H. Lu, N. J. McLaughlin, S. Li, M. Alghamdi, D. Djugba, J. Shi, H. Wang, and C. R. Du, Nat. Commun. 13, 5369 (2022). SBB+22 D. Schmitt, J. P. Bange, W. Bennecke, A. AlMutairi, G. Meneghini, K. Watanabe, T. Taniguchi, D. Steil, D. Russell Luke, R. Thomas Weitz, S. Steil, G. S. Matthijs Jansen, S. Brem, E. Malic, S. Hofmann, M. Reutzel, and S. Mathias, Nature 608, 499–503 (2022). ZZN+22 W.-M. Zhao, L. Zhu, Z. Nie, Q.-Y. Li, Q.-W. Wang, L.-G. Dou, J.-G. Hu, L. Xian, S. Meng, and S.-C. Li, Nat. Mater. 21, 284–289 (2022). ZRM+23 N. L. Zaitsev, I. P. Rusinov, T. V. Menshchikova, and E. V. Chulkov, Phys. Rev. B 107, 045402 (2023). GF15 M. Gmitra and J. Fabian, Phys. Rev. B 92, 155403 (2015). GF17 M. Gmitra and J. Fabian, Phys. Rev. Lett. 119, 146401 (2017). SMK+22 K. Szałowski, M. Milivojević, D. Kochan, and M. Gmitra, 2D Mater. 10, 025013 (2023). MLH+10 K. F. Mak, C. Lee, J. Hone, J. Shan and T. F. Heinz, Phys. Rev. Lett. 105, 136805 (2010). KH12 E. S. Kadantsev and P. Hawrylak, Solid State Commun. 152, 909-913 (2012). CRG+13E. Cappelluti, R. Roldán, J. A. Silva-Guillén, P. Ordejón and F. Guinea, Phys. Rev. B 88, 075409 (2013). ZCS11 Z. Y. Zhu, Y. C. Cheng, and U. Schwingenschlögl, Phys. Rev. B 84, 153402 (2011). KZD+13 A. Kormányos, V. Zólyomi, N. D. Drummond, P. Rakyta, G. Burkard, and V. I. Fal'ko, Phys. Rev. B 88, 045416 (2013). SYZ+13 L. Sun, J. Yan, D. Zhan, L. Liu, H. Hu, H. Li, B. Kang Tay, J.-L. Kuo, C.-C. Huang, D. W. Hewak, P. See Lee, and Z. Xiang Shen, Phys. Rev. Lett. 111, 126801 (2013). KGR13 K. Kośmider, J. W. González, and J. Fernández-Rossier, Phys. Rev. B 88, 245436 (2013). ABX+14 N. Alidoust, G. Bian, S.-Y. Xu, R. Sankar, M. Neupane, C. Liu, I. Belopolski, D.-X. Qu, J. D. Denlinger, F.-C. Chou, and M. Zahid Hasan, Nat. Commun. 5, 4673 (2014). KBG+15 A. Kormányos, G. Burkard, M. Gmitra, J. Fabian, V. Zólyomi, N. D. Drummond, and Vladimir Fal'ko, 2D Mater. 2, 022001 (2015). LK19 Y. Li and M. Koshino, Phys. Rev. B 99, 075438 (2019). VGR21 M. Vila, J. H. Garcia, and S. Roche Phys. Rev. B 104, L161113 (2021). NZG+21 T. Naimer, K. Zollner, M. Gmitra, and J. Fabian, Phys. Rev. B 104, 195156 (2021). LSK+22 S. Lee, D. J. P. de Sousa, Y.-K. Kwon, F. de Juan, Z. Chi, F. Casanova, and T. Low, Phys. Rev. B 106, 165420 (2022). DRK+19 A. David, P. Rakyta, A. Kormányos, and G. Burkard, Phys. Rev. B 100, 085412 (2019). PDR+22 C. G. Péterfalvi, A. David, P. Rakyta, G. Burkard, and A. Kormányos, Phys. Rev. Res. 4, L022049 (2022). JOS+21 K.-H. Jin, E. Oh, R. Stania, F. Liu, and H. Woong Yeom, Nano Lett. 21, 9468–9475 (2021). WJ69 J. A. Wilson and A. D. Yoffe, Adv. Phys. 18, 193-335 (1969). L15 P. Lazić, Comput. Phys. Commun. 197, 324-334 (2015). QE1 P. Giannozzi et al., J. Phys.: Condens. Matter 21, 395502 (2009). QE2 P. Giannozzi et al., J. Phys.: Condens. Matter 29, 465901 (2017). PBE96 J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). HSC79 D. R. Hamann, M. Schlüter, and C. Chiang, Phys. Rev. Lett. 43, 1494 (1979). H13 D. R. Hamann, Phys. Rev. B 88, 085117 (2013). SG15 M. Schlipf and F. Gygi, Comput. Phys. Commun. 196, 36 (2015). SGH+16 P. Scherpelz, M. Govoni, I. Hamada, and G. Galli, J. Chem. Theory Comput. 12, 3523–3544 (2016). MP89 M. Methfessel and A. T. Paxton, Phys. Rev. B 40, 3616 (1989). G06 S. Grimme, J. Comput. Chem. 27, 1787-1799 (2006). BCF+09 V. Barone, M. Casarin, D. Forrer, M. Pavone, M. Sambi, and A. Vittadini, J. Comput. Chem. 30, 934-939 (2009). B99 L. Bengtsson, Phys. Rev. B 59, 12301 (1999). KF96 G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996). KF99 G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999). FY14 R. Fei and L. Yang, Nano Lett. 14, 2884–2889 (2014). GC15 L. C. Gomes and A. Carvalho, Phys. Rev. B 92, 085406 (2015). FLL+15 R. Fei, W. Li, J. Li, and L. Yang Appl. Phys. Lett. 107, 173104 (2015). GCN15 L. C. Gomes, A. Carvalho, and A. H. Castro Neto, Phys. Rev. B 92, 214103 (2015). PVL19 S. P. Poudel, J. W. Villanova, and S. Barraza-Lopez, Phys. Rev. Materials 3, 124004 (2019). AMS+19 H. Ai, X. Ma, X. Shao, W. Li, and M. Zhao, Phys. Rev. Materials 3, 054407 (2019). AI19 Moh. A. U. Absor and F. Ishii, Phys. Rev. B 100, 115104 (2019). Avsar2017 A. Avsar, J. Y. Ta, M. Kurpas, M. Gmitra, K. Watanabe, T. Taniguchi, J. Fabian, and B. Özyilmaz, Nat. Phys. 13, 888 (2017). Zutic2004 I. Žutić, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. 76, 323 (2004).
http://arxiv.org/abs/2306.02105v1
20230603131137
Adapting Pretrained ASR Models to Low-resource Clinical Speech using Epistemic Uncertainty-based Data Selection
[ "Bonaventure F. P. Dossou", "Atnafu Lambebo Tonja", "Chris Chinenye Emezue", "Tobi Olatunji", "Naome A Etori", "Salomey Osei", "Tosin Adewumi", "Sahib Singh" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
Achievable Sum Rate Optimization on NOMA-aided Cell-Free Massive MIMO with Finite Blocklength Coding Baolin Chong, Hancheng Lu, Senior Member, IEEE, Yuang Chen, Langtian Qin and Fengqian Guo Baolin Chong, Hancheng Lu, Yuang Chen, Langtian Qin and Fengqian Guo are with the Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, China. (e-mail: [email protected]; [email protected]; [email protected]; qlt315@[email protected]; [email protected]) July 31, 2023 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================== While there has been significant progress in ASR, African-accented clinical ASR has been understudied due to a lack of training datasets. Building robust ASR systems in this domain requires large amounts of annotated or labeled data, for a wide variety of linguistically and morphologically rich accents, which are expensive to create. Our study aims to address this problem by reducing annotation expenses through informative uncertainty-based data selection. We show that incorporating epistemic uncertainty into our adaptation rounds outperforms several baseline results, established using state-of-the-art (SOTA) ASR models, while reducing the required amount of labeled data, and hence reducing annotation costs. Our approach also improves out-of-distribution generalization for very low-resource accents, demonstrating the viability of our approach for building generalizable ASR models in the context of accented African clinical ASR, where training datasets are predominantly scarce. § INTRODUCTION Clinical automatic speech recognition (ASR) is an active area of research <cit.>. Several studies <cit.> showed that the use of speech recognition led to a 19-92% decrease in average documentation time, 50.3-100% decrease in turnaround time, and 17% improvement in documentation quality. In the African context where the patient burden is high <cit.> and staffing is inadequate <cit.>, clinical ASR systems have great potential to reduce daily documentation burden. In particular, there is a considerable interest in the development of ASR systems that can accurately distinguish African-accented speech, with a focus on creating systems that can accurately transcribe and interpret speech in a variety of accents and dialects <cit.>. However, developing and deploying ASR systems in an African setting is a challenging task <cit.> due to the diversity of African languages, with several dialects and variants characterized by low-resource environments and limited data availability <cit.>. As a result of scarce data and resources, it is also challenging to pretrain large multilingual language models for African languages <cit.>. In contrast, recent advances in data collection and annotation techniques <cit.> have enabled the creation of large-scale datasets that can be used to train and test African-accented ASR systems. Combined with advancements in machine learning algorithms and deep learning techniques <cit.> these methods have resulted in significant improvements in clinical information extraction and ASR, demonstrating great potential to reduce annotation costs. Also, the use of automatic annotation tools <cit.> has significantly reduced the cost of annotation, bringing performant ASR models within reach for African clinical organizations <cit.>. Our experiments in this work are designed to allow us to answer the following specific questions: * given a set of African accents A = { A_1, A_2,..,A_k}, how does the model adapt to A across the adaptation rounds and domains? * which selection strategy works better, and for which domain(s)? * which domain(s) help(s) the model to do better, and how does the model perform (in terms of uncertainty) across the domain(s)? * is epistemic uncertainty-based selection effective in low-resource data scenarios? * is uncertainty-based selection, model, and dataset agnostic? Answering these questions, our contributions through this paper are the following: * we propose an iterative model adaptation process that uses epistemic uncertainty-based data selection approach to build cost-efficient, robust, and linguistically diverse automatic speech recognition systems in a Pan-African clinical context, * we show that our approach reduces significantly the required amount of labeled data (by ∼ 35-45%) while outperforming several high-performing ASR models, and improving out-of-distribution generalization for very low-resource accents, * we investigate trends in domain selection across adaptation rounds using our best-performing selection approach, * and finally, we explore the nascent field of African clinical ASR and establish strong baselines that provide room for further exploration in this research direction. § RELATED WORKS §.§ African Clinical Speech Recognition Although the 2000+ African Languages make up about 29% of the 7000+ languages spoken in the world <cit.>, almost all African languages are classified as "low-resource" <cit.>. Despite advancements in clinical ASR over the past three decades <cit.>, research or applications in the African clinical context are not well investigated in the literature, likely a product of the paucity of data for accented languages, inadequate research funding, data-hungry approaches well-suited to high-resource settings, and most importantly, data-efficient approaches are inadequately explored. §.§ African health system and maturity of clinical ASR in developed nations Healthcare systems in many African countries are overworked and underfunded <cit.> struggling to meet teeming demand while experiencing a major shortage of skilled health personnel <cit.>. Despite local attempts to increase health workers <cit.>, found a 1.55 health worker per 1000 persons ratio, lower than the WHO-recommended 4.45 health workers per 1000 persons. ASR deployments in several clinical applications (neurodegenerative disorders <cit.>, stroke <cit.>, Alzheimer's <cit.>, and psychotherapy <cit.>) from developed nations <cit.> provide ample evidence that clinical ASR can improve healthcare operational efficiency (throughput, documentation time, and document quality) all of which are beneficial in the overwhelmed African setting <cit.>. However, ASR is still in its early stages of development, with many obstacles to adoption <cit.>. A few countries, such as Kenya <cit.>, Nigeria <cit.> South Africa <cit.>, and Tanzania <cit.>, have however begun to invest in ASR with promising results. Despite its potential to transform healthcare systems, ASR suffers from racial bias and poor performance on accented speech <cit.>, particularly African accents. Recognizing heavy accents in speech data is a challenge for ASR systems since it contains high variability in terms of pronunciations <cit.>. These racial disparities pose a high risk to the future of clinical ASR. In recent years, we have seen a drastic improvement in ASR systems due to more advanced deep learning systems and the availability of data. It is however shocking that racial disparities and poor performances in accented speech still exist for black speakers. Typically, ASR system generalization is hard to achieve due to difficulties in getting good speech transcript data, including for African accents. The different dialects and accents of many languages introduce some level of difficulty in transcribing such data. These variations are much more complicated in the case of African languages, as speech data in general is more difficult to source. For this reason, there is a poor performance of ASR in African languages and even worse for clinical data. Improving ASR on accented speech and linguistic diversity is necessary to avoid speakers of other languages having to change their way of speaking <cit.>. This most likely makes speakers feel excluded, and the social impact is high. On the other hand, clinical ASR will need more attention, as being misunderstood by an ASR system could be more harmful to a patient. There are advances in research towards the improvement of ASR for African languages <cit.>. Interestingly, there are no available benchmarks for clinical ASR for African languages. To accomplish this, there is a need to create resources for identifying areas for improvement, such as creating a variety of language datasets and robust models. § DATASETS AND UNCERTAINTY-BASED SELECTION METHODOLOGY To investigate the effectiveness of our approach, we impose data-efficiency constraints, choosing fewer training examples than available to not exceed 60-65% of the entire initial dataset. We assign 30% of the initial dataset as our training set and assign the remaining 70% as the augmentation set. Within the defined constraints, we adjust the training and augmentation sizes based on the number of training examples available for each domain (see Tables <ref>, <ref> and Appendix <ref>). §.§ Datasets In this work, we primarily explore the AfriSpeech-200 dataset, a novel 200 hr accented English speech corpus curated for clinical and general domain ASR. AfriSpeech-200 is crowdsourced from over 2000 African speakers, representing 13 Anglophone countries across sub-Saharan Africa and the US (see Table <ref>). AfriSpeech transcripts are sourced from the WikiText-103 dataset, web scraping of African news websites, over 90,000 African names from <cit.>, 965 Nigerian Igbo names from <cit.>, a list of African cities from Wikipedia, and biomedical sentences from PubMed, NCBI disease corpus, and AfriSpeech Templates. To establish the dataset-agnostic nature of our approach, we explore three additional datasets: (1) SautiDB <cit.>, a dataset of Nigerian accent recordings with 919 audio samples, a sampling rate of 48kHz each, for a total amount of 59 minutes of recordings; (2) Medical Speech[<https://www.kaggle.com/datasets/paultimothymooney/medical-speech-transcription-and-intent>] which contains 6,661 audio utterances for common medical symptoms like knee pain or headache, for a total of 8 hours of recordings; and (3) CommonVoice English Accented Dataset, a subset of English Common Voice (version 10) <cit.>, with western accents removed as our primary interest is in low-resource settings. §.§ Epistemic Uncertainty Sampling Epistemic Uncertainty (EU) is the uncertainty of the model, most often due to a lack of training data. EU is usually defined with the variance (for a given model g with some parameters θ_t) as follows V(g(x, θ)) = _θ_t∼ q[g(x, θ_t)^2] - (_θ_t∼ q[g(x, θ_t)])^2 = 1/T∑_i=1^Tf(x, θ_t)^2 - (1/T∑_i=1^Tf(x, θ_t))^2 where θ_t∼ p(θ|𝒟). In our experiments, we decided to use the standard deviation defined (STD) as STD = √(V(g(x, θ))). Several works in machine learning, on multiple contexts like biological sequence design, image data, and language modeling, have shown the efficiency of incorporating epistemic uncertainty in model training, coupled with AL <cit.>. EU is useful for the model's training as it improves its robustness and encourages exploration to mitigate inductive bias related to underrepresented accents (accents with fewer speech data and speakers). One of the most popular ways to estimate uncertainty is by inferring predictive distributions with Bayesian neural networks. A predictive distribution is defined as p(y|x, 𝒟) where y is the target, x is the input and 𝒟 is the dataset. One can thus inspect the variance (or alternatively the standard deviation, as we used in our experiments) and uncover uncertainty, using the predictive distribution. A way of learning the predictive distribution is to learn a distribution over the parameters i.e. the parametric posterior distribution p(θ|𝒟). There are many ways of computing epistemic uncertainty, but the two most common approaches are Monte Carlo Dropout <cit.> and Deep Ensembles <cit.>. Many empirical results demonstrated that Deep Ensembles work better in general while being very computationally expensive. Due to computational resource constraints, we decided to empirically demonstrate the effectiveness of our approach using McDropout. MC Dropout provides a scalable way to learn a predictive distribution. It works by randomly switching off neurons acting as regularizers in a given neural network. Each dropout configuration corresponds to a different sample from the approximate parametric posterior distribution p(θ|𝒟). Sampling from the (approximate) posterior enables Monte Carlo integration of the likelihood of the model, which uncovers the predictive distribution. Our uncertainty-based selection pipeline is detailed in Figure <ref> and in the algorithm <ref>. In each adaptation round, we use a finetuned model, to select the most uncertain samples from the augmentation set and add them to the training set for the next round of the model adaptation. In this work, we use the following three state-of-the-art (SOTA) ASR models: Wav2Vec2-XLSR-53 <cit.>, Hubert-Large <cit.> and NVIDIA Conformer-CTC Large (en-US) <cit.>, which we hereafter refer to as wav2vec, Hubert, and Nemo respectively. Hubert leverages the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. Wav2vec learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. Nemo combines convolutional neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. §.§ Experimental Design In our work, we explored two selection modes: most and random. The most strategy consists of selecting the most uncertain audio samples from the augmentation set, while the random method, as its name indicates, consists of selecting random audio samples across the augmentation dataset. In this work, we used the words "sampling" and "selection" interchangeably with the same meaning. We performed various experiments aimed at evaluating the efficiency and applicability of our proposed approach. Each experiment at the model level is performed in three domains: general, clinical, and both (which is the combination of general and clinical data). For each domain, we have three setups: (a) baseline which consists of simply finetuning a pretrained model on the entire dataset, (b) EU-Most where the pretrained model is finetuned within our framework using the most uncertain samples, and (c) EU-Random uses random selection. In order to answer questions 1-4, we use the epistemic uncertainty of audio samples, hereafter referred to as Uncertainty WER (U-WER). The rationale behind the U-WER is the following: as we acquire new data points that are intrinsically beneficial to the model and its learning, the U-WER should (consistently) decrease or be constant across the active learning rounds. We could therefore affirm that the model is gaining in robustness, knowledge, and performance; which is a very important aspect of generalization. This last point will be enforced by answering question 5 for which we used the standard WER. To answer question 5, we evaluated our approach with two speech pretrained models (Nemo, and Hubert), and across 3 external datasets (SautiDB, CommonVoice, and MedicalSpeech). Additionally, to answer questions 1-4, for consistency and better visualization, we consider exclusively the top-10 (in terms of frequency) accents that are present across three adaptation rounds and both selection modes. Regarding the very low-resource settings, we consider the 5 accents with the least number of hours of recordings. For our experiments, we use various types of GPUs: 6 RTX 800 (with 48 and 128 GB), and 4 A100 (with 48 and 128 GB). Our training and evaluation were constantly deployed over a month. Our models have ∼311281844 trainable parameters. Each audio sample has been normalized and processed with a sample rate of 16kHz. § RESULTS AND DISCUSSION Table <ref> presents the results for wav2vec, the primary model used to validate our hypotheses. We observe that our uncertainty-based selection approach yields considerable performance gains over the baseline. More importantly, we observe that the most selection provides the best results across all domains, especially the clinical and both (combined) domains. On the one hand, intuitively, the predominance of most makes sense because, through continual and progressive exposure to the most uncertain samples, the model updates its parameters to become more robust to the noise from uncertain samples. Therefore, it learns to better represent those uncertain samples, yielding a performance improvement. On the other hand, we anticipate that the gaps in performance are more noticeable in the general and clinical domains because their audio samples are the most difficult to capture due to various factors (e.g., domain-specific clinical jargon with several abbreviations for several medical subdomains). This result and analysis provide an answer to question 2, establishing most as the most beneficial selection strategy across all domains. This is also confirmed in Figures <ref>, <ref> and <ref>. Given the diversity of traits (accents, speakers, gender, age) across the dataset, we attempt to characterize the audio samples and domains that provide the best learning signal to the model. To ground our analysis, we evaluated accent information for top-k uncertain samples. The results are presented in Figures <ref>, <ref> and <ref>. For further analyses, we focus only on most results. For each domain, we find that the top-10 accents are consistent across all rounds (see Figures <ref> (a), <ref> (a), and <ref> (a)), representing the accents with the top-10 highest number of recording hours. With each successive most uncertain top-k samples, we observe that those accents remain the most uncertain and therefore the most difficult to transcribe. From the country and accent statistics (Tables <ref>, <ref> and <ref>), we observe that those languages have the highest number of speakers and geographical spread across many countries. This interesting observation can be understood as the linguistic richness factor, which potentially explains the high variance induced in the dataset. This implies that, in general, the accents that benefit the model and its learning are the ones that are linguistically rich and diverse. Generally speaking, the model adapts and continuously improves its performance on those accents across the rounds. However, it is also important to notice that for the clinical and both domains (see Figures <ref> (a), and <ref> (a)), the accent level analysis also reveals some perturbation and noise in the WER variation, which remains constant or slightly increases from Round 2 to Round 3: this suggests that for a critical and complex domain (like the clinical one), there is, and should be, a trade-off between linguistic diversity, rounds, and the value of k samples to be selected. Ultimately, we can positively answer questions 1 and 3, affirming that the model adapts qualitatively and quantitatively well to the beneficial accents and benefits from all domains. Figures <ref> (b), <ref> (b), and <ref> (b) also provide a positive answer to question 4, as we can see a generally improving (or steady) performance on truly low-resource accents in our training dataset, across domains and rounds. This makes our approach very relevant to the low-resourcefulness situation prevalent in many African languages. Finally, to demonstrate the effectiveness of our approach under different models and datasets, we experiment with two pretrained models (Hubert and Nemo), and with three datasets containing accented speech on general and clinical domains, only using most selection method. The results are respectively presented in Tables <ref> and <ref>. The main takeaway is that our uncertainty-based adaptation approach performs better than baselines. This efficiently demonstrates that our approach can be applied to any model architecture and with any dataset. § CONCLUSION We proposed an uncertainty-based model adaptation process that uses epistemic uncertainty, as an effective approach to build cost-efficient, robust, and linguistically diverse automatic speech recognition systems in a Pan-African Clinical context. We designed two selection methods: most and random sampling, and showed in all our experiments, that our proposed approach achieves better performance across different models and datasets. We also showed that the most uncertainty-based selection method, provided better results compared to the random method. Finally, our analyses suggest that our approach enables ASR models to select and learn from the most informative data samples making it very suitable for low-resource settings. § LIMITATIONS In section <ref>, the results and our analyses revealed that while our approach provided performance improvements, and benefited from linguistically rich accents, we observed that for complex domains (such as clinical and both), it is important to have a stopping criteria and a trade-off between the number of adaptation rounds and the amount of new data points selected at each round. We believe doing such analyses will provide useful insight to improve our current approach. Also, similar to machine learning and deep learning approaches, this work was limited by the availability of and access to computational resources. With more computational resources, we would explore Deep Ensembles <cit.> as an alternative to our current approach to estimating epistemic uncertainty from the model. acl_natbib § HYPER-PARAMETERS Table <ref> shows the hyper-parameter settings used in this study. The top_k value in the table is changed according to the domain used in each of the experiments. For example, when conducting experiments in the general domain, we set the value of top_k to 2k. § COUNTRY STATISTICS Table <ref> shows the statistics of the countries across the dataset. § DATASET ACCENTS STATS Tables <ref> and <ref> provide a list of AfriSpeech accents along with the number of unique speakers, countries where speakers for each accent are located, duration in seconds for each accent, and their presence in the train, dev, and test splits. § MOST COMMON ACCENT DISTRIBUTION Figures <ref> and <ref> show the most common accent distribution across the general domain in both random and most selection modes. § ASCENDING AND DESCENDING ACCENTS Figure <ref> shows ascending and descending accents across the Top 2k most uncertain samples.
http://arxiv.org/abs/2306.09317v1
20230615175237
QCD resummation of dijet azimuthal decorrelations in pp and pA collisions
[ "Mei-Sen Gao", "Zhong-Bo Kang", "Ding Yu Shao", "John Terry", "Cheng Zhang" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-ex", "nucl-th" ]
=1 5pt -3pt
http://arxiv.org/abs/2306.01431v1
20230602104247
On Knowledge Editing in Federated Learning: Perspectives, Challenges, and Future Directions
[ "Leijie Wu", "Song Guo", "Junxiao Wang", "Zicong Hong", "Jie Zhang", "Jingren Zhou" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.DC" ]
Zero-Shot Blind Audio Bandwidth Extension Eloi Moliner, Filip Elvander, Member, IEEE, and Vesa Välimäki, Fellow, IEEE Manuscript received June 1, 2023; revised XXX YY, 2023. This research is part of the activities of the Nordic Sound and Music Computing Network—NordicSMC, NordForsk project no. 86892. (Corresponding author: Eloi Moliner) E. Moliner, F. Elvander, and V. Välimäki are with the Department of Information and Communications Engineering, Aalto University, Espoo, Finland (e-mail: [email protected]). July 31, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ As Federated Learning (FL) has gained increasing attention, it has become widely acknowledged that straightforwardly applying stochastic gradient descent (SGD) on the overall framework when learning over a sequence of tasks results in the phenomenon known as “catastrophic forgetting”. Consequently, much FL research has centered on devising federated increasing learning methods to alleviate forgetting while augmenting knowledge. On the other hand, forgetting is not always detrimental. The selective amnesia, also known as federated unlearning, which entails the elimination of specific knowledge, can address privacy concerns and create additional “space” for acquiring new knowledge. However, there is a scarcity of extensive surveys that encompass recent advancements and provide a thorough examination of this issue. In this manuscript, we present an extensive survey on the topic of knowledge editing (augmentation/removal) in Federated Learning, with the goal of summarizing the state-of-the-art research and expanding the perspective for various domains. Initially, we introduce an integrated paradigm, referred to as Federated Editable Learning (FEL), by reevaluating the entire lifecycle of FL. Secondly, we provide a comprehensive overview of existing methods, evaluate their position within the proposed paradigm, and emphasize the current challenges they face. Lastly, we explore potential avenues for future research and identify unresolved issues. The concept of Federated Learning (FL) has garnered increasing attention as a means of conducting collaborative training on decentralized clients while preserving the privacy of individual data. Recent studies have shown that the knowledge of the overall framework is not static, leading to the development of counterpart techniques such as federated increasing learning and federated unlearning. However, there remains a lack of extensive surveys covering recent advances and thorough analysis of this issue. In this paper, we present a comprehensive survey on knowledge editing in FL, aiming to summarize the cutting-edge research and broaden the horizons for different domains. Firstly, we propose an integrated paradigm namely Federated Editable Learning (FEL) by rethinking the complete lifecycle of FL. Second, we summarize existing methods, examine their positions in that paradigm, and highlight their current challenges. Finally, we discuss some promising directions and open problems for further research. § INTRODUCTION Federated Learning (FL) <cit.> facilitates the collaborative learning of a global model by multiple local clients, while concurrently ensuring secure protection of privacy for each individual client. It effectively addresses the issue of data silos without completely compromising the privacy of the clients. In recent years, FL has garnered significant attention in the academic community and achieved remarkable successes in a variety of industrial applications such as autonomous driving <cit.>, wearable technology <cit.>, and medical diagnosis <cit.>. In general, the majority of existing FL methods <cit.> are formulated for static application scenarios, where the data and tasks of the overall FL framework are fixed and known ahead of time. However, in real-world applications, the situation is often dynamic, where local clients receive new task data in an online manner. To handle this type of situation, researchers are investigating how FL can be adapted to learn continuously over a sequence of tasks. It has become widely acknowledged that utilizing straightforward stochastic gradient descent (SGD) on FL when learning over a sequence of tasks results in the phenomenon known as “catastrophic forgetting”, which implies that the model forgets what it had previously learned when acquiring new knowledge <cit.>. As a result, a significant proportion of researchers have focused on devising methods namely federated increasing learning to augment knowledge while concurrently mitigating the problem of forgetting <cit.>. On the other hand, forgetting is not always detrimental. Selective amnesia, also referred to as federated unlearning <cit.>, which involves the elimination of specific knowledge, can address privacy concerns even create additional “space” for acquiring new knowledge. It is possible that in the future, FL will be required to completely remove any indication of having learned a specific data or task. As we look towards the future, imagine a FL service provider whose system is continuously updated by learning new skills from the data collected from its customers' daily lives. Occasionally, the provider may be required to delete previously acquired behaviors and/or knowledge regarding specific tasks or data that have been identified as raising potential fairness <cit.>, privacy <cit.>, or security concerns <cit.>. Nevertheless, there is a scarcity of extensive literature reviews that encompass recent advancements and provide a detailed examination of this subject matter. As of the time of writing this paper, federated unlearning has not yet been well studied in the federated increasing learning setting where the underlying data distribution can shift over time. In this paper, we undertake a comprehensive survey of the field of knowledge editing (augmentation/removal) in FL, with the aim of synthesizing the most recent research advancements and broadening the understanding of its potential applications across various domains. Overall we make the following contributions: * We introduce an integrated paradigm, referred to as Federated Editable Learning (FEL), by reevaluating the entire lifecycle of FL. * We present a thorough examination of existing methods, assess their position within the proposed framework, and highlight the current limitations and challenges they encounter. * We investigate the areas for future research and pinpoint unresolved issues. Towards efficient lifelong knowledge editing in FL, enabling FL to precisely forget what the user has specified without deteriorating the rest of the acquired knowledge; or, FL to not alter the model's behavior in other contexts when augmenting knowledge. § FEDERATED EDITABLE LEARNING In this section, we will introduce the concept of the Federated Learning Lifecycle, including its background, motivation, and definition. Different from most existing FL applications, we emphasize that a complete FL lifecycle should not only focus on the learning process to obtain a well-trained model, but also empower the reverse unlearning process to ensure user privacy protection. Therefore, we propose our Federated Editable Learning (FEL) framework to support the sustainable development of a federated learning system. §.§ The Lifecycle of Federated Learning Artificial Intelligence (AI) has become an essential component of life today, which achieves significant successes in various domains, such as Computer Vision (CV) <cit.>, Natural Language Processing (NLP), etc <cit.>. With the increase in advanced sensing and computing capabilities of ubiquitous mobile devices, AI architectures are gradually shifting from traditional data-centralized cloud server to the distributed edge. Besides, considering the importance of user data privacy protection, the federated learning (FL) concept has been proposed. As an emerging and novel distributed machine learning paradigm, FL adopts collaborative model training on extensive user devices to obtain a model containing globally shared knowledge. Only model parameters are exchanged between the server and user devices, so that the user data never leaves the local side and its privacy protection is guaranteed. Therefore, involving more user device participation to contribute their data for training is an important and critical principle in the FL scenario. However, in practical FL applications <cit.>, the FL system is always in a dynamic changing process, which can be divided into the following cases: * Data Dynamic: The local data of user devices already involved inside the FL system is constantly updated (generating new data & deleting obsolete data). * Device Dynamic: The participated user devices of the FL system are also changing (new devices join & old devices exit). For example, user devices from different time zones have their own available periods. In fact, the essence of both two dynamic cases is all about the data flow in the FL system. Besides, the current mainstream machine learning models also have their own constraints. Given a specific model architecture, there is an upper limit on the knowledge amount that the model can contain <cit.>. Generally speaking, a larger model with more parameters can learn more knowledge. Therefore, for an FL system with given model architecture, the global model has to keep updating its knowledge to adapt the above system dynamic and knowledge constraints. This model updating involves not only learning new knowledge from new data or devices, but also removing the negative effects of obsolete data or devices from the current model. We refer to them as “Learning process" and “Unlearning process", respectively. However, the majority of the existing FL frameworks mainly focus on the learning process, while the unlearning process is neglected <cit.>. A fixation on only learning new knowledge can lead to the model quickly reaching its knowledge upper limit and thus being unable to further adapt to the system dynamic. The “unlearning process" is also a necessary component of the FL system, which can help remove obsolete knowledge and create space for new knowledge in the future. Based on the above insights, we are the first to propose the concept of lifecycle for the current FL paradigm, where the demonstration of a complete FL lifecycle is shown in Figure <ref>. It incorporates both sides of “Learning" and “Unlearning " to achieve the expected model knowledge editing, which enables the sustainable development of the FL system. In addition, auditing the results of model editing is also a crucial element for the FL lifecycle. In the learning process, we need to ensure that the user device has performed the corresponding training requirements honestly and credibly. In the unlearning process, we need to ensure that the knowledge of deleted data is fully removed from the current model, while the knowledge of remaining data is kept unchanged. §.§ What is Federated Editable Learning Carrying the above concept of FL lifecycle, we introduce our framework named Federated Editable Learning (FEL) as the cornerstone for a perfect implementation of the FL lifecycle. The ultimate objective of FEL is to empower user devices to freely control their own private data in the FL system, while ensuring the FL global model can adaptively adjust its knowledge to handle the system dynamics. On the one hand, the user devices have the right to decide which part of their data will be contributed to participate in the collaborative training process of FL system, and the knowledge contained in limited data can be absorbed into the FL global model. On the other hand, the user devices should also have the right to revoke their previously participated data from the FL system, i.e., deleting the historical influences induced by participated data from the current FL global model. The model after deletion operation should behave as if these data never participate in FL training, and those relevant obsolete knowledge in the model also need to be removed. Therefore, to achieve the objectives of FEL in both aspects, we characterize the existing FL-related works into two categories: Federated Increasing Learning (FIL) and Federated Unlearning (FU). Federated Increasing Learning (FIL): The work in this category focuses on how to obtain new knowledge for the global model from the constant data flowing into the FL system, and there are several critical challenges that need to be addressed in FIL. First, the contributed data of user devices may only occur once in the FL system, the server must leverage the only opportunity to derive knowledge from this single participation. To address this challenge, we provide a comprehensive survey on Federated Continual Learning (FCL), where more details are provided in Sec.<ref>. Second, user devices are constrained by limited resources (e.g., memory space, computation unit, etc), which may result in a very limited amount of data being generated on them. As we know, good knowledge representation of AI comes from the big data analysis from massive amounts of data. Thus, how to extract knowledge with generalization from a small amount of specialization data is a serious challenge. We discover the success of many existing works on Federated Few-shot Learning (FFsL) to handle the above challenge, and provide an exhaustive survey to summarize the current research frontier. Federated Unlearning (FU): The work in this category focuses on deleting the obsolete data as well as its historical influence in the current model, which not only protects the user data privacy but also creates "space" for new knowledge in the future. A straightforward way is to retrain a new model from scratch with the remaining data only. However, naive retraining demands huge computational resources and time costs, which is completely unacceptable for an FL system. Therefore, we provide a detailed survey about the existing advanced or optimized retraining-based methods in Sec.<ref>. Except for exact retraining, approximate unlearning methods are the mainstream in the current FU field. Its objective is to generate an approximate unlearning model in a fast and computationally efficient manner, whose behavior is almost equivalent to an exact retraining model. A comprehensive survey about approximate FU is provided in Sec.<ref>. § FEDERATED INCREASING LEARNING In this section, we are going to consider Federated Increasing Learning (FIL) problems, which involve the federated training over time. In the standard FL setting, the objective is to build a joint model using a certain amount of data from a multitude of coordinated devices in a decentralized way. One typical assumption in standard FL is that the whole training dataset is available from the beginning of the training stage. However, this assumption rarely holds in real-world FL applications, where local clients often collect new data progressively, during several days, or weeks, depending on the context. Moreover, new clients with unseen new data may participate in the FL training, further aggravating the model and could be unable to converge to a solution. For these reasons, we need to introduce Increasing Learning (IL) <cit.> into FL. FIL research gains a lot of importance since it addresses the difficulties of training a model gradually with data collected over different periods of time, adapting to the new instances and trying to preserve the previous knowledge. §.§ Challenges in Federated Increasing Learning In the FIL setting, each local client collects the training data continuously with its own preference, while new clients with unseen new data could join the FL training at any time. More specifically, the data distributions of the collected classes across the current and newly-added clients are non-independent and identically distributed (non-i.i.d.). FIL requires these local clients to collaboratively train a global model to learn new data continuously, with constraints on privacy preservation and limited memory storage <cit.>. Under such circumstances, an ideal framework should recognize new classes and meanwhile maintain discriminability over old classes, which is called Federated Continual Learning (FCL). The main difficulty in FCL is catastrophic forgetting <cit.>. Catastrophic forgetting refers to the phenomenon that occurs when optimizing the model with new classes, the formerly acquired knowledge on old classes is quickly forgotten. Under severe circumstances, only limited novel instances are available to incrementally update the model. Meanwhile, local clients often have very limited storage memory to store few-shot old data. As a result, the task of recognizing few-shot new classes without forgetting old classes is called federated few-shot class-incremental learning. Such lack of data would further exacerbate local forgetting caused by class imbalance at the local clients and global forgetting brought by the non-i.i.d class imbalance across clients. §.§ Federated Continual Learning Mostly FCL methods address task-continual learning (task-CL) scenario <cit.>, where information about the task-ID of examples is known at test time. However, more challenging scenario is class-continual learning (class-CL), where the model has to distinguish among all the classes of all the tasks at test time <cit.>. In the following part, we will review the literature on task-based FCL and class-based FCL, respectively. For the task-CL problem in FL, a large number of works focus on the problem of catastrophic forgetting. For example, LFedCon2 <cit.> aims to use light, traditional classification models, e.g., a generalized linear model (GLM), a decision tree (DT), to support real-time, continuously and autonomously learning phase in a privacy-preserving and decentralized manner. Yoon et al., <cit.> propose FedWeIT, in which each client learns a series of tasks from the private local data stream, meanwhile different clients can also learn from others to enhance their learning performance. Specifically, a learnable mask vector is trained to filter the relevant knowledge from other clients during the aggregation phase. To solve the concept drift (i.e., the underlying distribution of data can change in unforeseen ways over time), CDA-FedAvg <cit.> designs a detection mechanism to monitor concept drift, so that each device has enough autonomy to decide when to train and what data to use. By such means, the server will simply orchestrate the process. FedCurv <cit.> and FedCL <cit.> adopt EWC <cit.>, which aims to improve the generalization ability of the federated models. For class-CL scenarios in FL, FCL methods can be divided into the following three types <cit.>: 1) Regularization-based approaches, which compute the importance of weights for previous tasks and penalize the model for changing them (i.e., FLwF <cit.> use distillation loss to transfer the knowledge from the server and decrease the forgetting of previously learned tasks); 2) Exemplar-based approaches, which store exemplars from previous tasks, i.e., Hendryx et al., <cit.> use federated prototypical networks to efficiently learn new classes in sequence; 3) Bias-correction approaches, which deal explicitly with bias towards recently-learned tasks (Legate et al., <cit.> adopt Truncated Cross Entropy (TCE) to force each client to learn by adapting the model’s internal representation of the classes present in its training data). §.§ Federated Few-shot Learning With the goal of extracting the inductive bias from base classes and generalizing it to unseen classes, few-shot learning has been widely explored in recent years. However, training FSL models on distributed devices is still an open problem. The first work of this topic was from Chen et al., <cit.> who explored federated meta-learning by applying FedAvg on meta-learning approaches such as MAML <cit.> in a straightforward way. Another line of work focuses on data augmentation to alleviate data scarcity, i.e., FewFedWeight <cit.> proposes an energy-based weighting algorithm for updating the weights of pseudo examples generated by the global model and a dynamic aggregation method based on the performance of client models. Then, FedFSL <cit.> formulates the training in an adversarial fashion and optimizes the client models to produce a discriminative feature space that can better represent unseen data samples, while FedAffect <cit.> considers a more challenge scenario: local participants design their models independently. Besides that, CSFedL <cit.> proposes an adaptive client selection strategy to mitigate the impact caused by malicious participation, to obtain a more effective few-shot model. § FEDERATED UNLEARNING After the FL training of a model is completed, clients may require the FL server to remove parts of data contribution from the global model to protect the user’s privacy and avoid legal risks. The scenario is called federated unlearning. The server should transform the model into an updated one that operates as if those deleted data never participated in FL training. In the section, we first discuss several major challenges in federated unlearning and then summarize the emerging federated unlearning works from perspectives of exact federated unlearning and approximate federated unlearning. §.§ Challenges in Federated Unlearning Compared with traditional machine learning, the characteristics of FL bring three major challenges to the unlearning technique as follows. 1) Iterative Learning: At the beginning of each round in FL, the model of each client is the aggregation result for all clients in the previous round. Such an intertwining of client training results in each round leads to the fundamental challenge of federated unlearning. 2) Information Isolation: Privacy protection, one of the major advantages of FL, prevent FL servers from accessing the client data. In other words, every client maintains its data samples and trains the model locally. 3) Stochastic Training: The process of FL training is non-deterministic. For each round, the FL server randomly selects the clients for global aggregation while each client randomly selects and orders batches of data for local training. §.§ Exact Federated Unlearning A naive way to make the best FL model that provably forgets the target data is to retrain a new model based on the remaining data from scratch. However, it is prohibitively expensive for an FL server to fully retrain a model in terms of computation and time overhead. Some works are designed to achieve the unlearning in such a way that the produced models are effectively the same as the ones obtained with retraining but at a cheaper computing cost. We call these works exact federated unlearning, which are summarized as follows. Some ensemble learning-based works are designed for machine unlearning originally, however, their idea can be applied in federated unlearning. For example, <cit.> propose an efficient exact unlearning framework. It divides the dataset into several isolated sub-datasets, each corresponding to a sub-model, accelerating the retraining process and ensuring the retrained model's accuracy. <cit.> present a novel neural network named LegoNet, composed of a fixed encoder (i.e., the backbone for representation learning) and multiple isolated adapters to be retrained for unlearning. The adapters occupy few parameters of LegoNet; thus, the re-trained parameters during unlearning can be significantly reduced. Moreover, without compromising the privacy of clients in FL, <cit.> develop a cryptography-based approach for federated unlearning. It presents a revocable federated learning framework for random forest (RF) called RevFRF by designing a customised homomorphic encryption-based protocol. RevFRF guarantees two levels of unlearning: 1) the remaining participants cannot utilize the data of an honest and leaving participant in the trained model; 2) a dishonest participant cannot get back to utilize the data of the remaining participants memorized by the trained model. §.§ Approximate Federated Unlearning Although the existing works for exact federated unlearning alleviate the expense of retraining to some extent, their cost is still unacceptable in most FL applications. Thus, recent works achieve higher efficiency of federated unlearning by relaxing the effectiveness and certifiability requirements for the new model after unlearning, which is called approximate federated learning. §.§.§ Gradient Recovery To overcome the high resource cost caused by the model retraining of the exact federated unlearning described in Sec.<ref>, the gradient recovery-based approach reconstructs the unlearned model based on the historical parameter updates of clients that have been retained at the FL server during the training process. <cit.> propose the first federated unlearning approach named FedEraser, reconstructing a new model based on the historical parameter updates of clients stored in the FL server. To speed up the retraining while maintaining the model performance, FedEraser has a calibration method for the stored historical updates. <cit.> propose an efficient retraining algorithm based on the diagonal empirical Fisher Information Matrix (FIM) for FL, by observing the first-order Taylor expansion of the loss function during the unlearning process. Moreover, to reduce approximation errors in retraining, the proposed algorithm has an adaptive momentum technique. <cit.> propose an FL model recovery method to recover a model from poisoning attacks using historical information rather than training from scratch. For each recovery, the server can estimate the model update of a client in each round based on its stored historical information during the past training process. <cit.> propose a federated recommendation unlearning method tailed for FL-based recommendation systems (FedRecs). The main idea is to revise historical updates and leverage the revised updates to speed up the reconstruction of a FedRec. §.§.§ Parameter Updating The above works for federated unlearning via gradient recovery require the FL server to store historical updates, which burdens the server. Therefore, another group of federated unlearning is to scrub the trained FL model of information to be forgotten, which we summarize as follows. <cit.> propose a federated unlearning method to eliminate a client’s contribution by subtracting the accumulated historical updates from the model and leveraging the knowledge distillation method to restore the model’s performance without using any data from the clients. <cit.> develop a Bayesian federated unlearning method called Forget-Stein Variational Gradient Descent (Forget-SVGD) based on SVGD, a particle-based approximate Bayesian inference approach via gradient-based deterministic updates. <cit.> allow a client to perform the unlearning by training a model to maximize the empirical loss via a Projected Gradient Descent algorithm. The previous works mainly focus on client-level federated unlearning (i.e., removing the data of a specific client from the model). To solve the limitation, <cit.> propose a general framework covering client-level, class-level, and sample-level federated unlearning. The framework comprises a reverse stochastic gradient ascent (SGA) algorithm with elastic weight consolidation (EWC) to achieve fine-grained elimination of training data at different levels. §.§.§ Architecture Modification Some approaches implement federated unlearning by modifying the model architecture. For example, <cit.> propose a channel pruning-based method to remove information about particular classes in an FL model. Its main idea is to quantify the class information learned by each channel without globally accessing the data, and then forget special classes by pruning the channels with the most class discrimination. <cit.> propose an output filtering technique to remove particular classes in logit-based classification models by applying linear transformation to the output logits, but do not modify the weights in the models. §.§.§ Noise Perturbation <cit.> focuses on randomly perturbing the trained model to unlearn specific data samples, which is motivated by the idea of differential privacy. <cit.> propose a new federated unlearning scheme named informed federated unlearning that unlearns a client's contribution with quantifiable unlearning guarantees. Unlearning guarantees are provided by introducing a randomized mechanism to perturb an intermediate model selected from the training process with client-specific noise. § VERIFICATION & AUDITING The previous literature review summarizes the state-of-the-art approaches for knowledge editing in the FL scenario, which together serve as a support to realize the FL system lifecycle. In addition, to guarantee our objectives are fully achieved according to specified requirements during the FL lifecycle, the verification for the learning process and auditing for the unlearning process are also critical system components, where a comprehensive survey on them is provided in this section. §.§ Verification Methods for Learning Process Although the increasing learning process can enable the Fl global model always to acquire new knowledge from the data to adapt the system dynamics, the expected knowledge can only be obtained by correct training according to the specified requirements. Thus, the learning process of user devices must be verifiable to ensure knowledge correctness. We summarize several tools able to verify the FL process as follows. §.§.§ Trusted Execution Environment As a secure environment maintained by each CPU, Trusted Execution Environment (TEE) is a hardware technology that outsources code execution on a protected memory region named enclave in any untrusted devices and enables the verification of the execution results. Some existing TEE-based FL frameworks depend on the TEE deployment on FL servers and user devices <cit.>. Similarly, a verifiable FIL framework can be realized by outsourcing the increasing learning process to the user devices' TEE. §.§.§ Proof of Learning The concept of Proof of Learning (PoL) proposed by <cit.> enables a verifier (e.g., an FL server) to assess the integrity of training computations for untrusted workers (e.g., user devices). Its main idea is to verify if a sequence of intermediate states (i.e., checkpoints of intermediate weights) came from training and are not random (or worse, forged by a malicious party). To prove it, the workers need to provide a sequence of batch indices for the same intermediate model updates. Although the PoL is a general approach, it may leak the privacy of user devices in FL, which remains to be solved. §.§.§ Swarm Learning For auditing and accountability in FIL, it is necessary to record the increasing learning process in a public, transparent, and tamper-proof ledger. By combining blockchain technology with FL for a new learning scheme called swarm learning <cit.>, the learning process can be fully recorded by such a ledger in a distributed manner despite the existence of malicious user devices. §.§ Auditing Methods for Unlearning Process It's easy to understand that the auditing mechanism is unnecessary in the category of “exact" FU, such as retraining since the revoked data is never involved in the new retraining model. However, for the another mainstream of “approximate" FU category, the auditing mechanism is critical and necessary, which can validate the effectiveness of these methods, i.e., How large is the difference between the approximate model and the exact retraining model? Membership inference attack: Given a data sample and the black-box access of the trained model, the goal of this kind of attack is to detect whether the data sample is inside the training dataset of this model <cit.>. More specifically, we use adversarial machine learning to obtain an inference model, which can recognize the differences in the target model's prediction results on the inputs that inside the training dataset versus outside the training dataset <cit.>. Membership inference attack is very effective on detecting data leakage, which can reflect whether the approximate unlearning still contains the information of deleted data or not. Information Leakage: Many machine learning models will inherently leak some private information during their training process <cit.>, such as the intermediate gradients . Many existing works have shown that the raw training data can be recovered with the gradients of each update step, where the gradients are accessible for both user devices and server in the FL scenario. This kind of information leakage is utilized and called gradient inversion attacks <cit.>. Therefore, we can compare the model difference before and after unlearning to infer whether the information of deleted data will still be leaked. Backdoor attacks: The techniques of backdoor attack are proposed to inject backdoors into the training data samples to poison the model. The derived model will make accurate predictions on clean data samples, but trigger the backdoor to make the wrong predictions on contaminated data samples. The backdoor attack technique can be utilized to validate the effectiveness of approximate FU. More specifically, the user devices can contaminate part of their own data samples during the FL training process <cit.>. If the contaminated data samples are successfully deleted, the unlearning model will predict them into their correct class. Otherwise, the unlearning model will trigger the backdoor to assign them to the wrong class. § CONCLUSION & FUTURE VISION In this paper, we introduce a novel concept of the FL lifecycle, which integrates both federated increasing learning (FIL) and federated unlearning (FU) to achieve knowledge editing for the FL system. As far as we know, it is the first time providing a comprehensive survey on FL system knowledge editing, including concepts, perspectives, challenges, and future vision. We summarize the state-of-the-art approaches for knowledge learning & unlearning in FL system and organize a clear taxonomy of them for handling different challenges. Moreover, we reclassify the representative verification and auditing mechanisms, which ensure that the knowledge editing process follows the specified requirements while the results are consistent with the expectation. Although each of the current knowledge editing techniques achieves similar purposes, they are still independent of each other because of the different methods used. Therefore, we discuss some promising directions within the future vision. Flexible & Unified Knowledge Editing: So far, knowledge editing needs to find respective solutions for various challenges, which makes their application scenarios very limited. There is an urgent need for a self-contained and unified knowledge editing framework that can flexibly achieve both learning and unlearning requirements. For example, gradient descent is applied for the learning process, while gradient ascent may be utilized for the unlearning process. A unified framework allows for more freedom of knowledge editing within the system and a large number of subsequent derivative efforts based on the same technical kernel can be integrated into the framework as components. Knowledge Disentanglement & Reassembly: Another promising direction is knowledge architecture advances. If the knowledge architecture inside the model can achieve free assembly like building blocks, the whole knowledge editing process will become extremely easy. A few existing works have designed multi-branch models for knowledge disentanglement in FL systems, which enables the user devices to independently extract different knowledge representations. For example, the whole model knowledge of each user can be disentangled into global shared knowledge and local personalized knowledge in <cit.>. Therefore, the learning and unlearning processes just need to simply add or remove the corresponding branches. Trustworthy FL Community: With the explosive growth of user devices, user heterogeneity, and their varying relationships, it's difficult for traditional FL with an authoritative server to manage the whole community. The ultimate future form of FL is an autonomous and trustworthy community with massive participants, where blockchain-based techniques are the foundation to support this future vision. Each user's activity details in the FL community are uploaded to their respective blocks for maintenance, including the learning and unlearning records, the data usage (only the data index, not the local raw data itself), the resource allocation, and so on. Any participants in the community can initiate the verification and auditing for others. The explosive growth of user devices, user heterogeneity and their varying relationships in FL will evolve from the “AI community” with authoritative management in FL into an unprecedented “AI society” in FL where massive participants and pluralistic interests behind them are calling for FL Governance to grapple with the promise and pitfalls. Particularly, to accommodate the blooming social activities while resisting the expanded attack surface, urgent concerns are raised about the property rights (e.g., multi-party ownership of AI property), social risk (e.g., security threats from uncertain interactions), and resource marketing (e.g., anti-monopoly of AI resources) in FL from governmental agencies, academia and industry. named
http://arxiv.org/abs/2306.11085v1
20230619175800
Minimax optimal testing by classification
[ "Patrik Róbert Gerber", "Yanjun Han", "Yury Polyanskiy" ]
math.ST
[ "math.ST", "cs.DS", "stat.TH" ]
Minimax optimal testing by classification Department of Mathematics Massachusetts Institute of Technology 77 Massachusetts Avenue, Cambridge, MA 02139, USA Institute for Data, Systems, and Society Massachusetts Institute of Technology 50 Ames St, Cambridge, MA 02142, USA Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology 32 Vassar St, Cambridge, MA 02142, USA Gerber, Han and Polyanskiy This paper considers an ML inspired approach to hypothesis testing known as classifier/classification-accuracy testing (). In , one first trains a classifier by feeding it labeled synthetic samples generated by the null and alternative distributions, which is then used to predict labels of the actual data samples. This method is widely used in practice when the null and alternative are only specified via simulators (as in many scientific experiments). We study goodness-of-fit, two-sample () and likelihood-free hypothesis testing (), and show that achieves (near-)minimax optimal sample complexity in both the dependence on the total-variation () separation ϵ and the probability of error δ in a variety of non-parametric settings, including discrete distributions, d-dimensional distributions with a smooth density, and the Gaussian sequence model. In particular, we close the high probability sample complexity of for each class. As another highlight, we recover the minimax optimal complexity of over discrete distributions, which was recently established by <cit.>. The corresponding simply compares empirical frequencies in the first half of the data, and rejects the null when the classification accuracy on the second half is better than random. § INTRODUCTION The rapid development of machine learning over the past three decades has had a profound impact on many areas of science and technology. It has replaced or enhanced traditional statistical procedures and automated feature extraction and prediction where in the past human experts had to intervene manually. One example is the technique that has become known as `classification accuracy testing` (CAT). The idea, first explicitly described in <cit.>, is extremely simple. Consider the setting of two-sample testing: suppose the statistician has samples X and Y of size n from two distributions P_X and P_𝖸 respectively on some space X, and wishes to test the hypotheses TS H_0: P_X = P_Y versus H_1 : P_X≠ P_Y. The statistician has many classical methods at their disposal such as the Kolmogorov-Smirnov or the Wilcoxon – Mann – Whitney test. Friedman's idea was to use machine learning as a powerful tool to summarize the data and subsequently apply a classical two-sample test to the transformed data. More concretely, the proposal is to train a binary classifier C: X →{0,1} on the labeled data ∪_i=1^n {(X_i,0), (Y_i,1)} and compare the samples C(X_1), …, C(X_n) and C(Y_1), …, C(Y_n). Friedman's idea to use classifiers to summarize data before applying classical statistical analysis downstream can be generalized beyond two-sample testing (<ref>). Likelihood-free inference (LFI), also known as simulation-based inference (SBI), has seen a flurry of interest recently. In LFI, the scientist has a dataset Z_1,…,Z_m iid∼ P_θ^⋆ and is given access to a black box simulator which given a parameter θ produces a random variable with distribution P_θ. The goal is to do inference on θ^⋆. The key aspect of the problem, lending the name `likelihood-free`, is that the scientist doesn't know the inner workings of the simulator. In particular its output is not necessarily differentiable with respect to θ and the density of P_θ cannot be evaluated even up to normalization. This setting arises in numerous areas of science where highly complex, mechanistic, stochastic simulators are used such as climate modeling, particle physics, phylogenetics and epidemiology to name a few, and its importance was realized as early as <cit.>. In this paper we study the problem of likelihood-free hypothesis testing (LFHT) proposed recently in <cit.> as a simplified model of likelihood-free inference. Compared to two-sample testing, here in addition to the dataset Z of size m, we have two `simulated` samples X,Y of size n each from P_X and P_Y respectively. The goal is to test the hypotheses LFHT H_0 : Z_i ∼ P_X versus H_1 : Z_i ∼ P_Y. It is important that apriori P_ X and P_ Y are only known to belong to a certain ambient (usually non-parametric) class. This stands in contrast with the earliest appearances of (<ref>) in <cit.>, where authors studied the rate of decay of the type-I and type-II error probabilities for fixed P_X, P_Y. In the context of (<ref>) the idea of Friedman materializes as follows. First, train a classifier C: X →{0,1} to distinguish between P_X and P_Y and second, compare the transformed dataset { C(Z_j))}_j=1^m to { C(X_i)}_i=1^n and { C(Y_i)}_i=1^n. The second step compares iid samples of Bernoulli random variables (provided C is trained on held out data), thus any reasonable test simply thresholds the number of Z_j classified as 1, namely the test is of the form 1/m∑_j=1^m C(Z_j) ≥γ for some γ∈ [0,1]. The idea to classify Z as coming from either P_X or P_Y based on the empirical mass on some separating set S = C^-1({1}) ≈{d P_Y/d P_X≥1} has been attributed to Scheffé in folklore <cit.>. To illustrate the genuine importance of these ideas, we draw on the famous Higgs boson discovery. In 2012 <cit.> at the Large Hadron Collider (LHC) a team of physicists announced that they observed the Higgs boson, an elementary particle theorized to exist in 1964. It is regarded as the crowning achievement of the LHC, the most expensive instrument ever built. They achieved this feat via likelihood-free inference, using the ideas of classification accuracy testing/Scheffé's test in particular. As part of their analysis pipeline they trained a boosted decision tree classifier on simulated data and thresholded counts of observations falling in the classification region. This work was initiated as an attempt to understand the theoretical properties of classifier-accuracy testing, motivated by the clear practical interest in these questions. Our intuition told us that restricting the classifier to have binary output might throw away too much statistical power. In regions with large (small) density ratio, the binary output ought to loose useful information about the (un)certainty of the classifier output. The Neyman-Pearson Lemma phrases this succinctly: the optimal classifier aggregates the log density ratio, while heuristically Scheffé's test aggregates indicators that the log density ratio exceeds some threshold. The operational implication of this would be to train probabilistic classifiers C: X → approximating the log density ratio, and to aggregate this -valued output instead of the binary output. However, our results show that this is not necessary for optimality, at least in the minimax sense. §.§ Informal description of the results We study the problems of goodness-of-fit testing, two-sample testing and likelihood-free hypothesis testing in a minimax framework (see Section <ref> for precise definitions). Namely, given a family of probability distributions P, we study the minimum number of observations n (and m for ) that are required to perform the test with error probability less than δ∈(0,1/2) in the worst case over the distributions P_X and P_Y. We show for multiple natural classes P that there exist minimax optimal (with some restrictions) classification accuracy tests. Let us clarify what we mean by `classification-accuracy' tests for goodness-of-fit testing () and the problems and . Suppose we have a sample X of size 2n from the unknown distribution _X. We also have a second sample Y of size 2n from _Y∈ P which corresponds to the known null distribution in the case of and is unknown in the case of ,. Finally, for we have an additional sample Z of size 2m from _Z∈{_X,_Y}. Write D_tr{X^tr,Y^tr,Z^tr} for the first halves of each sample and D_te{X^te,Y^te, Z^te} for the rest. We train a classifier C: X →{0,1} on the input D_tr that aims to assign 1 to P_X and 0 to P_Y. Going forward, it will be easier to think of C in terms of the `separating set' S C^-1({1}). Thus, S is a random subset of X whose randomness comes from D_tr and potentially an external seed. Given two datasets {A_i}_i=1^a,{B_j}_j=1^b, we define the classifier-accuracy statistic T_S(A,B) 1/a∑_i=1^a {A_i ∈ S} - 1/b∑_j=1^b {B_j ∈ S}. The name `classifier-accuracy' is given due to the fact that T_S (X^te,Y^te)+1 is equal to the sum of the fraction of correctly classified test instances under the two classes. Finally, we say a test is a classifier-accuracy test if its output is obtained by thresholding |T_S| for some classifier C = _S on the test data D_te. There exist classifier-accuracy tests with minimax (near-)optimal sample complexity for all problems ,, and multiple classes of distributions P. §.§ Proof sketch The bulk of the technical difficulty lies in finding a good separating set S⊆ X. But how do we measure the quality of S? Define the “separation” (S) _X(S) - _Y(S), and the “size” τ(S) min{_X(S)_X(S^c), _Y(S)_Y(S^c)}. The following lemma describes the performance of classifier-accuracy tests (<ref>) in terms of and τ. Consider the hypothesis testing problem H_0: p=q versus an arbitrary alternative H_1. Suppose that the learner has constructed a separating set S such that |(S)|=|p(S)-q(S)|≥ for every (p,q)∈ H_1, and τ(S)=(p(S)(1-p(S))(q(S)(1-q(S)))≤τ for every (p,q)∈ H_0 ∪ H_1. Then using only the knowledge of τ, the classifier-accuracy test (<ref>) with n test samples from both p and q and an appropriate threshold achieves type-I and type-II errors at most δ, provided that n ≥ c log(1/δ)/( 1 + τ/) for a large enough universal constant c>0. With Lemma <ref> in hand it is clear how we need to design S. It should satisfy |(S)| is big under H_1, and τ(S) is small under both H_0 and H_1 with probability 1-δ. The latter condition, namely that τ is small i.e. C=_S is imbalanced, may seem unintuitive as given any two (sufficiently regular) probability distributions there always exists a balanced classifier whose separation is optimal up to constant. Let , Q be two distributions on a generic probability space ( X, ℱ). Then (, Q) ≤ 2sup{( C(X)=0)- Q( C(X)=0):( C(X)=0) = ( C(X)=1)}, where C: X→{0,1} is a possibly randomized classifier. Here the constant 2 is tight. Despite Proposition <ref>, we find that choosing a highly imbalanced classifier C is crucial in obtaining the minimax sample complexity in some classes. This has interesting implications for practical classifier-accuracy testing. Indeed, classifiers are commonly trained to minimize some proxy of misclassification error; however, the above heuristics show that this is not necessarily optimal, instead one should seek imbalanced classifiers with large separation. Another way to phrase it is that when training a classifier for testing one should have the downstream task in mind, namely, maximizing the power of the resulting test, and not classification accuracy. §.§ Prior work and contribution The problem of two-sample (TS) testing (aka closeness testing) and the related problem of goodness-of-fit (GoF) testing (aka identity testing) has a long history in both statistics and computer science. We only mention a small subset of the literature, directly relevant to our work. In seminal works Ingster studied (GoF) for the Gaussian sequence model <cit.> and for smooth densities <cit.> in one dimension. Extensions to multiple dimensions and (TS) can be found in works such as <cit.>. For discrete distributions on a large alphabet the two problems appeared first in <cit.>, see also <cit.> and the survey <cit.>. Recent work <cit.> has focused on GoF and TS with vanishing error probability. The problem of likelihood-free hypothesis testing appeared first in the works <cit.>, who studied the asymptotic setting. Minimax likelihood-free hypothesis testing (LFHT) was first studied by the information theory community in <cit.> for a restricted class of discrete distributions on a large alphabet, with a strengthening by <cit.> to vanishing error probability (in some regimes). More recently, the problem was proposed in <cit.> as a simplified model of likelihood-free inference, and authors derived minimax optimal sample complexities for constant error in the settings studied in the present paper. The idea of using classifiers for two-sample testing was proposed in <cit.> and has seen a flurry of interest <cit.>. In likelihood-free inference the output of classifiers can be used as summary statistics for Approximate Bayesian Computation <cit.> or to approximate density ratios <cit.> via the 'likelihood-ratio trick'. A classifier with binary {0,1} output was used in the discovery of the Higgs boson <cit.> to determine the detection region. Our work is the first to study the non-asymptotic properties of classifier-based tests in any setting and we find that classifier-accuracy tests are minimax optimal for a wide range of problems. As a consequence of our results we resolve the minimax high probability sample complexity of over all classes studied, and also obtain new, tight results on high probability and . §.§ Structure In Sections <ref> and <ref> we define the statistical problems and distribution classes we study. In Tables <ref> and <ref> we present all sample complexity results, and in Section <ref> we indicate how to derive them. Sections <ref>, <ref> and <ref> study the problem of learning good separating sets for discrete and smooth distributions and the Gaussian sequence model respectively. The appendix contains all proofs omitted from the main text, including all lower bounds in Appendix <ref>. § RESULTS §.§ Technical preliminaries §.§.§ Two-sample, goodness-of-fit and likelihood-free hypothesis testing Formally, we define a hypothesis as a set of probability measures. Given two hypotheses H_0 and H_1 consisting of distributions on some measurable space X, we say that a function ψ: X →{0,1} tests the two hypotheses against each other with error at most δ∈ (0,1/2) if max_i=0,1max_P ∈ H_i P_S ∼ P(ψ(S) ≠ i) ≤δ. Throughout the remainder of this section let P be a class of probability distributions on X. Suppose we observe independent samples X ∼ P_X^⊗ n, Y ∼ P_Y^⊗ n and Z∼ P_Z^⊗ m whose distributions P_X, P_Y, P_Z∈ P are unknown to us. We now define the problems at the center of our work. Given a known P_0 ∈ P, goodness-of-fit testing is the comparison of GoF H_0: P_X= P_0 versus H_1: ( P_X, P_0) ≥ϵ based on the sample X. Write n_(ϵ, δ, P) for the smallest number such that for all n ≥ n_TS there exists a function ψ : X^n →{0,1} which given X as input tests between H_0 and H_1 with error probability at most δ, for arbitrary P_X, P_0∈ P. Two-sample testing is the comparison of TS H_0: P_X= P_Y versus H_1: ( P_X, P_Y) ≥ϵ based on the samples X,Y. Write n_TS(ϵ, δ, P) for the smallest number such that for all n ≥ n_TS there exists a function ψ : X^n × X^n →{0,1} which given X,Y as input tests between H_0 and H_1 with error probability at most δ, for arbitrary P_X, P_Y∈ P. Likelihood-free hypothesis testing is the comparison of LF H_0: P_Z= P_X versus H_1: P_Z= P_Y based on the samples X,Y,Z. Write R_LF(ϵ, δ, P) ⊆ R^2 for the maximal set such that for all (n,m)∈ N^2 with n≥ x, m≥ y for some (x,y)∈ R_LF, there exists a function ψ : X^n × X^n× X^m →{0,1} which given X,Y,Z as input, successfully tests H_0 against H_1 with error probability at most δ, provided ( P_X, P_Y) ≥ϵ and P_X, P_Y∈ P. §.§.§ Classes of distributions We consider the following nonparametric families of distributions. Smooth density. Let C(β, d, C) denote the set of functions f:[0,1]^d → R that are ⌈β-1⌉-times differentiable and satisfy f_ C_βmax(max_0≤ |α| ≤⌈β-1⌉f^(α)_∞, sup_x≠ y ∈ [0,1]^d, |α|=⌈β-1⌉|f^(α)(x)-f^(α)(y)|/x-y^β-⌈β-1⌉_2) ≤ C, where ⌈β-1⌉ denotes the largest integer strictly smaller than β and |α|=∑_i=1^dα_i for the multiindex α∈^d. We write P_H(β, d, C_H) for the class of distributions with Lebesgue-densities in C(β, d, C_H). Distributions on a finite alphabet. For k ∈ N, let P_D(k) {all distributions on the finite alphabet [k]}, P_Db(k, C_Db) {p ∈ P_D(k): p_∞≤ C_Db/k}, where C_Db > 1 is a constant. In other words, P_Db are those discrete distributions that are bounded by a constant multiple of the uniform distribution. Gaussian sequence model on the Sobolev ellipsoid. Define the Sobolev ellipsoid E(s, C) of smoothness s > 0 and size C>0 as {θ∈ R^ N: ∑_j=1^∞ j^2sθ_j^2 ≤ C}. For θ∈^∞ let μ_θ = ⊗_i=1^∞ N(θ_i, 1), and define our second class as P_G(s, C_G) {μ_θ : θ∈ E(s, C_G)}. To briefly motivate the study of P_G, consider the classical Gaussian white noise model. Here we have iid observations of the stochastic process Y_t = f(t) t + W_t, t∈[0,1], where (W_t)_t≥0 denotes Brownian motion and f∈ L^2[0,1] is unknown. Suppose now that {ϕ_i}_i≥1 forms an orthonormal basis for L^2[0,1] and given an observation Y define the values y_i ⟨ Y, ϕ_i⟩ = ∫_0^1 f(t) ϕ_i(t) t + ∫_0^1 ϕ_i(t) W_t θ_i + ϵ_i. Notice that ϵ_i ∼ N(0,1) and that [ϵ_iϵ_j] = _i=j. In other words, the sequence {y_i}_i≥1 is an observation from the distribution μ_θ. Consider the particular case of ϕ_1≡1 and ϕ_2k=√(2)cos(2π kx),ϕ_2k+1=√(2)sin(2π kx) for k≥1 and assume that f satisfies periodic boundary conditions. Then θ denotes the Fourier coefficients of f and the condition that ∑_j=1^∞ j^2sθ_j^2 ≤ C is equivalent to an upper bound on the order (s,2)-Sobolev norm of f, see e.g. Proposition 1.14 of <cit.>. In other words, by studying the class P_G we can deduce results for signal detection in Gaussian white noise, where the signal has bounded Sobolev norm. §.§ Minimax sample complexity of classifier-accuracy tests In Tables <ref> and <ref> we present our and prior results on the minimax sample complexity of , and ; here * unmarked entries denote minimax optimal results achievable by a classifier-accuracy test; * entries marked with (OPT) denote minimax optimal results that are not known to be achievable by any classifier-accuracy test; * entries marked with (CAT) denote the best known result using a classifier-accuracy test. In the constant error regime (δ=Θ(1)) the results of Tables <ref> and <ref> are well known; for instance, the sample complexities of , , and under P_ D were characterized in <cit.>, respectively[<cit.> only resolved the minimax sample complexity of for P_D up to log(k)-factors in some regimes. However, by combining the classifier accuracy tests of this paper for m ≤ n and the reduction to two-sample testing with unequal sample size <cit.> for m > n these gaps are filled.]. Less is known under the high-probability regime (δ=o(1)): for P_ D, n_ was characterized in <cit.> for uniformity testing, with the general case following from the flattening reduction <cit.>; n_ was characterized in <cit.>. For R_LF, the k>n case for P_Db is resolved by <cit.>, and the achievability direction of the case m > n of R_LF for P_D can be deduced from <cit.> via the natural reduction between and (see <cit.>). The remaining upper bounds are achievable by the classifier-accuracy tests below, and the proofs of all lower bounds are deferred to Appendix <ref>. As for the efficacy of classifier-accuracy tests, the upper bounds in Tables <ref> and <ref> follow from the combination of Lemma <ref> and the following results: * P_Db: see Corollary <ref>; * P_H: see Section <ref> and Corollary <ref>; * P_G: see Proposition <ref>; * P_D: for , see Proposition <ref> if k < log(1/δ)/ϵ^4, and Proposition <ref> otherwise; for , see Proposition <ref>; for , see Proposition <ref> if n≥ k ∧ m, and Section <ref> and Proposition <ref> otherwise. § LEARNING SEPARATING SETS In this section, we construct the separating sets S used in the classifier-accuracy test (<ref>). Section <ref> is devoted to discrete distribution models P_Db and P_ D, where we need a delicate tradeoff between the expected separation and the size of S. A similar construction in the Gaussian sequence model P_ G is presented in Section <ref>. §.§ The discrete case Given two iid samples X,Y of sizes N_X, N_Y iid∼(n) from unknown discrete distributions p=(p_1,…,p_k),q=(q_1,…,q_k) over a finite alphabet [k]={1,2,…,k}, can we learn a set Ŝ⊆ [k] using X,Y that separates p from q? To measure the quality of a given separating set A⊆ [k], we define two quantities (A) p(A)-q(A) and τ(A) min{p(A)p(A^c), q(A)q(A^c)}. Intuitively, the first quantity (A) measures the separation of A, and the second quantity τ(A) measures the size of A. Recall that by Lemma <ref>, in order to perform the classifier-accuracy test (<ref>), we aim to find a separating set Ŝ such that |(Ŝ)| is large and τ(Ŝ) is small. The rest of this section is devoted to the construction of Ŝ satisfying (<ref>), and we will present our results on learning separating sets in order of increasing complexity. Notation: for a random variable X we write σ^2(X) for the optimal sub-Gaussian variance proxy of X. In other words, σ^2(X) is the smallest value such that exp(λ (X- X)) ≤exp(λ^2σ^2(X)/2) holds for all λ∈. §.§.§ A natural separating set Let {X_i,Y_i}_i ∈ [k] be the empirical frequencies of each bin i ∈ [k] in our samples X,Y, i.e. nX_i∼(np_i) and nY_i∼(nq_i). A natural separating set is the following: Ŝ_1/2 {i: X_i > Y_i or X_i=Y_i and C_i=1}, where C_1,C_2… C_k are iid (1/2) random variables. We use the subscript “1/2” to illustrate our tie-breaking rule: when X_i = Y_i, the symbol i is added to the set with probability 1/2. Our first result concerns the separating power of the above set. Suppose p,q ∈ P_D(k) with (p,q)≥ϵ. There exists a universal constant c>0 such that ((Ŝ_1/2) ≥ cϵ^2(n/k√(n/k)1/ϵ)) ≥ 1-δ, provided n ≥1/c n_TS(ϵ, δ, P_D(k)). Together with the trivial upper bound τ(Ŝ_1/2)≤ 1/4, Proposition <ref> and Lemma <ref> imply that using Ŝ_1/2 achieves the minimax sample complexity for the following problems: * in P_Db and P_ D as long as k= O(log(1/δ)/ϵ^4); * in P_Db as long as k= O(log(1/δ)/ϵ^4), and in P_ D for all (k,ϵ,δ); * in P_Db as long as k= O(log(1/δ)/ϵ^4), and in P_ D as long as n≥ m. However, in the remaining regimes the above test could be strictly sub-optimal. This failure comes down to two issues. First, Proposition <ref> requires n≳ n_(ϵ,δ, P_ D(k)) in order to find a good separating set, which can be sub-optimal when the optimal sample complexity for the original testing problem is only n≳ n_(ϵ,δ, P_ D(k)). Second, the quantity τ(Ŝ_1/2) is Ω(1) in the general case because the tie-breaking rule adds too many symbols to the set. These issues will be addressed separately in the next two sections. §.§.§ The “better of two” separating sets This section aims to find a separating set Ŝ with essentially the same separation as Ŝ_1/2 in Proposition <ref>, but with a smaller τ(Ŝ). The central idea is to use a different tie-breaking rule from Ŝ_1/2. Given a subset D ⊆ [k], we define the imbalanced separating sets Ŝ_>(D) = {i∈ D:X_i>Y_i}, Ŝ_<(D) = {i ∈ D:X_i < Y_i}. In other words, in both Ŝ_> and Ŝ_<, we do not include the symbols with X_i = Y_i in the separating set. Consequently, |Ŝ_>(D)| |Ŝ_<(D)| is upper bounded by the sample size; if in addition q_i is bounded from above uniformly over i∈ D, this will yield good control of τ for both separating sets Ŝ_>(D) and Ŝ_<(D). In particular, τ(Ŝ_>(D)) τ(Ŝ_<(D)) = O( 1 (nmax_i∈ Dq_i)). Next we aim to show that the above sets achieve good separation. However, there is a subtlety here: removing the ties from Ŝ_1/2 may no longer guarantee the desired separation, as illustrated in the following proposition. Consider the distributions p,q on [3k] with p_i = {i≤ k}/(2k) + {i > k}/(4k) and q_i = {i≤ k}/k. Then, for n ≤ 0.6k, sep(Ŝ_>([3k])) < 0. Proposition <ref> shows that sticking to only one set Ŝ_> or Ŝ_< fails to give the same separation guarantees as Proposition <ref>. A priori it may seem that Ŝ_> is designed to capture elements of the support where p is greater than q, but it fails to do so spectacularly. An intuitive explanation of this phenomenon is as follows. Since the probability of each bin is small (≲1/k) under both p and q, in the small n regime[Technically, to satisfy the stated conditions we would require n≲√(k), but the described event captures dominant effects even for larger √(k)≪ n≪ k.] can expect that (a) each bin appears either once or not at all and (b) there is no overlap between the observed bins in sample X and Y. In this heuristic picture, the set Ŝ_> is simply the set of observed bins in the X-sample. Each X-sample falling in the first k bins contributes -1 2k to the separation, while each X-sample in the last 2k bins contributes only +1 4k to the separation. Since p puts mass 1/2 on both the first k and last 2k bins, there is an equal number of n/2 observations in each part and the overall separation is ≍-n 8k. Similar results can be proved for Ŝ_< with p,q as above but swapped, and also for modified p,q separated by smaller ϵ in for any ϵ∈ (0,1). Motivated by the above discussion, in the sequel we consider the sets Ŝ_>, Ŝ_< jointly. Specifically, the next proposition shows that at least one of the sets Ŝ_> and Ŝ_< have a good separation. There exists a universal constant c>0 such that for any D ⊆ [k] and probability mass functions p,q, it holds that [(Ŝ_>(D))-(Ŝ_<(D))] ≥ c∑_i ∈ Dn(p_i-q_i)^2/√(n(p_i∧ q_i)+1) |p_i-q_i|, σ^2((Ŝ_>(D))) + σ^2((Ŝ_<(D))) ≤1/c∑_i ∈ Dp_i+q_i/n |p_i-q_i|^2. Based on Proposition <ref>, our final separating set is chosen from these two options, based on evaluation on held out data. As for the choice of D, in this section we choose D = [k]. The following corollary summarizes the performance of this choice under P_Db. Suppose p,q ∈ P_Db(k, O(1)) with (p,q)≥ϵ. There exists a universal constant c>0 such that using the samples X,Y we can find a set Ŝ⊆ [k] which, with probability 1-δ, satisfies |(Ŝ)| ≥ cϵ^2(1/ϵ√(n/k)n/k) and τ(Ŝ) ≤1/c(1n/k), provided n≥1/cn_(ϵ, δ, P_Db(k, O(1))). By Corollary <ref> and Lemma <ref>, using the above set Ŝ achieves the minimax sample complexity for all problems , , and and all parameters (k,ϵ,δ) under P_Db. However, under P_ D, the performance of Ŝ is no better than that of Ŝ_1/2. This is because a good control of τ(Ŝ_>([k])) requires a bounded probability mass function; in other words, choosing D=[k] is not optimal for finding the best separating set under P_ D. In the next section, we address this issue by choosing D to be one of O(log k) subsets of [k]. §.§.§ The “best of O(log k)" separating sets This section is devoted to the two missing regimes m ≥ n for over P_D and k ≳log(1/δ)/ϵ^4 for over P_D (cf. discussion after Proposition <ref> and Corollary <ref>). For the former, recall that the classifier-accuracy test based on Ŝ_1/2 achieves the sample complexity n ≳ n_(ϵ,δ, P_ D) + k√(log(1/δ))/√(n)ϵ^2. If n ≳ k then (<ref>) is the same as n≳ n_; if m/log(1/δ) ≲ n then (<ref>) is implied by n≳ n_ + klog(1/δ)/√(m)ϵ^2, which is optimal within an O(log^1/2(1/δ)) factor (cf. Table <ref>). In our application to we take m=∞, and the missing regime k ≳log(1/δ)/ϵ^4 corresponds precisely to n_≲ k. Summarizing, in the remainder of this section we may assume that k (m/log(1/δ)) ≳ n. Let t = k∧ (c_0m/log(1/δ)), where c_0>0 is a small absolute constant. By the previous paragraph, we assume without loss of generality that t > n. For ℓ = ⌈log_2(t/n)⌉≥ 1, define the following ℓ+2 subsets of [k]: D_0 = {i: q̂_i^0 ≤1/t}, D_j = {i: q̂_i^0 ∈(2^j-1/t, 2^j/t]} for j∈[ℓ], D_ℓ+1 = {i: q̂_i^0 > 2^ℓ/t}. Here q̂_i^0 denotes the empirical pmf of m/2 held out samples drawn from q (for , one can understand q̂_i^0 = q_i for the distribution q is known). The motivation behind the above choices is the “localization” of each q̂_i^0, as shown in the following lemma. For a small enough universal constant c_0>0, with probability at least 1-kδ it holds that for each i∈ [k]: * if q̂_i^0∈ D_0, then q_i < 2/t; * if q̂_i^0∈ D_j for some j∈ [ℓ], then q_i ∈ (2^j-2/t, 2^j+1/t]; * if q̂_i^0∈ D_ℓ+1, then q_i > 2^ℓ-1/t. Lemma <ref> ensures that with high probability, the distribution q restricted to each set D_j is near-uniform. This is similar in spirit to the idea of flattening used in distribution testing <cit.>. The proof of Lemma <ref> directly follows from the Poisson concentration in Lemma <ref> and is thus omitted. Our main result of this section is the next proposition, which shows that there exist some j∈{0,1,⋯,ℓ+1} and Ŝ⊆ D_j such that Ŝ is a near-optimal separating set within logarithmic factors. Suppose p, q∈ P_ D(k) with (p,q)≥ϵ, and X, Y are n iid samples drawn from p, q respectively. There exists a universal constant c>0 such that using the samples X, Y, we can find some j∈{0,1,⋯,ℓ+1} and a set Ŝ⊆ D_j which, with probability 1- O(kδ), satisfies |(Ŝ)| ≥ c(ϵ/ℓ)^2 n/k if j = 0 n/√(kt/2^j) if j ∈ [ℓ+1] and τ(Ŝ)≤n2^j/ct provided that n√(1m/log(1/δ)k)≥1/c n_(ϵ/ℓ,δ, P_ D). By Proposition <ref> and Lemma <ref>, using the above set Ŝ leads to the following sample complexity guarantee for the problems and : * for under P_ D, it succeeds with n=Θ(n_(ϵ/ℓ, δ /k, P_D)) observations, which is within a multiplicative O(log^Θ(1)(k)) factor of the minimax optimal sample complexity in the missing k≥log(1/δ)/ϵ^4 regime; * for under P_ D and m≥ n, it succeeds with n=Θ(n_(ϵ/ℓ,δ/k, P_D)√(klog(k/δ)/m)) observations, which is within a multiplicative O(log^Θ(1)(k)log(k/δ)) factor of the minimax optimal sample complexity in the missing n ≤ m k. Therefore, classifier-accuracy tests always lead to near-optimal sample complexities for all ,, and problems under both P_Db and P_ D, within polylogarithmic factors in (k, 1/δ). We leave the removal of extra logarithmic factors for classifier-accuracy tests as an open problem. §.§ The smooth density case We briefly explain how Corollary <ref> can be used to learn separating sets between distributions in the class P_H of β-Hölder smooth distributions on [0,1]^d. The reduction relies on an approximation result due to Ingster <cit.>, see also <cit.>. Let P_r be the L^2-projection onto piecewise constant functions on the regular grid on [0,1]^d with r^d cells. There exist constants c_1,c_2 independent of r such that for any f ∈ P_H(β, d, C_H), P_rf_2≥ c_1f_2-c_2r^-β. For simplicity write f,g for the Lebesgue densities of P_X, P_Y∈ P_H. Suppose ( P_X, P_Y) = 1/2f-g_1 ≥ϵ. By Jensen's inequality and Lemma <ref>, ϵ≲P_r(f-g)_2 for r ≍ϵ^-1/β. The key observation is that P_rf is essentially the probability mass function of the distribution P_X when binned on the regular grid with r^d cells. We can now directly apply the results for P_Db (Corollary <ref>) with alphabet size k ≍ϵ^-d/β, which combined with Lemma <ref> leads to the sample complexity guarantees in Table <ref> for the smooth density class P_H in all three problems , and . §.§ The Gaussian case Suppose we have two samples X,Y of size n from ⊗_j=1^∞ N(θ^X_j, 1) =: μ_θ^X and μ_θ^Y respectively, where θ^X,θ^Y have Sobolev norm θ_s^2 ∑_j θ_j^2 j^2s bounded by a constant. In addition, (μ_θ^X, μ_θ^Y) ≥ϵ > 0. We use θ̂^X and θ̂^Y to denote the empirical mean vector from samples X and Y, respectively. The separating set is constructed as follows: Ŝ = {Z ∈^ : T(Z) ≥ 0}, where T(Z) = 2∑_j=1^J (θ̂^X_j-θ̂^Y_j)(Z_j - (θ̂^X_j+θ̂^Y_j)/2) for some J ∈ℕ to be specified. This is simply a truncated version of the likelihood-ratio test between μ_θ̂^X and μ_θ̂^Y, where we set all but the first J coordinates of θ̂^X and θ̂^Y to zero. The performance of the separating set is summarized in the next proposition. There exists universal constants c,c' such that when J = ⌊ cϵ^-1/s⌋ the inequality (μ_θ^X(Ŝ)-μ_θ^Y(Ŝ) ≥ c'(√(nϵ^1/s)1/ϵ)ϵ^2) ≥ 1-δ holds, provided n≳1/c'n_TS(ϵ, δ, P_G). Applying Proposition <ref> and Lemma <ref> with the trivial bound τ(Ŝ) ≤ 1/4 leads to the sample complexity guarantees in Table <ref> for the Gaussian sequence model class P_G in all three problems , and . § ACKNOWLEDGEMENTS YH was generously supported by the Norbert Wiener postdoctoral fellowship in statistics at MIT IDSS. YP was supported in part by the National Science Foundation under Grant No CCF-2131115. Research was sponsored by the United States Air Force Research Laboratory and the Department of the Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. alpha § AUXILIARY LEMMAS We state some auxiliary lemmas which will be used for the proof. We begin with a simple identity for standard normal distributions. Take a,b ∈ and let Z be standard normal. Then Φ(aZ+b) = Φ(b/√(1+a^2)). Let Z' be a standard Gaussian independent of Z. Then Φ(aZ+b) = (aZ+b≥ Z') = (Z'-aZ/√(1+a^2)≤b/√(1+a^2)) = Φ(b/√(1+a^2)). The following lemma is the celebrated result of Gaussian Lipschitz concentration. Let Q be a d-dimensional standard Gaussian and let f:^d → be σ-Lipschitz. Then f(Q) is sub-Gaussian with variance proxy σ^2. The next lemma states the Chernoff bound for Poisson random variables. For all λ > 0 and x≥0 we have ((λ)-λ≥ x) ≤exp(-x^2/2(λ+x)), ((λ)-λ≤ -x) ≤exp(-x^2/2λ). The following technical lemma is helpful in establishing the Bernstein concentration in Lemma <ref>. Let a≥0, p,q ∈ [0,1] and define τ = p(1-p)∧ q(1-q), ν = p(1-p)∨ q(1-q). Then it always holds that a√(ν/2) ≤ a√(τ) + a^2 + |p-q|. In particular, if |p-q|≥ a√(τ) + a^2, then 4|p-q| ≥ a√(τ) + a√(ν) + a^2. After rearranging and noting that 1+2√(2)<4, it is clear that the first inequality implies the second. Below we prove the first inequality. Since the claim is invariant under the transformations (p,q)↦ (q,p) and (p,q)↦ (1-p,1-q), it suffices to consider the case where p≤ 1/2 and p(1-p)≤ q(1-q). It further suffices to consider the case where p≤ q≤ 1/2: if not, then p≤ 1-q≤ 1/2, and the transformation (p,q)↦ (p,1-q) keeps (τ,ν) invariant while makes |p-q| smaller. The proof is then completed by considering the following two scenarios: * if p≥ q/2, then ν = q(1-q)≤ 2p(1-p)=2τ, so a√(ν/2)≤ a√(τ); * if p≤ q/2, then 2a√(ν)≤ a^2+ν≤ a^2 + q≤ a^2 + 2(q-p). § OMITTED PROOFS FROM SECTION <REF> §.§ Proof of Lemma <ref> Before we prove Lemma <ref>, we begin with a technical lemma on the Bernstein concentration of the classifier-accuracy test (<ref>). Suppose A_1,…,A_n iid∼(p) and B_1,…,B_m iid∼(q). Let τ = p(1-p)∧ q(1-q) and define the averages A̅ = 1/n∑_i=1^nA_i and B̅=1/m∑_j=1^mB_j. There exists a universal constant c>0 such that (|A̅-B̅| ≤1/2|p-q| - 1/2√(clog(1/δ) τ/n m) - 1/2clog(1/δ)/n m) ≤δ, (|A̅ - B̅| ≥ 2|p-q| + 2√(clog(1/δ)τ/n m) + 2clog(1/δ)/n m) ≤δ. Let ν = p(1-p)∨ q(1-q). Note that the first inequality is trivially true if |p-q| ≤√(clog(1/δ) τ/n m) + clog(1/δ)/n m. Assuming otherwise, by the second statement of Lemma <ref>, the first probability is upper bounded by (|A̅-B̅| ≤ |p-q| - 5/8√(clog(1/δ)τ/n m) - 1/8√(clog(1/δ)ν/n m) - 5/8clog(1/δ)/n m). By choosing c sufficiently large (independently of p,q,n,m,δ), and applying Bernstein's inequality separately to both A̅ and B̅, the above probability can be made smaller than δ. For the second inequality, using the first statement of Lemma <ref>, it is upper bounded by (|A̅-B̅| ≥ |p-q| + √(clog(1/δ)τ/n m) + 1/√(2)√(clog(1/δ) ν/n m) + clog(1/δ)/n m). Again, taking c sufficiently large (independently of p,q,n,m,δ) and applying Bernstein's inequality separately to both A̅ and B̅, the above probability can be made smaller than δ. Now we proceed to prove Lemma <ref>. Using n test samples (X,Y) from both p and q, consider the following classifier-accuracy test: we accept H_0 if | 1/n∑_i=1^n ((X_i∈ S) - (Y_i∈ S) ) | ≤√(cτlog(1/δ)/n) + clog(1/δ)/n, and reject H_0 otherwise. Here c>0 is a large absolute constant, and we note that the threshold only relies on the knowledge of τ in addition to (n,δ). To analyze the type-I and type-II errors, first assume that H_0 holds. Since (S)=0 under H_0, the second statement of Lemma <ref> implies that we accept H_0 with probability at least 1-δ/2 if c>0 is large enough. If H_1 holds, with probability at least 1-δ/2, by the first statement of Lemma <ref> we have | 1/n∑_i=1^n ((X_i∈ S) - (Y_i∈ S) ) | ≥ || - (√(cτlog(1/δ)/n) + clog(1/δ)/n). By the lower bound of n assumed in Lemma <ref>, in this case we will reject H_0, as desired. §.§ Proof of Proposition <ref> Let μ be a non-negative measure on some space X and let a,b : X →_+ such that ∫ a(x)μ(x)>0 and b(x)=0 only if a(x)=0. Then inf_x∈spt(μ)(a(x)/b(x)) ≤∫ a(x)μ(x)/∫ b(x)μ(x)≤sup_x∈spt(μ)(a(x)/b(x)). Defining 0/0=1, we have ∫ a(x) μ(x) = ∫a(x)/b(x)b(x)μ(x) ≤sup_x∈spt(μ)(a(x)/b(x)) ∫ b(x)μ(x). The other direction follows analogously. Let p,q be the densities of , Q with respect to a common dominating measure, and let E {x:p(x)> q(x)} so that (, Q)=(E)- Q(E) > 0. Assume without loss of generality that (E) + Q(E) ≥ 1. Given t∈[0,1] define E_t {x:p(x)-q(x)/p(x)+q(x)≥ t}, so that the map t↦(E_t) + Q(E_t) is non-increasing and left-continuous. Note that E_0=E while E_1 = ∅, so that t^⋆ = max{t∈ [0,1]: (E_t) + Q(E_t) ≥ 1} exists. Now choose the randomized classifier C as follows: C(x) = 0 if x ∈ E_(t^⋆)^+, 1 if x ∉ E_t^⋆, (r) if x ∈ E_t^⋆ -E_(t^⋆)^+, where E_(t^⋆)^+ = ∩_t>t^⋆ E_t ⊆ E_t^⋆, and r := 1 - (E_(t^⋆)^+) - Q(E_(t^⋆)^+)/(E_t^⋆) + Q(E_t^⋆) - (E_(t^⋆)^+) - Q(E_(t^⋆)^+)∈ [0,1]. This classifier is balanced, as ( C(X)=0) + Q( C(X)=0) = (E_(t^⋆)^+) + Q(E_(t^⋆)^+) + r((E_t^⋆) + Q(E_t^⋆) - (E_(t^⋆)^+) - Q(E_(t^⋆)^+)) = 1. For t∈[0,1] define f(t) ((E_t)-(E_t))/((E_t)+(E_t)) if (E_t)+(E_t) > 0, 1 otherwise. Let 0 ≤ t ≤ s ≤ 1, we show that f(t) ≤ f(s). Without loss of generality assume that f(s)<1 and that (E_s\ E_t)+(E_s\ E_t) > 0. Notice that f(t) ≤ f(s) if and only if ∫_E_t\ E_s(p(x)-q(x)) x/∫_E_t\ E_s(p(x)+q(x)) x!≤∫_E_s(p(x)-q(x)) x/∫_E_s(p(x)+q(x)) x. However, the above inequality follows from Lemma <ref>. Thus, it holds that ( C(X)=0) - Q( C(X)=0)/( C(X)=0) + Q( C(X)=0)≥ f(t^⋆) ≥ f(0) = (E) - Q(E)/(E) + Q(E). Plugging in ( C(X)=0) + Q( C(X)=0) = 1 and (E)+(E) ≤ 2 yields the result. To show tightness, one can consider p(x)=_[0,1], q(x) = (1+ϵ)1_[0,1/(1+ϵ)], C(x) = 1_x∈ (1/(2+ϵ),1], and let ϵ→ 0^+. § OMITTED PROOFS FROM SECTION <REF> §.§ Useful Lemmas Before we present the formal proofs, this section summarizes some useful lemmas on the expected value and sub-Gaussian concentration of the separation. Let μ≥λ≥ 0 and X∼(μ),Y∼(λ). Then (X>Y)+ 1/2(X=Y) - 1/2≥ c(μ-λ/√(λ + 1) 1) holds, where c>0 is a universal constant. For t ∈ [λ,μ] define the function f(t) = ((t) > Y) + 1/2((t)=Y). Clearly f(λ) = 1/2. We have /ṭ((t) > Y) = -((t) > Y) + ((t) > Y-1) = ((t) = Y). Similarly we get /ṭ((t)=Y) = -((t)=Y)+((t)=Y-1). Thus, we obtain f'(t) = 1/2[((t) ∈{Y-1,Y})]. Next we prove the following inequality: if y is a non-negative integer with |y-t|≤ 8√(t), then ((t) = y) = Ω(1/√(t+1)). To prove (<ref>), we distinguish three scenarios: * If t<1/100, then the only non-negative integer y with |y-t|≤ 8√(t) is y=0. Therefore ((t)=y) = e^-t = Ω(1). * If 1/100≤ t≤ 100, then 0≤ y≤ 180. In this case, ((t) = y) ≥min_1/100 ≤ t ≤ 100min_0≤ y≤ 180((t)=y) = Ω(1). * If t>100, then for t-8√(t)≤ y_1≤ y_2≤ t+8√(t), we have ((t)=y_1)/((t)=y_2) = t^y_2-y_1y_2!/y_1! = ∏_y=y_1+1^y_2t/y = (1± O(t^-1/2) )^ O(16√(t)) = Θ(1). In the above we have used that |t/y-1| = O(t^-1/2) for all y∈ [y_1, y_2], and y_2 - y_1 ≤ 16√(t). Consequently, ((t)=y) = Ω((|(t)-t|≤ 8√(t))/16√(t)) = Ω(1/√(t)), where the last step is due to Chebyshev's inequality. Now we apply (<ref>) to prove Lemma <ref>. We first show that for non-negative integer y, { |y-λ|≤ 2√(λ)}∧{√(λ)≤√(t)≤√(λ) + 1 }⟹{ |y-t|≤ 8√(t)}. In fact, if √(λ)<√(2)-1, then the LHS of (<ref>) implies that y=0 and t< 2, thus (<ref>) holds. If √(λ)≥√(2)-1, then the LHS of (<ref>) implies that |y-t| ≤ |y-λ| + (t - λ) ≤ 2√(λ) + (2√(λ) + 1) < 8√(λ)≤ 8√(t), and (<ref>) holds as well. Next, by (<ref>) and (<ref>), as well as Chebyshev's inequality (|Y-λ| ≤ 2√(λ)) ≥3/4, we have f'(t) ≥3/8min_y≥ 0:|y-λ|≤2√(λ)((t) =y) ≥3/8{√(λ)≤√(t)≤√(λ) + 1 }·min_|y-t|≤ 8√(t)((t) =y) = Ω({√(λ)≤√(t)≤√(λ) + 1 }/√(t+1)) = Ω({√(λ)≤√(t)≤√(λ) + 1 }/√(λ +1)). Finally, for some absolute constant c>0 it holds that f(μ) - f(λ) = ∫_λ^μ f'(t)ṭ≥ c∫_λ^μ{√(λ)≤√(t)≤√(λ) + 1 }/√(λ +1)ṭ≥ c(μ-λ/√(λ+1)∧ 1), which is the statement of the lemma. For any D⊆ [k], each of (Ŝ_s(D)), s∈{>,<,1/2} is sub-Gaussian with variance proxy σ^2 which can be bounded as σ^2 ≲∑_i ∈ D (p_i-q_i)^2 p_i+q_i/n = O(1/n), with universal hidden constants. Using standard tail bounds of the Poisson distribution (Lemma <ref>) we have for any i ∈ D with p_i>q_i, (i ∈Ŝ_<(D)) ≤(i ∉Ŝ_1/2(D)) ≤(i ∉Ŝ_>(D)) = ((np_i) ≤(nq_i)) ≤((np_i) - np_i ≤ -1/2n(p_i-q_i)) + ((nq_i) - nq_i > 1/2n(p_i-q_i)) ≤ 2exp(-cn(p_i-q_i)^2/p_i+q_i) for some universal c>0. Similarly, if i ∈ D with p_i ≤ q_i we get (i ∈Ŝ_>(D)) ≤(i ∈Ŝ_1/2(D)) ≤(i ∉Ŝ_<(D)) = ((np_i) ≥(nq_i)) ≤ 2exp(-cn(p_i-q_i)^2/p_i+q_i). Using these estimates we turn to bounding the moment generating function of (Ŝ_s) for s ∈{>,<,1/2}. Before doing so, recall <cit.> that the best-possible sub-Gaussian variance proxy σ^2_opt(μ) of the (μ) distribution satisfies σ^2_opt(μ) = 1/2 - μ/log(1/μ-1), where the values for μ∈{0,1/2,1} should be understood as the limit of the above expression (resulting in σ^2_opt = 0,1/4,0 respectively). Notice also that μ↦σ^2_opt(μ) is increasing on [0,1/2] and decreasing on [1/2,1], and σ^2_opt(μ) ≤2/log(2/μ) if 0<μ<1/4, 1/4 if 1/4≤μ≤ 3/4, 2/log(2/(1-μ)) if 3/4<μ<1. Let T ⊆ D denote the subset of indices given by T = {i∈ D:2exp(-cn(p_i-q_i)^2/p_i+q_i) ≥1/4} = {i∈ D:(p_i-q_i)^2 ≤p_i+q_i/nlog(8)/c}. Now, for any s∈{>,<,1/2}, the sub-Gaussian variance proxy σ_s^2 of (Ŝ_s) - (Ŝ_s) = ∑_i ∈ D (p_i-q_i)({i∈Ŝ_s}-(i∈Ŝ_s))) is at most σ_s^2 ≤∑_i∈ T(p_i-q_i)^2/4 + ∑_i∈ D∖ T (p_i-q_i)^2·2(p_i+q_i)/cn(p_i-q_i)^2≲∑_i ∈ D (p_i-q_i)^2 p_i+q_i/n, where the second step used the definition of T. In particular, since ∑_i∈ D(p_i+q_i)/n≤ 2/n, the above expression is always upper bounded by O(1/n). §.§ Proof of Proposition <ref> By Lemma <ref>, we have (Ŝ_1/2) = ∑_i∈[k](i∈Ŝ_1/2)(p_i-q_i) = ∑_i∈[k]((i∈Ŝ_1/2)-1/2)(p_i-q_i) ≳∑_i∈[k](n|p_i-q_i|/√(n (p_i q_i)+1)1) |p_i-q_i| ≥min_G⊆[k]{∑_i∈ Gn(p_i-q_i)^2/√(n(q_i p_i)+1) + ∑_i∉G |p_i-q_i|}. Applying the Cauchy-Schwarz inequality twice, we can bound the first term above by ∑_i ∈ Gn(p_i-q_i)^2/√(n(q_i+p_i)+1)≥n(∑_i∈ G|p_i-q_i|)^2/∑_i∈ G√(n(q_i+p_i)+1)≥n(∑_i∈ G |p_i-q_i|)^2/√(2nk+k^2). Therefore, we get the lower bound (Ŝ_1/2) ≳min_0≤ϵ_1≤ϵ{nϵ_1^2/√(k(n+k)) + ϵ-ϵ_1} = ϵ^2λ if ϵ < λ 2 ϵ-λ 4≥ϵ 2 if ϵ≥λ 2≳ϵ^2 (1ϵ∧√(n k)∧n k) where λ = √(k(n+k)) n≍√(k n)∨k n. By Lemma <ref> we know that (Ŝ_1/2) is sub-Gaussian with variance proxy O(1/n), which implies that |(Ŝ_1/2)| ≳ϵ^2(1/ϵ√(n/k)n/k) with probability at least 1-δ, provided that ϵ^2(1/ϵ√(n/k)n/k) ≳√(log(1/δ)/n). The above rearranges to n≳ n_(ϵ, δ, P_D). §.§ Proof of Proposition <ref> A direct computation gives 2(Ŝ_>) = 2∑_i=1^3k (p_i-q_i)(i∈Ŝ_>) =-((n/2k) > (n/k)) + 1-e^-n/(4k) ≤ -(1-e^-n/(2k))e^-n/k + 1-e^-n/(4k) = -e^-n/k+e^-3n/(2k) + 1-e^-n/(4k)≤ 0, for exp(-n/(4k)) ⪆ 0.86. Rearranging, this gives the sufficient condition n/k ≤ 0.6. §.§ Proof of Proposition <ref> Similar to the proof of Proposition <ref>, we have by Lemma <ref> that (Ŝ_1/2(D)) = ∑_i ∈ D (p_i-q_i) (i∈Ŝ_1/2(D)) ≥ c E(D) + 1/2{p(D)-q(D)} -(D∖Ŝ_1/2(D)) = ∑_i ∈ D (q_i-p_i)(i∉Ŝ_1/2(D)) ≥ c E(D) + 1/2{q(D)-p(D)} where c>0 is universal and E(D) = ∑_i ∈ Dn|p_i-q_i|^2/√(n(p_i q_i)+1) |p_i-q_i|. Therefore, [(Ŝ_>(D))-(Ŝ_<(D))] = [(Ŝ_1/2(D)) - (D∖Ŝ_1/2(D))] ≥ 2c E(D). The bound on the sub-Gaussian variance proxy follows directly from Lemma <ref>. §.§ Proof of Corollary <ref> By a two-fold sample splitting, suppose that we have independent held out samples (X̃, Ỹ) identical in distribution to (X,Y). In the sequel we will use samples (X,Y) to construct two separating sets, and use samples (X̃, Ỹ) to make a choice between them. Let the sets Ŝ_> Ŝ_>([k]), Ŝ_<Ŝ_<([k]) be constructed using X,Y. By Proposition <ref> and <ref>, we have |(Ŝ_>)| |(Ŝ_<)| ≳ϵ^2(1/ϵ∧√(n/k)∧n/k), σ^2(Ŝ_>) + σ^2(Ŝ_<) ≲∑_i∈ [k]p_i+q_i/n≲1/k∨ n, where the last step have used that p_i+q_i≲ 1/k in P_Db. Going forward, we assume that ϵ^2(1/ϵ√(n/k)n/k) ≳√(log(1/δ)/k n), which rearranges to n≳ n_(ϵ, δ, P_D). Consequently, this ensures that |(Ŝ_>)||(Ŝ_<)| ≳ϵ^2(1/ϵ√(n/k)n/k) with probability 1- O(δ). Moreover, as n≳log(1/δ), with probability at least 1-δ we have (n)≤ 2n (cf. Lemma <ref>). Under this event, one has |Ŝ_>| ∨ |Ŝ_<|≤ 2n, and τ(Ŝ_>) ∨τ(Ŝ_<) ≲|Ŝ_>| ∨ |Ŝ_<|/k∧ 1 ≤2n/k∧ 1. Next we make a choice between Ŝ_> and Ŝ_< based on held out samples (X̃, Ỹ). Let p̂, q̂ denote the empirical pmfs constructed using X̃, Ỹ respectively. For any set A⊆[k] write (A) = p̂(A)-q̂(A). We define our final estimator to be Ŝ = Ŝ_> if |(Ŝ_>)| ≥ |(Ŝ_<)|, Ŝ_< otherwise. Clearly τ(Ŝ)≤τ(Ŝ_>) ∨τ(Ŝ_<)≲ 1∧ (n/k). To show the high-probability separation of Ŝ, note that by Lemma <ref>, it holds with probability at least 1- O(δ) that |(Ŝ)| ≥1/2|(Ŝ)| - O(√(τ(Ŝ)log(1/δ)/n) + log(1/δ)/n) = 1/2|(Ŝ_>)| ∨ |(Ŝ_<)| - O(√(log(1/δ)/n∨ k) + log(1/δ)/n) ≥1/4|(Ŝ_>)| ∨ |(Ŝ_<)| - O(√(log(1/δ)/n∨ k) + log(1/δ)/n) = Ω(ϵ^2(1/ϵ√(n/k)n/k) ) - O(√(log(1/δ)/n∨ k) + log(1/δ)/n). Here the first term always dominates the second as long as n≳ n_(ϵ, δ, P_D). §.§ Proof of Proposition <ref> Similar to the proof of Corollary <ref>, we apply a two-fold sample splitting to obtain n independent held out samples (X̃, Ỹ). In the sequel we construct 2(ℓ+2) candidate separating sets from (X,Y), and make a choice among them using held out samples (X̃, Ỹ). The construction of the 2(ℓ+2) separating sets is simple: for each j∈{0,1,⋯,ℓ+1}, we construct two sets Ŝ_>(D_j) and Ŝ_<(D_j). The following lemma summarizes some properties of these separating sets. Recall that we assume that t = k(c_0 m/log(1/δ)) > n so that ℓ = ⌈log_2(t/n)⌉≥ 1. Fix any j∈{0,1,⋯,ℓ+1}, and let ϵ_j = ∑_i∈ D_j |p_i - q_i|. With probability at least 1-δ, the following statements hold: * if j=0, then |(Ŝ_>(D_0)) | ∨|(Ŝ_<(D_0)) | ≳ E_0 - O(√(E_0log(1/δ)/n)), where E_0 = ∑_i∈ D_0 n|p_i-q_i|^2 ∧ |p_i-q_i| ≳n ϵ_0^2/k =: Ẽ_0(ϵ_0). * if j∈ [ℓ], then |(Ŝ_>(D_j)) | ∨|(Ŝ_<(D_j)) | ≳ E_j - O(√(E_jlog(1/δ)/n)), where E_j = ∑_i∈ D_j n|p_i-q_i|^2 ∧ |p_i-q_i| ≳n ϵ_j^2/√(kt/2^j)=: Ẽ_j(ϵ_j). * if j=ℓ+1, then |(Ŝ_>(D_ℓ+1)) | ∨|(Ŝ_<(D_ℓ+1)) | ≳ E_ℓ+1 - O(√(log(1/δ)/n)), where E_ℓ+1 = ∑_i∈ D_ℓ+1n|p_i-q_i|^2/√(nq_i)∧ |p_i-q_i| ≳√(n/k)ϵ_ℓ+1^2 =: Ẽ_ℓ+1(ϵ_ℓ+1). We prove the above statements separately. * Case I: j=0. By Proposition <ref>, it holds that [(Ŝ_>(D_0)) - (Ŝ_<(D_0))] ≳∑_i∈ D_0 n|p_i-q_i|^2 ∧ |p_i-q_i| = E_0, where we have used Lemma <ref> that q_i≤ 2/t≤ 2/n for all i∈ D_0. Moreover, σ^2((Ŝ_>(D_0)))∨σ^2((Ŝ_<(D_0))) ≲∑_i∈ D_0 |p_i-q_i|^2 ∧p_i+q_i/n≲∑_i∈ D_01/n(n|p_i-q_i|^2 ∧ |p_i-q_i|) = E_0/n, where the last inequality is due to the following deterministic inequality: if q≤ 2/n, then |p-q|^2 ∧p+q/n≲1/n(n|p-q|^2 ∧ |p-q|). The proof of the above deterministic inequality is based on two cases: * if p≤ 3/n, then |p-q|^2 ≲ |p-q|^2 ∧ (|p-q|/n); * if p > 3/n, then p+q≲ n|p-q|^2 ∧ |p-q|. Consequently, we have the first statement. For the second statement, similar to the proof of Proposition <ref> we have E_0 ≥min_ϵ_0' ∈ [0,ϵ_0](n(ϵ_0')^2/k + ϵ_0 - ϵ_0') ≳ϵ_0^2(1/ϵ_0∧n/k) ≍nϵ_0^2/k. * Case II: j∈ [ℓ]. By Proposition <ref> and Lemma <ref> we have [(Ŝ_>(D_j)) - (Ŝ_<(D_j))] ≳∑_i∈ D_j n(p_i-q_i)^2 ∧ |p_i-q_i| = E_j. Similar to Case I, we have σ^2((Ŝ_>(D_j)))∨σ^2((Ŝ_<(D_j))) ≲∑_i∈ D_j |p_i-q_i|^2 ∧p_i+q_i/n≲E_j/n, and the first statement follows. For the second statement, note that |D_j|≤ t/2^j-1= O(√(kt/2^j)) by Lemma <ref>. Therefore, E_j ≥min_ϵ_j' ∈ [0,ϵ_j](n(ϵ_j')^2/|D_j| + ϵ_j - ϵ_j') ≳ϵ_j^2(1/ϵ_j∧n/√(kt/2^j)) ≍nϵ_j^2/√(kt/2^j). * Case III: j=ℓ+1. By Proposition <ref> and Lemma <ref>, we have [(Ŝ_>(D_ℓ+1)) - (Ŝ_<(D_ℓ+1))] ≳∑_i∈ D_ℓ+1n(p_i-q_i)^2/√(nq_i)∧ |p_i-q_i| = E_ℓ+1. The first statement then follows from Lemma <ref>. The second statement then follows from E_ℓ+1≥min_ϵ_ℓ+1' ∈ [0,ϵ_ℓ+1](n(ϵ_ℓ+1')^2/√(nk) + ϵ_ℓ+1 - ϵ_ℓ+1') ≳ϵ_ℓ+1^2(1/ϵ_ℓ+1∧√(n/k)) ≍√(n/k)ϵ^2_ℓ+1. The proof is complete. Based on Lemma <ref>, we are about to describe how we choose from the sets {Ŝ_>(D_j), Ŝ_<(D_j)}_j=0^ℓ+1. Similar to the proof of Corollary <ref>, using the held out samples (X̃, Ỹ), we can obtain the empirical estimates (Ŝ_s(D_j)) for all s∈{>,<} and j∈{0,1,⋯,ℓ+1}. With a small absolute constant c_1>0 and Ẽ_j as defined in Lemma <ref>, the selection rule is as follows: if there is some s∈{>,<} and j∈{0,1,⋯,ℓ+1} such that |(Ŝ_s(D_j))| ≥ c_1Ẽ_j(ϵ/(ℓ+2)), then choose Ŝ = Ŝ_s(D_j); if there is no such pair (s,j), choose an arbitrary Ŝ. We first show that with probability at least 1- O(kδ), such a pair (s,j) exists. Since p-q_1≥ϵ, there must exist some j∈{0,1,⋯,ℓ+1} such that ϵ_j ≥ϵ/(ℓ+2). As long as n ≥ c_2 n_(ϵ/ℓ,δ, P_ D) for a large constant c_2>0, one can check via Lemma <ref> that |(Ŝ_>(D_j)) | ∨ |(Ŝ_<(D_j)) | ≥ 4c_1Ẽ_j(ϵ/(ℓ+2)) for a small enough universal constant c_1>0. Assuming that n≳log(1/δ), we have τ(Ŝ_>(D_j))∨τ(Ŝ_<(D_j)) = O(n2^j/t) with probability 1- O(δ) due to Poisson concentration (Lemma <ref>). On this event, it holds with probability at least 1-δ that (cf. Lemma <ref>) |(Ŝ_>(D_j)) | ∨ |(Ŝ_<(D_j)) | ≥ 2c_1Ẽ_j(ϵ/(ℓ+2))- O(√(2^jlog(1/δ)/t) + log(1/δ)/n), which is at least c_1Ẽ_j(ϵ/(ℓ+2)) as long as n √(t/k)≍ n √(1m/log(1/δ)k)≥ c_3 n_(ϵ/ℓ,δ, P_ D) for some large c_3>0. Therefore, provided (<ref>) holds, the desired pair (j,s) exists with probability 1- O(kδ) due to a union bound. Conversely, if |(Ŝ_s(D_j))| ≥ c_1Ẽ_j(ϵ/(ℓ+2)) holds for some (s,j), the true separation |(Ŝ_s(D_j))| is at least of the same order as well. Indeed, Lemma <ref> shows that |(Ŝ_s(D_j))| ≥1/2|(Ŝ_s(D_j))| - O(√(2^jlog(1/δ)/t) + log(1/δ)/n), which is at least c_1E_j(ϵ/(ℓ+2))/4 as long as (<ref>) holds. This completes the proof. §.§ Proof of Proposition <ref> The statement of Proposition <ref> follows immediately from the following lemma. Let sep(Ŝ) μ_θ^X(Ŝ) - μ_θ^Y(Ŝ). There exist universal constants c_i > 0,i∈[5] such that for J = ⌊ c_1ϵ^-1/s⌋ we have [ sep(Ŝ)] + c_2/√(n) ≥c_3ϵ^2/ϵ + √(J/n) (|(Ŝ) - (Ŝ)| ≥ t+c_4/√(n)) ≤ 2exp(-c_5 n t^2) for all t≥0. Write ·,⟨·,·⟩ for the ℓ^2 norm/inner product restricted to the first J coordinates. Notice that given θ̂^X and θ̂^Y, T(θ) is simply a Gaussian random variable with T(θ) = θ̂^Y-θ^2-θ̂^X-θ^2 and (T) = 4θ̂^X-θ̂^Y^2. Define the vectors U = {θ̂^X_j-θ̂^Y_j}_j=1^J V = {θ̂^X_j+θ̂^Y_j}_j=1^J. Note that they are independent, jointly Gaussain with variance 2I_J/n and means equal to the first J coordinates of θ^X∓θ^Y respectively. Let Φ be the cdf of the standard Gaussian and ϕ = Φ' be its density. The separation can be written as sep(Ŝ) = f(θ^X) - f(θ^Y), where f(θ) = Φ(θ̂^Y-θ^2-θ̂^X-θ^2/2θ̂^X-θ̂^Y) = Φ(-1/2⟨ V ,U/U⟩ + ⟨θ, U/U⟩). We focus on proving the desired tail bound first. To make the dependence on the variables explicit, write g(U,V) = f(θ^X)-f(θ^Y) for the separation. Given U, V is a N(θ^X+θ^Y, 2I_j/n) random variable. Differentiating g and using that ϕ is 1/√(2π e)-Lipschitz we have ∇_V g(U,V) = -1/2U/U(ϕ(-1/2⟨ V,U/U⟩ + ⟨θ^X, U/U⟩) - ϕ(-1/2⟨ V,U/U⟩ + ⟨θ^Y, U/U⟩)) ≤1/√(8π e)|⟨θ^X-θ^Y, U/U⟩| ≤C_G/√(8π e). By Lipschitz concentration of the Gaussian distribution (Lemma <ref>) we conclude that g-[g|U] is sub-Gaussian with variance proxy C_G^2/(4π en). Next we study the concentration of [g|U]. To this end, note that .-1/2⟨ V, U/U⟩ + ⟨θ, U/U⟩| U ∼ N(⟨θ-1/2(θ^X+θ^Y), U/U⟩, 1/2n). Thus, using the independence of U and V and Lemma <ref> we obtain [g(U,V)|U] = [f(θ^X)-f(θ^Y)|U] = Φ(W/√(4+2/n)) - Φ(-W/√(4+2/n)), where we write W ⟨θ^X-θ^Y, U/U⟩. Let Φ̃= Φ(·/√(4+2/n)) to ease notation. Once again by Lipschitzness of Φ, we obtain for every t≥0 that (|Φ̃(W)-Φ̃(W)| ≥ t) ≤(|Φ̃(W)-Φ̃( W)| ≥ t - Φ̃_Lip√((W))) ≤(|W- W| ≥t/Φ̃_Lip - √((W))), and an analogous inequality can be obtained for -W. The last ingredient is showing that W concentrates well. W is sub-Gaussian with variance proxy 1/(2n). To simplify notation, let τ = θ^X-θ^Y, σ^2 = 1/(2n) and let Q be a zero-mean identity-covariance Gaussian random vector so that W d=⟨τ, τ + σ Q/τ + σ Q⟩. We have ⟨τ, τ + σ Q/τ + σ Q⟩ = ⟨τ/τ + σ Q, τ + σ Q/τ + σ Q⟩_|·|≤1 almost surely(τ+σ Q-τ+σ Q)_σ^2 sub-Gaussian + σ⟨τ/τ+σ Q, Q⟩_σ^2 sub-Gaussian, where we use that τ + σ Q≥τ by Jensen's inequality, and apply Lemma <ref> twice. Overall, this implies that W is sub-Gaussian with variance proxy σ^2=1/(2n) as required. Recall that we have decomposed the separation as follows: (Ŝ) - (Ŝ) = g - [g|U]_ O(1/n) sub-Gaussian + Φ̃(W)-Φ̃(-W) - [Φ̃(W)-Φ̃(-W)]_ O(1/n) sub-Gaussian tails beyond O(1/√(n)), which completes the proof. Let us turn to calculating the expected separation. We have already seen that sep(Ŝ) = [Φ̃(W) - Φ̃(-W)]. Again by Lipschitzness we have |Φ̃(W)-Φ̃( W)| ≤Φ̃_Lip|W- W| ≲ 1/√(n) by Lemma <ref>. Thus, we see that sep(Ŝ) + Ω(1/√(n)) ≥Φ̃( W) - Φ̃(- W), where the implied constant is universal. To simplify notation, let τ = θ^X-θ^Y, σ^2 = 1/(2n) and let Q be a standard normal random variable. Looking at W we have W = ⟨τ, τ + σ Q/τ + σ Q⟩ = 1/σ⟨τ, ∇_Qτ+σ Q⟩ = 1/σ[⟨τ, Q⟩τ + σ Q] by Stein's identity. By the rotational invariance of the Gaussian distribution, the above is equal to W = τ/σ[Q_1√((τ + σ Q_1)^2 + … + σ^2 Q_J^2)] = τ/σ[ Q_1 √((τ + σ Q_1)^2 + … + σ^2 Q_J^2) - Q_1 √(τ^2 + σ^2 Q_1^2 + … + σ^2 Q_J^2)] = 2τ^2 [Q_1^2/√((τ + σ Q_1)^2 + … + σ^2 Q_J^2) + √(τ^2 + σ^2 Q_1^2 + … + σ^2 Q_J^2)]. By the Cauchy-Schwarz inequality we have ( |Q_1|)^2 ≲[Q_1^2/√((τ + σ Q_1)^2 + … + σ^2 Q_J^2) + √(τ^2 + σ^2 Q_1^2 + … + σ^2 Q_J^2)] × (τ + σ√(J)). Plugging into our expression for W this yields W ≳τ^2/τ + σ√(J). To clarify notation, let us now write ·_J for the ℓ^2-norm restricted to the first J coordinates. Taking J = c ϵ^-1/s it holds that τ_J^2 = τ^2 - ∑_j>Jτ_j^2 ≥τ^2 - J^-2s∑_j > Jτ_j^2 j^2s = τ^2 - c^-2sϵ^2 τ_s^2. Since τ_s≲1 and τ≥ϵ by assumption, we see that for large enough universal constant c we have τ_J≥ϵ/2. Since the map x↦ x^2/(x+c) is increasing for x,c > 0 it follows that W ≳ϵ^2/ϵ + √(J/n) for a universal implied constant. By the inequality Φ(x) - Φ(-x) ≥ x/2 for x ∈ [0,1] we obtain Φ̃( W) - Φ̃(- W) ≥ 1 W/2, which completes the proof. § LOWER BOUNDS Recall the notation of Section <ref>. Given two hypotheses H_0, H_1, our aim is to lower bound the minimum achievable worst-case error. To this end, we use the following standard fact: min_ψmax_i=0,1sup_P ∈ H_i_S∼ P(ψ(S)≠ i) ≥1/2(1-( P_0, P_1)), where P_0,P_1 are any random probability distributions with (P_i ∈ H_i)=1 and P_i denote the corresponding mixtures and denotes the total variation distance. Hence, deriving a lower bound of order δ on the minimax error reduces to the problem of finding mixtures P_i such that 1-( P_0, P_1) =Ω( δ). To this end we utilize standard inequalities between divergences. For any probability measures , the inequalities 1-(,) ≥1/2e^-()≥1/2(1+χ^2()) hold, where and χ^2 denote the Kullback-Leibler and χ^2 divergence respectively. Many of our lower bounds will follow from reduction to prior work. §.§ Lower bounds for P_Db In <cit.> the authors gave the construction of distributions p_η,ϵ, p_0 ∈ P_Db(k, 2) (originally due to Paninski) for a mixing parameter η such that (p_η,ϵ, p_0) = ϵ≍√((p_η,ϵ, p_0)) for all η, where the implied constant is universal. They further showed that χ^2(_η p_η,ϵ^⊗ n, p_0^⊗ n) ≤exp(cn^2ϵ^4/k)-1 and χ^2(_η[p_0^⊗ n⊗ p_ϵ,η^⊗ (n+m)] _η[ p_0^⊗ n⊗ p_ϵ,η^⊗ n⊗ p_0^⊗ m]) ≤exp(cm(n+m)ϵ^4/k)-1 for a universal c>0. More precisely, (<ref>) can be extracted from <cit.> using the chain rule for χ^2 (as opposed to ). §.§.§ Lower bound for and Take P_0 = p_0^⊗ 2n and P_1=p_ϵ, η_0^⊗ n⊗ p_0^⊗ n in (<ref>) for a fixed η_0. Then, by Lemma <ref> and the data-processing inequality we have 1-( P_0, P_1) ≥1/2exp(-n(p_ϵ,η p_0)) ≥1/2exp(-cnϵ^2) !=Ω(δ) for a universal c>0. This shows that , are impossible at total error δ unless n ≳log(1/δ)/ϵ^2, which gives the first term of our lower bound. For the second term, consider the random measures P_0 = p_0^⊗ 2n and P_1 = p_0^⊗ n⊗ p_ϵ,η^⊗ n in (<ref>). Then using (<ref>) and Lemma <ref> we have 1-( P_1, P_0) ≥1/21/1+χ^2( P_1 P_0) ≥1/2exp(-cn^2ϵ^4/k) !=Ω(δ). Therefore, is impossible unless n≳√(klog(1/δ))/ϵ^2, which yields the second term of our lower bound. §.§.§ Lower bound for The necessity of m≳log(1/δ)/ϵ^2 and n≳√(klog(1/δ))/ϵ^2 follows as for above. Taking P_0 = p_0^⊗ n⊗ p_ϵ,η^⊗ n⊗ p_0^⊗ m and P_1 = p_0^⊗ n⊗ p_ϵ,η^⊗ (n+m) in (<ref>), using (<ref>) and Lemma <ref> we obtain the inequality 1-( P_0, P_1) ≥1/21/1+χ^2( P_1 P_0) ≥1/2exp(-cm(m+n)ϵ^4/k) !=Ω(δ). Therefore, is impossible with error O(δ) unless mn≳ klog(1/δ)/ϵ^4 (note that the m^2-term is never active), which completes the lower bound proof. §.§ Lower bounds for P_H We don't provide the details because they are entirely analogous to Section <ref> and rely on classical constructions that can be found in <cit.>. §.§ Lower bounds for P_G Given a vector η∈{± 1}^ define the measure _η = ⊗_j=1^∞ N(η_j c_1ϵ^2s+1/2s, 1) if 1 ≤ j ≤ c_2ϵ^-1/s, N(0, 1) otherwise. Let η_1,η_2,… be iid uniform signs in {±1}, and γ_η be the mean vector of _η. Writing ·_s for the Sobolev-norm of smoothness s and · for the Euclidean norm, we see that for any η γ_η_s^2 = ∑_j=1^∞ j^2sγ_η j^2 = ∑_j=1^c_2ϵ^-1/s j^2s c_1^2 ϵ^2s+1/s≤ c_1^2 ϵ^2s+1/s(2c_2ϵ^-1/s)^2s+1≍ c_1^2 c_2^2s+1, γ_η^2 = ∑_j=1^∞γ_η j^2 = c_1^2 ϵ^2s+1/s c_2ϵ^-1/s≍ c_1^2c_2ϵ^2. Then for any C_G>0 we can choose c_1,c_2 independently of ϵ such that _0, _η∈ P_G(s, C_G) almost surely and γ_η=10ϵ. Then for ϵ≤ 1/10 we know that (_0,_η) = 2Φ(γ_η/2) - 1 ≥ϵ. §.§.§ Lower bounds for and Take P_0 = _0^⊗ 2n and P_1 = _^⊗ n⊗_0^⊗ n. Then (P_0 P_1) = n (_0 _) = n c_2ϵ^-1/s(c_1ϵ^2s+1/2s-0)^2/2≍ nϵ^2. Using Lemma <ref> this gives us 1-(P_0,P_1) ≳exp(-(P_0P_1)) = exp(-Θ(nϵ^2)) !=Ω(δ). By (<ref>) we know then that n ≳log(1/δ)/ϵ^2 is necessary for both and over P_G. To get the second term in the minimax sample complexity consider the construction P_0 = _0^⊗ 2n and P_1 = _η^⊗ n⊗_0^⊗ n where η is a uniformly random vector of signs. Writing ω = c_1ϵ^2s+1/2s note that _η^⊗ n = ⊗_j=1^c_2ϵ^-1/s(1/2 N(ω, 1)^⊗ n + 1/2 N(-ω, 1)^⊗ n). From here we can compute (P_0 P_1) ≍ϵ^-1/s( N(0,1)^⊗ n1/2 N(ω, 1)^⊗ n + 1/2 N(-ω, 1)^⊗ n) ≍ϵ^-1/s(n/2ω^2 - _X ∼ N(0,I_n)logcosh(ω∑_i=1^n X_i)) ≤ϵ^-1/s/4 n^2 ω^4 ≍ n^2ϵ^4s+1/s, where we used the inequality logcosh(x) ≥x^2/2 - x^4/12 for all x∈. Thus, using Lemma <ref>, 1-(P_0 P_1) ≳exp(-(P_0_1)) ≥exp(-Θ(n^2ϵ^4s+1/s)) !=Ω(δ). By (<ref>) we know then that n ≳√(log(1/δ))/ϵ^2s+1/2/s is necessary for both and over P_G. §.§.§ Lower bounds for If m≥ n, from the lower bound n≳ n_ we conclude that mn≳ n_^2, as desired. Therefore, throughout this section we assume that m<n. Let P_0 = _η^⊗ n⊗_0^⊗ n⊗_η^m and P_1=_η^⊗ n⊗_0^⊗ n⊗_0^⊗ m, where η is a uniformly random vector of signs. Once again, we define ω = c_1ϵ^2s+1/2s. We follow a proof similar to the cases P_Db, P_H in <cit.>. We use the data processing inequality, the chain rule and tensorization of χ^2: χ^2( P_0 P_1) = χ^2(_η^⊗ (n+m)_η^⊗ n⊗_0^⊗ m) = (_X_1_η_1|X_η_1'|X_1∫_^mexp(-1/2∑_j=1^m {(z_j-η_1ω)^2 + (z_j-η_1'ω)^2})/(2π)^m/2exp(-1/2∑_j=1^mz_j^2) z)^c_2ϵ^-1/s-1, where X_1 ∼ (1/2 N(ω,1/n)+1/2 N(-ω, 1/n)) and η_1, η_1' | X_1 are iid scalar signs from the posterior p(· | X_1), with joint distribution p(η_1, X_1) = ϕ(√(n)(X_1 - η_1ω))/2. The Gaussian integral above can be evaluated exactly and we obtain χ^2( P_0 P_1) = (_X_1,η_1,η_1'exp(ω^2mη_1η_1'))^c_2ϵ^-1/s-1. Now, we can calculate (η_1=η_1') = _X_1p(X_1|η_1=1)^2+p(X_1|η_1=-1)^2/(p(X_1|η_1=1)+p(X_1|η_1=-1))^2 = 1/2 + 1/4∫(p(x_1|η_1=1) -p(x_1|η_1=-1))^2/p(x_1|η_1=1)+p(x_1|η_1=-1)x̣_1 ≤1/2 + 1/16∑_b∈{± 1}χ^2( N(bω,1/n) N(-bω,1/n)) = 1/2 + exp(4ω^2n)-1/8. Together with (η_1=η_1') ≤ 1, we have _X_1,η_1,η_1'exp(ω^2mη_1η_1') ≤ e^-ω^2m + (1/2+1/2∧e^4ω^2n-1/8)(e^ω^2m-e^-ω^2m) = cosh(ω^2m) + tsinh(ω^2m), with t = 1∧ ((e^4ω^2n-1)/4). Distinguish into two scenarios: * if t=1, then 4ω^2n≥ 1, and the above expression is e^ω^2 m≤ e^4ω^4nm; * if t<1, then ω^2 n≤ 1/2 and t≤ 8ω^2 n. Since m<n, and cosh(x)≤ 1+x^2, sinh(x)≤ 2x for all x∈ [0,1], the above expression is at most 1 + (ω^2 m)^2 + 2tω^2m ≤exp(17ω^4mn). Combining the above scenarios, we have χ^2( P_0 P_1) ≤exp(17ω^4nm· c_2ϵ^-1/s)-1. Thus, we obtain 1-( P_0, P_1) ≳1/1+χ^2( P_0 P_1)≥exp(-17ω^4nm· c_2ϵ^-1/s) !=Ω(δ). This gives the desired lower bound nm ≳log(1/δ)/ϵ^4s+1/s. §.§ Lower bounds for P_D Clearly all lower bounds that apply to P_Db also apply to P_D; in particular this gives the sample complexity lower bound for . In addition, lower bounds on the minimax high-probability sample complexity of were derived in <cit.>. Hence, inspecting the claimed minimax rates, we only need to consider the problem in the cases m ≤ n ≤ k and n ≤ m ≤ k. We give two separate constructions for the two cases, both inspired by classical constructions in the literature. As opposed to the i.i.d. sampling models, we will use the Poissonized models and rely on the formalism of pseudo-distributions as described in <cit.>. Specifically, suppose we can construct a random vector (p,q) ∈ [0,1]^2 such that 1) p = q = Θ(1/k) and | [p-q]| =Θ(ϵ/k); and 2) the following χ^2 upper bounds hold for the Poisson mixture: χ^2([(np)⊗(nq)⊗(mp)] [ (np)⊗(nq)⊗(mq)]) ≤ B(n,m,ϵ,k), χ^2([(nq)⊗(np)⊗(mp)] [(np)⊗(nq)⊗(mp)]) ≤ B(n,m,ϵ,k); then (n,m) ∈ R_LF(ϵ, δ, P_D) requires kB(n,m,ϵ,k) ≳log(1/δ) (essentially via Lemma <ref>). §.§.§ Case m ≤ n ≤ k Suppose that m ≤ n ≤ k/2, and let p,q be two random variables defined as (p,q) = (1/n, 1/n) with probability n/k, (ϵ/k, 2ϵ/k) with probability 1/2(1-n/k), (ϵ/k, 0) with probability 1/2(1-n/k). Note that [p]=[q]=Θ(1/k) and |[p-q]|=Θ(ϵ/k). Let X,Y ∈^3 be random, whose distribution is given by X | (p,q) ∼(np) ⊗(nq) ⊗(mp), Y | (p,q) ∼(np) ⊗(nq) ⊗(mq). Now, for any (a,b,c) ∈^3 we have (X=(a,b,c)) = 1/a!b!c!(n/k e^-2-m/n(m/n)^c + 1/2 (1-n/k) e^-(3n+m)ϵ/k(ϵ n/k)^a (2ϵ n/k)^b (ϵ m/k)^c + 1/2 (1-n/k) e^-(n+m)ϵ/k(ϵ n/k)^a _b=0(ϵ m/k)^c ). Similarly, for Y we get (Y=(a,b,c)) = 1/a!b!c!( n/k e^-2-m/n(m/n)^c + 1/2 (1-n/k) e^-(3n+2m)ϵ/k(ϵ n/k)^a (2ϵ n/k)^b (2ϵ m/k)^c + 1/2 (1-n/k) e^-nϵ/k(ϵ n/k)^a _b=c=0). In particular, we have (Y=(a,b,c)) = Ω(1/a!b!c!) 1 if (a,b,c) = (0,0,0), n/k(m/n)^c otherwise. Notice also that (X=(a,b,c)) - (Y=(a,b,c)) =1/a!b!c!1/2(1-n/k) e^-nϵ/k_=Θ(1)(ϵ/k)^a+b+c n^a+b m^c [2^be^-(2n+m)ϵ/k(1-2^ce^-mϵ/k) + _b=0(e^-mϵ/k-_c=0)]_ I_bc = Θ(1)/a!b!c!(ϵ/k)^a+b+cn^a+bm^c I_bc, where |I_bc| ≲nmϵ^2/k^2 if b=c=0, 2^bmϵ/k if b≥1,c=0, nϵ/k if b=0,c=1, 2^b+c otherwise. We now turn to bounding the χ^2-divergence between X and Y. Using the estimates (<ref>), we obtain χ^2(X Y) = ∑_(a,b,c)∈^3((X=(a,b,c))-(Y=(a,b,c)))^2/(Y=(a,b,c)) ≲ I_00^2 + (∑_b=c=0,a≥1 + ∑_a≥0,b+c≥1) 1/a!b!c!(ϵ/k)^2a+2b+2c n^2a+2b m^2c I_bc^2/n/k(m/n)^c = I_00^2(1+ ∑_a≥11/a!ϵ^2a n^2a-1/k^2a-1) + (∑_a≥01/a!ϵ^2a n^2a/k^2a) ∑_b+c≥11/b!c!ϵ^2b+2cn^2b+c-1m^c/k^2b+2c-1 I_bc^2 ≲n^2m^2ϵ^4/k^4(1 + nϵ^2/k e^ϵ^2n^2/k^2)_=Θ(1) + e^ϵ^2 n^2/k^2_=Θ(1)∑_b+c≥11/b!c!ϵ^2b+2cn^2b+c-1m^c/k^2b+2c-1 I_bc^2. Focusing on the sum and decomposing it as ∑_b+c≥1 = ∑_c=0,b≥1 + ∑_b=0,c=1 + ∑_b=0,c≥2 + ∑_b,c≥1 we have the estimates ∑_b+c≥11/b!c!ϵ^2b+2cn^2b+c-1m^c/k^2b+2c-1 I_bc^2 ≲∑_c=0,b≥11/b!ϵ^2b+2 n^2b-1 4^b m^2/k^2b+1 + ϵ^4mn^2/k^3 + ∑_b=0,c≥21/c!ϵ^2c n^c-1m^c 4^c/k^2c-1 + ∑_b,c≥11/b!c!ϵ^2b+2cn^2b+c-1m^c4^b+c/k^2b+2c-1 ≲ϵ^4m^2n/k^3 + ϵ^4mn^2/k^3 + ϵ^4m^2n/k^3 + ϵ^4mn^2/k^3≲ϵ^4m n^2/k^3. As m≤ k, we obtain χ^2(X Y) ≲ϵ^4mn^2/k^3. By (<ref>) we conclude that in the regime m ≤ n ≤ k, (n,m) ∈ R_LF(ϵ, δ, P_D) requires n^2m≳ k^2log(1/δ)/ϵ^4, as desired. §.§.§ Case n ≤ m ≤ k This case is entirely analogous to the previous case with minor modifications. Suppose that n ≤ m ≤ k/2, and let p,q be two random variables defined as (p,q) = (1/m, 1/m) with probability m/k, (ϵ/k, 2ϵ/k) with probability 1/2(1-m/k), (ϵ/k, 0) with probability 1/2(1-m/k). Let X,Y ∈^3 be random, whose distribution is given by X | (p,q) ∼(np) ⊗(nq) ⊗(mp), Y | (p,q) ∼(nq) ⊗(np) ⊗(mp). Now, for any (a,b,c) ∈^3 we have (X=(a,b,c)) = 1/a!b!c!(m/k e^-2n/m-1(n/m)^a+b + 1/2(1-m/k)e^-(3n+m)ϵ/k(ϵ n/k)^a(2ϵ n/k)^b (ϵ m/k)^c + 1/2(1-m/k)e^-(n+m)ϵ/k(ϵ n/k)^a_b=0(ϵ m/k)^c). Similarly, for Y we get (Y=(a,b,c)) = 1/a!b!c!(m/k e^-2n/m-1(n/m)^a+b + 1/2(1-m/k)e^-(3n+m)ϵ/k(2ϵ n/k)^a(ϵ n/k)^b (ϵ m/k)^c + 1/2(1-m/k)e^-(n+m)ϵ/k_a=0(ϵ n/k)^b(ϵ m/k)^c). In particular, we have (Y=(a,b,c)) = Ω(1/a!b!c!) 1 if (a,b,c) = (0,0,0), m/k(n/m)^a+b otherwise. Notice that (X=(a,b,c))-(Y=(a,b,c)) = 1/a!b!c!1/2(1-m/k) e^-(n+m)ϵ/k(ϵ/k)^a+b+c n^a+bm^c (e^-2nϵ/k(2^b-2^a) + _b=0-_a=0)_ J_ab = Θ(1)/a!b!c!(ϵ/k)^a+b+cn^a+bm^c J_ab, where |J_ab| ≲ 0 if a+b=0, nϵ/k if a+b = 1, 2^a+b if a+b≥2. We now turn to bounding the χ^2-divergence between X and Y. We have χ^2(X Y) = ∑_(a,b,c)∈^3((X=(a,b,c))-(Y=(a,b,c)))^2/(Y=(a,b,c)) ≍∑_a+b+c≥ 11/a!^2b!^2c!^2(ϵ/k)^2a+2b+2c n^2a+2bm^2cJ_ab^2/1/a!b!c!m/k(n/m)^a+b ≍∑_a+b+c≥ 11/a!b!c!ϵ^2a+2b+2c n^a+bm^2c+a+b-1J_ab^2/k^2a+2b+2c-1 = e^ϵ^2m^2/k^2_Θ(1)∑_a+b≥11/a!b!ϵ^2a+2bn^a+bm^a+b-1J_ab^2/k^2a+2b-1, where the last step follows from J_ab=0 if a=b=0. Now writing t=a+b and distinguishing into cases t=1 and t≥ 2, by (<ref>) we have χ^2(XY) ≲ϵ^4 n^3/k^3 + ∑_t≥ 22^t/t!ϵ^2t n^t m^t-1 4^t/k^2t-1≲ϵ^4 n^3/k^3 + ϵ^4n^2m/k^3≲ϵ^4n^2m/k^3, where the last line uses that n ≤ m. Once again, we can conclude by (<ref>) that n^2m≳log(1/δ)k^2/ϵ^4 is a lower bound for the sample complexity of .
http://arxiv.org/abs/2306.04297v2
20230607095617
On Artin's Primitive Root Conjecture for Function Fields over $\mathbb{F}_{q}$
[ "Leonhard Hochfilzer", "Ezra Waxman" ]
math.NT
[ "math.NT", "math.AG", "11R58, 11T23" ]
In 1927, E. Artin proposed a conjecture for the natural density of primes p for which g generates (ℤ/pℤ)^×. By carefully observing numerical deviations from Artin’s originally predicted asymptotic, Derrick and Emma Lehmer (1957) identified the need for an additional correction factor; leading to a modified conjecture which was eventually proved to be correct by Hooley (1967) under the assumption of the generalised Riemann hypothesis. An appropriate analogue of Artin's primitive root conjecture may moreover be formulated for an algebraic function field K of r variables over . Relying on a soon to be established theorem of Weil (1948), Bilharz (1937) provided a proof in the particular case that K is a global function field (i.e. r=1), which is correct under the assumption that g ∈ K is a geometric element. Under these same assumptions, Pappalardi and Shparlinski (1995) established a quantitative version of Bilharz's result. In this paper we build upon these works by both generalizing to function fields in r variables over and removing the assumption that g ∈ K is geometric; thereby completing a proof of Artin's primitive root conjecture for function fields over . In doing so, we moreover identify an interesting correction factor which emerges when g is not geometric. A crucial feature of our work is an exponential sum estimate over varieties that we derive from Weil's Theorem. Regular black hole from regular initial data Pankaj S. Joshi July 31, 2023 ============================================ § INTRODUCTION §.§ Classical Primitive Root Conjecture Consider an odd prime p ∈ℕ, and recall that the multiplicative group (/p)^× is cyclic of order p-1. We say that g ∈∖{0} is a primitive root mod p (denoted ord_p(g) = p-1) if the p-adic valuation satisfies v_p(g) = 0, and if the reduction g mod p generates the group (/p)^×. If g ∈^×,2 is a perfect square or g = -1, then there are at most finitely many primes p such that ord_p(g) = p-1. Artin's primitive root conjecture states that in all other cases, g is a primitive root mod p for infinitely many primes p. §.§ Primitive Root Conjecture for 𝔽_q(t) Artin first proposed his conjecture in a private conversation with Helmut Hasse on September 12, 1927. Hasse then assigned the problem to his PhD student, Herbert Bilharz, who began working on the problem in 1933. Shortly afterwards, Erdős announced a proof of the conjecture, conditional on the generalized Riemann hypothesis for certain Dedekind zeta functions. Erdős did not, in fact, publish any formal paper on the topic; but the threat was nonetheless sufficient motivation for Bilharz to shift the topic of his dissertation to the analogous problem over global function fields. For more on this interesting history see <cit.>. The simplest instance of Artin's conjecture in the function field setting is as follows. Let denote a finite field of q elements and [t] the ring of polynomials with coefficients in . We moreover let 𝒫_n⊂[t] denote the subset of prime monic polynomials of degree n ∈ℕ. For g(t) ∈(t) and P ∈𝒫_n such that v_P(g)=0, we let ord_P(g(t)) denote the order of g(t) in the multiplicative group ([t]/(P))^×, where (P) ⊆[t] denotes the prime ideal generated by the prime P. In particular, g(t) generates ([t]/(P))^× if and only if ord_P(g) = q^n-1, in which case we say that g is a primitive root mod P. Two immediate obstructions prevent g(t) from being a primitive root modulo infinitely many prime P ∈[t]. First, note that if g(t) ∈^× is a constant in [t], then ord_P(g(t)) ≤ q-1, and therefore g(t) cannot be a primitive root modulo P ∈𝒫_n, whenever n >1. Second, suppose g(t) = h(t)^ℓ for some prime ℓ| q-1. Since q^n-1 = (q-1)(q^n-1+… +q+1), we find that ℓ| q^n-1 for any n ∈. Thus ord_P(g) ≤q^n-1/ℓ < q^n-1, from which it follows that g(t) cannot be a primitive root modulo any prime P ∈[t]. In the [t] setting, Artin's primitive root conjecture is then the following claim: Primitive Root Conjecture for (t): Suppose g(t) ∈(t) ∖ is not an ℓ^th power, for any prime ℓ| q-1. Then g(t) is a primitive root mod P for infinitely many prime P ∈[t]. For certain particular choices of g(t) ∈[t], the above conjecture may be proved using elementary methods. For instance, when g(t) = t^m + c, Jensen and Murty <cit.> offer a proof that relies only on the theory of Gauss sums. When g(t) ∈[t] is irreducible, Kim and Murty <cit.> provide a proof that relies only on establishing a zero-free region for relevant L-functions. §.§ Primitive Root Conjecture for Function Fields To formulate Artin's primitive root conjecture over more general function fields, it is appropriate to take a more geometric viewpoint (see Section <ref> for a rather self-contained overview of this geometric set-up.) Let X be a geometrically integral projective variety over of dimension r >0, and write K = (X) for its function field. Given a point 𝔭∈ X, denote by 𝒪_𝔭 = 𝒪_X,𝔭 the stalk of the structure sheaf 𝒪_X at 𝔭. Abusing notation, we moreover write 𝔭⊂𝒪_𝔭 to denote the unique maximal ideal, and then let κ_𝔭 = 𝒪_𝔭/𝔭 denote the corresponding residue field. If ∈ X is moreover closed, then κ_𝔭 is a finite field extension of the base field of X, i.e. 𝔭 := [κ_𝔭] is finite. Fix g ∈ K, and let 𝔭 denote a closed point of X. We say g is regular at 𝔭 if g lies in the image of the embedding 𝒪_𝔭↪ K. Upon pulling g back under this embedding, we may then consider g ∈κ_𝔭. We say that g ∈ K is a primitive root modulo 𝔭 if g is regular at 𝔭 and if g generates the multiplicative group κ_𝔭^×. In such a case, we moreover refer to as an Artin prime for g. As with the case over [t], it may easily be shown that if g lies in or is an ℓ^th for some prime ℓ| q-1, then g may be a primitive root modulo for at most finitely many closed points ∈ X. In this more general set-up, Artin's primitive root conjecture is then the following claim: Primitive Root Conjecture for K: Let K = (X) denote the function field of some geometrically integral projective variety X over . Suppose moreover that g ∈ K ∖ is not an ℓ^th power, for any prime ℓ| q-1. Then for any such X, g is a primitive root modulo , for infinitely many closed points ∈ X. Bilharz  <cit.> addressed the special case in which X is a geometrically integral projective curve over (i.e. the case in which K is a global function field). His result was conditional on the Riemann hypothesis for global function fields - a theorem subsequently established by André Weil <cit.>. Bilharz's proof contains a gap, however, for cases in which g ∈ K is not a geometric element (see Definition <ref>). Though this oversight has been previously noted, no corrected proof for the case in which g ∈ K is geometric has thus far appeared in the literature (see  <cit.> for a more detailed discussion). In our work, we remove the assumption that g ∈ K be a geometric element. Applying an exponential sum estimate derived from Weil's theorem (Proposition <ref>) we moreover generalize to projective varieties of arbitrary dimension. Note that every function field over (i.e. every field extension K/ of positive finite transcendence degree) may be recovered as K = (X), where X is some geometrically integral projective variety of dimension r > 0 over . We thus in fact complete a proof of the following theorem: Let K denote any function field over . Then Artin's primitive root conjecture holds for K. Several additional variations of Artin's primitive root conjecture have been studied over function fields; for example, over Carlitz modules <cit.>, rank one Drinfeld modules <cit.>, and one dimensional tori over function fields <cit.>. We further note that primitive polynomials are of interest for engineering applications involving LFSRs (linear-feedback shift registers). §.§ Quantitative Results As above, let X denote a geometrically integral projective variety over , and let K denote its function field. If K is a global function field and g ∈ K geometric, Bilharz demonstrated that the set of Artin primes for g ∈ K has positive Dirichlet density, i.e. that lim_s → 1∑_∈ P_g|κ_|^-s/∑_∈ X|κ_|^-s>0, where the sum in the denominator runs over the closed points in X, and where P_g⊆ X denotes the set of Artin primes for g ∈ K. From this, Bilharz then deduced Artin's primitive root conjecture for K. A more quantitative description for this count was provided by Pappalardi and Shparlinski. Let N_X(g,n) denote the number of closed points ∈ X such that deg = n, and such that g ∈ K is a primitive root modulo . In the particular case that X= 𝒞 is a non-singular projective curve and g ∈ K is geometric, Pappalardi and Shparlinski provided an asymptotic description of N_𝒞(g,n), proving that for any ε > 0, N_𝒞(g,n) = φ(q^n-1)/n+O_ε,g,𝒞(q^n/2(1+ε)). In this work, we generalize the above result by providing an asymptotic count for N_X(g,n) while removing the assumptions that g ∈(X) is geometric; that X is non-singular; and moroever allowing X to a geometrically integral projective variety of arbitrary finite dimension r ≥ 1. Our main result is the following: Let X/ be a geometrically integral projective variety of dimension r ≥ 1 with function field K = (X). Let g ∈ K be a rational function. If g ∈ or g is a full ℓ^th power in K for some rational prime ℓ| q-1 then N_X(g,n) = 0 for all n > 1. Otherwise, for any ε > 0, N_X(g,n) = ρ_g(n) ( φ(q^n-1)q^n(r-1)/n + O_ε,X,g( q^n(r-1/2+ε)) ), where ρ_g(n) is as in (<ref>). By <cit.>, we note that φ(q^n-1) ≫_ν q^n(1-ν) for any ν∈ (0,1). Thus (<ref>) indeed yields a main term, in the limit as q^n→∞. If g ∈ K is a geometric element, then ρ_g(n) = 1 for all n ≥ 1. We thus recover equation (<ref>) in the case that X= 𝒞 is a projective curve (i.e. r =1). For non-geometric g ∈ K^×, it is possible that ρ_g(n) = 0 for certain n ∈ℕ. Nonetheless, we show that ρ_g(n) ≥ 1 for infinitely many n ∈ℕ, thereby confirming Theorem <ref> (see Section <ref>). §.§ Comparison to Classical Setting Let 𝒫⊂ℕ denote the set of primes, and let 𝒫_g⊆𝒫 denote the subset of Artin primes for g, namely the primes p ∈𝒫 for which ord_p(g) = p-1. When g ∈ is not an exact power, Artin conjectured that the natural density of 𝒫_g⊆𝒫 is equal to A:=∏_p prime(1-1/p(p-1))≈ .3739558, a value known as the Artin constant. Due to careful numerical observations pioneered by Derrick and Emma Lehmer, it later emerged that, for certain g, an additional correction factor is needed. Slightly more generally, the natural density of 𝒫_g⊆𝒫 is conjectured to equal c_gA_h, where A_h is an explicit Euler product, and c_g∈. A_h is moreover a linear factor, which depends on the value of the largest integer h ≥ 1 such that g is an h^th power in , while c_g is a quadratic correction factor that takes into account entanglements between number fields of the form (ζ_ℓ,g^1/ℓ), ℓ∈𝒫. This modified conjecture was eventually proven correct by Hooley <cit.> under the assumption of the generalised Riemann Hypothesis (GRH) for Dedekind ζ-functions. Returning to the function field setting, let P_n denote the closed points of X of fixed degree n, and let P_n(g) ⊆ P_n denote the subset of Artin primes for g ∈ K; so that #P_n(g)=N_X(g,n). When X is a (non-singular) curve and g is geometric, it follows from (<ref>) that the density of P_n(g) ⊆ P_n is asymptotic to A(n) := φ(q^n-1)q^-n, in the limit as q^n →∞. More generally, we find from (<ref>) that the density of P_n(g) ⊆ P_n is asymptotic to A_g(n) := ρ_g(n)φ(q^n-1)q^-n, where ρ_g(n) depends on the factorization behaviour of g in K ⊗_𝔽_q. Note that A(n) does not, in fact, converge in the limit as n →∞. Indeed, even the natural density of Artin primes, namely the limit lim_N →∞∑_n=1^NN_X(g,n)/∑_n=1^N#P_n, does not, in general, exist. This was demonstrated by Bilharz <cit.> and expanded upon by Perng <cit.>. §.§ Structure of Paper The remainder of this paper is structured as follows. In Section <ref> we provide an overview of the relevant geometric set-up in which to present Artin's primitive root conjecture for function fields over . In Section <ref> we then discuss geometric extensions and geometric elements g ∈ K. In Section <ref>, we use Weil's theorem to establish a very general estimate for exponential sums over a variety; a crucial step necessary to extend our results to the general setting of projective varieties. Section <ref> then establishes a proof of Theorem <ref>, and Section <ref> provides a proof of Theorem <ref>. Finally, in Section <ref> we provide a heuristic interpretation of the counting function N_X(g,n), in the case that K=(X) is a global function field; in order to draw a conceptual comparison between our correction factor, ρ_g(n), and the classical correction factor, c_g. §.§ Acknowledgments We thank Jonas Baltes, Chris Hall, Seoyoung Kim, Pieter Moree, Michael Rosen, and Damaris Schindler for helpful discussions. The second author was funded by a Zuckerman Post-doctoral fellowship at the University of Haifa. § BACKGROUND ON PROJECTIVE SCHEMES §.§ Projective Schemes A graded ring is a ring S endowed with a direct sum decomposition S = ⊕_d≥ 0S_d of the underlying additive group, such that S_dS_e⊂ S_d+e. We say that a non-zero element a ∈ S is homogeneous of degree d, denoted deg a=d, if a ∈ S_d. A homogenous ideal is an ideal I ⊂ S that is generated by a set of homogenous elements. The ideal consisting of elements of positive degree, namely S_+:= ⊕_d > 0S_d, is referred to as the irrelevant ideal. If S=⊕_d≥ 0S_d is a graded ring, and I ◃ S a homogenous ideal, then the quotient ring R = S/I is itself a graded ring, with R_d = S_d/(I ∩ S_d). Consider the set Proj(S):= {⊆ S: homogenous prime ideal, S_+⊄}. We define a topology on X = Proj(S) (called the Zariski topology) by designating the closed sets of Proj(S) to be of the form Z(I):={∈Proj(S): I ⊆}, where I ⊂ S denotes a homogenous ideal. A point 𝔭∈ X is said to be a closed point if {𝔭} = {𝔭}, equivalently, if there is no 𝔮∈ X such that ⊊𝔮. The distinguished open set associated to any homogenous element f ∈ S_+ is then given by X_f:=Proj(S) ∖ Z(⟨ f ⟩)= {∈Proj(S): f ∉}, and the collection of such sets, namely {X_f: f ∈ S_+ homogeneous}, forms a basis for the topology on Proj(S). The space Proj(S), together with its Zariski topology, is referred to as a projective scheme. The structure sheaf, denoted 𝒪_X, is a sheaf on Proj(S), defined on the distinguished open sets as 𝒪_X(X_f):= S_(f) = {a/f^n: a ∈ S is homogenous, n ∈_≥ 0, deg a = n·deg f }, i.e. as the zero-degree component of the localization {1,f,f^2,…}^-1S. The projective scheme X = Proj(S) is called integral if S_(f) is an integral domain for any homogeneous f ∈ S_+. An integral projective scheme is, in particular, irreducible as a topological space, and we let dim(X) denote the dimension of X. We then find that dim(Y) < dim(X), for any proper closed subset Y ⊂ X. Finally, we note that any integral scheme X has a generic point η, that is, an element η∈ X such that {η} = X. §.§ Function Fields, Stalks, and Residue Fields Let X denote an integral projective scheme. Then for any homogeneous f,g ∈ S_+, Frac(S_(f)) ≅Frac(S_(g)), where Frac(R) denotes the fraction field of an integral domain R. We can thus define 𝒦(X) := Frac(S_(f)), for any homogeneous f ∈ S_+, to be the function field of X, which can moreover be expressed explicitly as 𝒦(X):= {a/b: a, b ∈ S_d for some d ∈_≥ 0, b ≠ 0 }. The stalk at a point 𝔭∈ X refers to the local ring 𝒪_X,𝔭 := {a/b∈𝒦(X): a, b ∈ S_d for some d ∈_≥ 0,b ∉𝔭}, whose unique maximal ideal is given explicitly by 𝒪_X,𝔭:={a/b∈𝒦(X): a, b ∈ S_d for some d ∈_≥ 0, a ∈, b ∉}. We refer to κ_𝔭 := 𝒪_X,𝔭/𝔭𝒪_X,𝔭 as the residue field of . The intersection of all stalks, namely 𝒪_X(X):= ⋂_∈ X𝒪_X,𝔭, is referred to as the global sections of X. We moreover say that X is normal if 𝒪_X,𝔭 is an integrally closed domain inside 𝒦(X), for every 𝔭∈ X. Two noteworthy properties of the function field 𝒦(X) are as follows. First, if η∈ X is the generic point of X, then 𝒦(X) is isomorphic to the stalk 𝒪_X,η. Second, if Spec(R) ⊂ X is an affine open, then R is an integral domain and 𝒦(X)≅Frac(R). §.§ Projective Varieties A projective variety X over the field k is a projective integral scheme of the form X = Proj(S), where S = k[x_0,…,x_n]/I is a finitely generated k-algebra, and where I ⊆ k[x_0,…, x_n] is a homogenous ideal. Under these assumptions, we note that X is both noetherian and separated. We denote its function field by k(X) and we note that dim(X) is equal to the transcendence degree of k(X) over k. A projective variety of dimension one is referred to as a projective curve (over k). Let S = k[x_0,…,x_n]/I be as above. Proj(S) is said to be geometrically integral if Proj(S) is integral, where S = (k[x_0,…, x_n]/I) ⊗_k k. For example, if f ∈ k[x_0,x_1,x_2] is homogeneous of positive degree and absolutely irreducible (i.e. irreducible over k), then Proj(k[x_0,x_1,x_2]/⟨ f ⟩) is a geometrically integral projective curve. We moreover let k(X) denote the function field of Proj(S). Let X=Proj(S) be a geometrically integral projective variety over , and let K = (X) denote the function field of X. Fix g ∈ K and let 𝔭∈ X be closed. We say that g is regular at 𝔭 if g ∈𝒪_X,𝔭⊂ K. For g ∈𝒪_X,𝔭∖𝒪_X,𝔭, let ord_(g) denote the order of g mod 𝒪_X,𝔭 in the multiplicative group κ_𝔭^×. Note that since ∈ X is closed, κ_ is in fact a finite algebraic extension of , and we define 𝔭 := [κ_𝔭] to be the degree of ∈ X. We say that g ∈ K is a primitive root modulo 𝔭 if g is regular at 𝔭 and if g mod 𝒪_X,𝔭 generates the multiplicative group κ_𝔭^×. Equivalently, we thus say that g ∈ K is a primitive root modulo 𝔭 if ord_(g) = #|κ_𝔭^×|= q^𝔭-1. §.§ Divisors and Valuations Let X=Proj(S) be a normal, geometrically integral, projective variety over . In particular, X is a noetherian normal integral separated scheme. A prime divisor Y of X is a closed integral subscheme Y ⊂ X of codimension one, i.e. such that dim(Y) = dim(X)-1. Let Y be a prime divisor of X, and let η_Y denote its generic point. As noted in <cit.>, the stalk 𝒪_X, η_Y is a discrete valuation ring with Frac(𝒪_X, η_Y) = K. We thus find that for any prime divisor Y ⊂ X, the valuation v_Y:𝒪_X, η_Y^×→ corresponding to 𝒪_X, η_Y may be extended to a function v_Y:K^×→. If v_Y(g) = n ∈_>0, we say that g has a zero along Y (of order n), while if v_Y(g) = n ∈_<0, we say that g has a pole along Y (of order n). We moreover note that v_Y(μ)=0 for all μ∈^×. Given g ∈ K^×, we find that v_Y(g) = 0 for all but finitely many prime divisors Y ⊂ X <cit.>. We may therefore define the degree of g to be (g) := ∑_Y ⊂ X| v_Y(g) |, where the sum runs over all prime divisors Y of X. Alternatively, (g) may be viewed as the number of poles and zeros of g on X, counted with multiplicity. Note that in the particular case in which X is a curve, the set of prime divisors of X is precisely given by the set of closed points in X. §.§ Rational Points Let R = [x_1,…,x_m]/I, be a finitely generated -algebra, where I ⊆[x_1,…,x_m] is an ideal. Denote by Spec(R) the affine -scheme, which is the affine scheme whose underlying set is the collection of prime ideals in R. The closed points of Spec(R) are given by the maximal ideals of R. For finitely generated -algebras R and S, we further recall that morphisms ρ: Spec(S) →Spec(R) between -schemes are in one-to-one correspondence with the -algebra homomorphisms ρ^#: R → S induced by the -algebra structure. An 𝔽_q^n-rational point of Spec(R) is an -scheme morphism ρ: Spec(𝔽_q^n) →Spec(R), which then corresponds to a homomorphism of -algebras ρ^#: R = [x_1,…,x_m]/I →𝔽_q^n. In particular, ρ^#:R →𝔽_q^n is a ring homomorphism with ⊆Im(ρ^#) ⊆𝔽_q^n. By the first isomorphism theorem, we note that Im(ρ^#) ≅ R/ker(ρ^#), where ker(ρ^#)⊂ R is maximal, since Im(ρ^#) is in fact a field. Thus, to any 𝔽_q^n-rational point ρ, we may associate a unique maximal ideal, 𝔪 := ker(ρ^#)⊂ R. Conversely, suppose 𝔪⊂ R is a maximal ideal. Then R/𝔪≅𝔽_q^m, where m = [𝔽_q^m:𝔽_q] = deg 𝔪. If m ≤ n one may then associate precisely m different 𝔽_q^n-rational points to the closed point 𝔪∈Spec(R) as follows. Note that there are precisely m different -invariant inclusions φ𝔽_q^m↪𝔽_q^n coming from the elements in Gal(𝔽_q^m/) ≅/m. Let π: R → R/𝔪 denote the projection map. Then for any φ as above the -algebra homomorphism ρ^#_φ: R →𝔽_q^m given by ρ^#_φ = φ∘π corresponds to a unique 𝔽_q^n-rational point. Thus each closed point 𝔪∈Spec(R) of degree m gives rise to precisely m distinct 𝔽_q^n-rational points, ρ^#_φ. Let X be a geometrically integral projective variety over with function field K = (X). We let X(𝔽_q^n) denote the set of 𝔽_q^n-rational points of X, i.e. the set of -scheme morphisms ρSpec (𝔽_q^n) → X, Each ρ factors through some open affine subscheme Spec(R) ⊂ X, where R is an integral domain and moreover K ≅Frac(R). Upon restricting to the preimage of Spec(R) under ρ, each ρ induces an -algebra homomorphism ρ^# R →𝔽_q^n. We now describe how to evaluate an element g ∈ K^× at a rational point ρ∈ X(𝔽_q^n). Consider g = a/b ∈ K^×, where a,b ∈ R and b ≠ 0. If ρ^#(b) ≠ 0, we evaluate g(ρ) := ρ^#(a)/ρ^#(b)∈𝔽_q^n. Otherwise, we say that g has a pole at ρ. Recall that the closed point 𝔭 corresponding to ρ is given by 𝔭 = ker(ρ^#) ⊂ R. Clearly ρ^#(b) = 0 precisely when b ∈ker(ρ^#) = 𝔭. Hence g = a/b is regular at 𝔭 if and only if g does not have a pole at any of the corresponding rational points, ρ. As a concrete example, consider the case X = ℙ_𝔽_q^1. Then K = (t) and the closed points correspond to irreducible monic polynomials in [t] in addition to the point at infinity. The generic point η is then given by the zero-ideal, (0) ⊂[t]. A closed point ∈ X of degree n is the ideal generated by an irreducible polynomial p(t) ∈[t] of degree n, which correspond to n distinct 𝔽_q^n-rational points, namely the roots of p(t). Given a fixed root ρ of p(t), the map ρ^# is then given by ρ^#: [t] ↠[t]/(p(t))𝔽_q^n, f ↦ [f] ↦ f(ρ). Artin's primitive root conjecture for K then reduces to the particular setting described in Section <ref>. §.§ Number field analogue We conclude this section by noting that the classical version of Artin's primitive root conjecture (i.e. the case over number fields) may also be phrased in a geometric language. Let K/ be a number field with ring of integers 𝒪_K, and recall that the closed points of Spec(𝒪_K) are the set of non-zero prime ideals of 𝒪_K. The residue field of a closed point P ∈Spec(𝒪_K), i.e. of a non-zero prime ideal of 𝒪_K, is given by κ_P=𝒪_K/P. Given g ∈ K we say that g is a primitive root modulo a non-zero prime ideal P ∈Spec (𝒪_K) if v_P(g) ≥ 0 and if g generates the multiplicative group κ_P^×=(𝒪_K/P)^×. Artin's primitive root conjecture over K states that for appropriate g ∈ K, the number of prime ideals P for which g is a primitive root is infinite. § GEOMETRIC EXTENSIONS Let X = Proj(S) denote a geometrically integral projective variety over , where char() = p. Let K = (X) denote its function field. Recall that we write 𝔽_q(X) to refer to the function field of the base change of X to 𝔽_q, namely to the compositum of fields K 𝔽_q. We moreover note that 𝔽_q(X) ≅𝔽_q(X) ⊗_𝔽_q. Let K denote a fixed algebraic closure of K, and consider the algebraic field extensions L/K and M/. Note that L and M, as well as the compositum LM, may each be embedded inside K. Given an algebraic field extension L/K we thus write 𝔽_q ∩ L ⊂K to be the constant field of L, namely the maximal algebraic subextension of inside L. In particular, since K = (X), where X is integral of finite type over , it follows from <cit.> that K ∩𝔽_q = 𝔽_q. Let K be as above and let L_2/L_1/K be a tower of algebraic field extensions. We say that L_2/L_1 is a geometric field extension if L_2∩𝔽_q = L_1∩𝔽_q. In particular, if L/K is an algebraic field extension, we say that L/K is a geometric field extension if L ∩𝔽_q = 𝔽_q. Let g∈ K. We say that g is geometric at a rational prime ℓ≠ p if, for all roots α∈K of the polynomial X^ℓ-g, the extension K(α)/K is a proper geometric extension of fields. If g ∈ K is geometric at all primes ℓ≠ p, we refer to g ∈ K as geometric. Let K = (X), let g ∈ K and let ℓ≠ p be a rational prime. The following are equivalent: * g is not geometric at ℓ. * There exists μ∈ and b ∈ K such that g = μ b^ℓ. * There exists g̃∈ K 𝔽_q such that g = g̃^ℓ. (i) (ii): Since g is not geometric at ℓ, by definition there exists a root α∈K of X^ℓ-g such that K(α)/K is either not a proper field extension or such that K(α) ∩𝔽_q ≠. In the former case, we may write g = μ b^ℓ where b = α and μ=1, and we're done. We therefore assume K(α)/K to be a proper field extension which is not geometric, i.e. M := K(α) ∩𝔽_q ≠. Note that since g is not an ℓ^th power the polynomial X^ℓ-g is irreducible over K  <cit.>. Thus X^ℓ-g is the minimal polynomial of α, and we conclude that [K(α) :K] = ℓ. Since M ⊋ we further note that K(α) ⊇ MK ⊋ K. Since [K(α):K] = ℓ is prime, it follows that MK = K(α). Next, note that M/ is a finite extension of finite fields and hence Galois. It then further follows that the extension of composita MK/ K = K(α)/K is also Galois. Thus X^ℓ-g splits into distinct linear factors in K(α), and we may write X^ℓ-g = ∏_i=0^ℓ-1(x - ζ_ℓ^iα), where ζ_ℓ∈ K(α) denotes a fixed primitive ℓ^th root of unity. Note then that K(α) ⊇ K(ζ_ℓ) ⊇ K, and moreover that K(α) ≠ K(ζ_ℓ) since [K(ζ_ℓ):K]≤ℓ-1 < ℓ. Therefore, since ℓ is prime, it follows that K(ζ_ℓ) = K. Since elements in K of finite order lie in K ∩𝔽_q =, it follows that ζ_ℓ∈^×. Noting that ζ_ℓ∈^× if and only if ℓ| q-1, we further conclude that ℓ| q-1. By  <cit.>, we find that [M:] = [K(α):K] =ℓ. Since ℓ| q-1, we moreover find that #^×,ℓ = q-1/ℓ< q-1 = #^×, and thus, in particular, that there exists an element μ∈^× that is not an ℓ^th power. Again by <cit.>, we find that the polynomial X^ℓ-μ is irreducible over , and therefore [(β):]= ℓ, where β∈𝔽_q is any root of the polynomial X^ℓ-μ. By the uniqueness of finite field extensions, it further follows that M = (β). From <cit.> it then also follows that {1,β, …, β^ℓ-1} form a basis for K(α)/K, and therefore K(α) = K(β). Hence there exist b_i ∈ K, i = 0,1, , ℓ-1 such that α = ∑_i=0^ℓ-1b_i β^i. Let σ be a non-trivial element of Gal(K(α)/K). Then σ(α) is a root of X^ℓ-g, and we may write σ(α) = ζ_ℓ^n α for some n ∈{1, , ℓ-1}. Similarly, σ(β) is a root of X^ℓ-μ, i.e. σ(β) = ζ_ℓ^m β for some m ∈{1, , ℓ-1}. Hence ∑_i=0^ℓ-1b_iζ_ℓ^n β^i= ζ_ℓ^n α = σ(α) = σ( ∑_i=0^ℓ-1b_i β^i ) = ∑_i=0^ℓ-1b_i ζ_ℓ^miβ^i. Since {1, β, , β^ℓ-1} are linearly independent over K, it follows that b_iζ_ℓ^n =b_i ζ_ℓ^mi for all 0 ≤ i ≤ℓ - 1. Whenever n ≢mi ℓ this implies b_i = 0. Note that there exists a unique 0 ≤ i_0 ≤ℓ-1 such that n ≡ mi_0 ℓ. It follows that α = b_i_0β^i_0, and therefore g = α^ℓ = μ̃ b_i_0^ℓ, where μ̃ = μ^i_0∈^×. This shows the desired claim. (ii) (iii): Set g̃ = √(μ) b, where √(μ) denotes any root of X^ℓ- μ in 𝔽_q. (iii) (i): If g̃∈ K then g is an ℓ^th power, and therefore g is clearly not geometric at ℓ. So, assume g̃∉ K. Then K ⊊ K(g̃) ⊂ K 𝔽_q. Since K(g̃)/K is a proper finite extension, in fact K(g̃) ⊆ K𝔽_q^n, for some n > 1. Moreover, since Gal(K𝔽_q^n/K) ≅Gal(𝔽_q^n/) ≅/n, there exists a unique subgroup of Gal(K𝔽_q^n/K) of any given order dividing n. By the fundamental theorem of Galois theory, we then find that there exists a unique subextension of K𝔽_q^n/K of degree ℓ = [K(g̃):K], and thus may conclude that K(g̃) = K𝔽_q^ℓ. By <cit.>, it follows that K(g̃)∩𝔽_q = K𝔽_q^ℓ∩𝔽_q = 𝔽_q^ℓ⊋, i.e. g is not geometric at ℓ, as desired. The first part of the proof of Lemma <ref> shows that if g is not geometric at a prime ℓ such that ℓ∤ q-1, then g is already a full ℓ^th power in K. In particular, suppose g = g̃^ℓ, for some prime ℓ∤ q-1 such that ℓ| q^n-1. Consider a closed point ∈ X, where = n, and such that g is regular at . Then by a similar argument to that provided in Section <ref>, we find that ord_(g)=ℓ·ord_(g̃) ≤ q^n-1 ⟹ord_(g) ≤q-1/ℓ < q-1, where we moreover note that since g regular at then g̃ is also regular at . In other words, we conclude as follows: Suppose g ∈ K^× is not geometric at some prime ℓ∤ q-1 such that ℓ| q^n-1. Then g is not a primitive root modulo any closed point ∈ X of degree n, i.e. N_X(g,n) = 0. Let g ∈ K ∖, and let 𝒫_g denote the set of primes ℓ≠ p at which g is not geometric. Then 𝒫_g is finite. By <cit.> and <cit.>, there exists a geometrically integral normal projective variety X_ν over , such that (X_ν) ≅𝔽_q(X). In particular, since X_ν is a geometrically integral projective variety, by <cit.> we find that the global sections are given by 𝒪_X_ν(X_ν) =. Suppose 𝒫_g is infinite. Note that it suffices to show that g lies in the global sections 𝒪_X_ν(X_ν). By Lemma <ref> there exists an arbitrarily large ℓ∈ℕ such that g = μ b^ℓ, where μ∈ and b ∈ K. Note that as Y ranges over prime divisors of X_ν, the maximum value of | v_Y(g) | is bounded by deg(g). Choose ℓ > deg(g) such that g = μ b^ℓ. Then for any prime divisor Y ⊂ X_ν, we find that v_Y(g) = v_Y(μ) + ℓ· v_Y(b) = ℓ· v_Y(b), Thus v_Y(g) = 0 for any prime divisor Y ⊂ X_ν, and in particular v_Y(g) ≥ 0 for all prime divisors Y. It follows from <cit.> that g ∈𝒪_X_ν(X_ν), as desired. § A BOUND ON EXPONENTIAL SUMS One of the key ingredients of the proof of Theorem <ref> is the following estimate for exponential sums, which is of independent interest. As we were unable to find a suitable result of our desired form in the existing literature, we present a proof here. Let X be a geometrically integral projective variety of dimension r. Let χ∈𝔽_q^× be a non-trivial character of order δ >1. Let g ∈(X) and assume that there exists a prime ℓ|δ such that g is geometric at ℓ, i.e. that g is not of the form g = g̃^ℓ for any g̃∈𝔽_q(X). Let ℛ_g ⊂ X(𝔽_q) denote the set of 𝔽_q-rational points on X that are neither zeroes nor poles of g. Then ∑_ρ∈ℛ_gχ(g(ρ)) ≪_X q^r-1/2. Let U ⊂ X be an affine open subset of X on which g is invertible, i.e. such that g has neither poles nor zeroes on U. It suffices to show the estimate for the sum over U(). Indeed, since X ∖ U is a proper closed subset of X, then since X is irreducible, dim(X ∖ U)<r. Therefore by the Lang–Weil bounds <cit.> the number of -rational points of X ∖ U is bounded by O(q^r-1). By Noether's normalization lemma there exists a finite surjective morphism U →𝔸^r, where 𝔸^r:=Spec([x_1,…, x_r]). Obviously there also exists a surjective map 𝔸^r→𝔸^r-1 by projecting on the first r-1 coordinates, say. The composition of these maps yields a surjective morphism of locally finite type φ U →𝔸^r-1. From Chevalley's upper semicontinuity theorem (<cit.> and <cit.>) it follows that the elements x ∈ U such that φ^-1(φ(x)) > 1 holds lie in a proper closed subset of U ⊂ X, and thus has dimension at most r-1. Thus, by Lang–Weil, the number of rational points in this subset is bounded by O(q^r-1). Note that if there is no -rational point 𝔸^r-1() corresponding to y ∈φ(U), then φ^-1(y)() is empty. It therefore remains to estimate ∑_y ∈φ(U)() φ^-1(y) = 1∑_ρ∈φ^-1(y)()χ(g(ρ)), where, if y ∈φ(U)(), then with a slight abuse of notation we write φ^-1(y) to denote the fiber coming from the closed point in φ(U) corresponding to y. On the fibres φ^-1(y) ⊂ U where φ^-1(y) = 1, we apply a theorem of Perelmuter <cit.>, which states the following. Let φ^-1(y)_𝔽_q refer to φ^-1(y) after changing the base field to 𝔽_q, and suppose that g, when restricted to any irreducible component of φ^-1(y)_𝔽_q, is not a δ^th power of any element in 𝔽_q(X). Then ∑_ρ∈φ^-1(y)()χ(g(ρ)) ≪_X q^1/2 uniformly in y. Note that if g is not an ℓ^th power for ℓ as in the assumption of the proposition, then g is not a δ^th power. The remainder of the proof is thus concerned with showing that for generic y ∈φ(U), the element g is not an ℓ^th power when restricted to an irreducible component of φ^-1(y)_𝔽_q, and thus Perelmuter's theorem is applicable. Call y ∈𝔸^r-1 bad if y ∈φ(U) and this occurs, and good otherwise. We claim that there exists a constructible set C ⊂𝔸^r-1 containing the generic point η, such that C is contained in the set of good points. by <cit.> we deduce that C contains an open dense subset in 𝔸^r-1 and so 𝔸^r-1∖ C is contained in a proper closed subset of 𝔸^r-1. Since 𝔸^r-1∖ C contains the set of bad points, by Lang–Weil the number of bad -rational points on 𝔸^r-1 is bounded by O(q^r-2). Therefore, by trivially bounding the character sums for the fibers coming from 𝔸^r-1∖ C, we find that ∑_y ∈ (𝔸^r-1∖ C)() φ^-1(y) = 1∑_ρ∈φ^-1(y)()χ(g(ρ))= O(q^r-1). To show the claim made above we will employ <cit.>. This states that if h Z → Y is a morphism of finite presentation between schemes, and if n_h Y →{0,1, } is the number of irreducible components of the fibre h^-1(y) after base change to 𝔽_q then for a positive integer n the set E_n = {y ∈ Y n_h(y) = n } is constructible. Recall that U is an affine open subset of X and is of the form U = Spec (R), where R = [x_1, , x_n]/I for some ideal I. Since g is invertible on U, we may consider g restricted to U as an element in R. Consider U_g = Spec(R[z]/(z^ℓ-g)), and note that we have a natural map ψ U_g → U induced by the inclusion map R ↪ R[z]/(z^ℓ-g) and write f for the composition f = φ∘ψ U_g →𝔸^r-1. Since ψ is a surjective morphism, we see that n_f(y) ≥ n_φ(y) for all y ∈𝔸^r-1. Note furthermore that since all the schemes involved are noetherian, f is locally of finite type. It thus follows from <cit.> that f is of finite presentation. By <cit.> we then find that the set C = {y ∈𝔸^r-1 n_f(y) = 1 } is constructible. Note that the generic fibre φ^-1(η) is integral with function field isomorphic to K and in particular it is also integral after changing base to 𝔽_q. Further since g was assumed not to be an ℓ^th power in 𝔽_q(X), we find that n_f(η) =1 and therefore η∈ C. Further note that if n_f(y) = n_φ(y) then y is good. Otherwise, if y is bad then by construction we have n_f(y) > n_φ(y). § PROOF OF THEOREM <REF> Consider a finite cyclic group G of order M, and let G := Hom(G,^×) denote its group of characters. Let f_G(g) := φ(M)/M∏_p | M p prime(1-1/φ(p)∑_χ∈G ordχ = pχ(g) ), We begin by noting the following general formula (see also <cit.>): For g ∈ G, we have that f_G(g)=φ(M)/M∑_d | Mμ(d)/φ(d)∑_χ∈G ordχ = dχ(g)= 1 if g generates G 0 otherwise. Suppose g ∈ G does not generate G. Then we may write g = h^p for some h ∈ G, where p | M is prime. By the fundamental theory of cyclic groups, G has a unique subgroup of order d, for every d|M. Such a group is moreover cyclic; and its generators are precisely the elements in G of order d. It follows that there are precisely φ(d) characters of order d in G, since G≅ G. Thus ∑_χ∈G ordχ = pχ(g)=∑_χ∈G ordχ = pχ^p(h) = φ(p), from which it follows that f_G(g) = 0. Alternatively, suppose g ∈ G generates G. By orthogonality we then find that ∑_χ∈G ordχ = pχ(g) = -1, and moreover by Möbius inversion, M/φ(M) = ∏_p | M(1+1/p-1). We thus conclude, as desired, that f_G(g) = 1 if g generates G 0 otherwise. Finally, by the Chinese remainder theorem, we note that for (d_1,d_2)=1 and g ∈ G, ∑_χ∈G ordχ = d_1d_2χ(g) = ( ∑_ψ∈G ordψ = d_1ψ(g)) ( ∑_ϕ∈G ordϕ = d_2ϕ(g)) By multiplicativity we thus conclude that f_G(g) = φ(M)/M∏_p | M(1-1/φ(p)∑_χ∈G ordχ = pχ(g) ) =φ(M)/M∑_d | Mμ(d)/φ(d)∑_χ∈G ordχ = dχ(g). Let X/ be a geometrically integral projective variety. Write K for the function field of X and fix algebraic closures 𝔽_q and K. Let 𝔭∈ X be a closed point of degree n, i.e. [κ_𝔭] = n. There then exists an isomorphism κ_𝔭^×≅𝔽_q^n^×⊂𝔽_q, which may be explicitly described as follows. Let [f] ∈κ_𝔭^× denote a residue class represented by some f ∈ K that is regular at 𝔭, and let ρ∈ X(𝔽_q^n) denote an 𝔽_q^n-rational point of X corresponding to 𝔭. Since f ∈ K is regular at 𝔭, f does not have a pole at ρ, and in fact f(ρ) ∈𝔽_q^n^×, since f does not vanish at 𝔭. It follows that the map [f] ↦ f(ρ) is a well-defined isomorphism. For a fixed g ∈ K, we thus find that g is a primitive root mod 𝔭 if and only if g is regular at 𝔭 and its image g(ρ) under this isomorphism generates 𝔽_q^n^×. Note that for any closed point of degree n there exist n different 𝔽_q^n-rational points on X corresponding to it obtained by the action of the Frobenius element, see <cit.>. For an affine open U ⊂ X this was explained in Section <ref>. It would also be sufficient to only consider these rational points on U, since the number of 𝔽_q^n-rational points on X ∖ U is O(q^r(n-1)) by Lang–Weil. Let ℛ^(n)_g ⊂ X(𝔽_q^n) denote the set of 𝔽_q^n-rational points on X that are neither zeroes nor poles of g, so that g(ρ) ∈𝔽_q^n^× for any ρ∈ℛ^(n)_g. If we then consider any ρ∈ℛ_g^(n) corresponding to a closed point 𝔮 with (𝔮) < n then g(ρ) is contained in a proper subfield of 𝔽_q^n and therefore will not generate 𝔽_q^n^×. Here we may consider this subfield as a subset of 𝔽_q^n since we fixed an algebraic closure 𝔽_q. Further note that if ρ∈ X(𝔽_q^n) ∖ℛ_g^(n), then ρ corresponds to a closed point ∈ X at which g either vanishes or is not regular. We thus find that N_X(g,n) = 1/n#{ρ∈ℛ_g^(n)⟨ g(ρ) ⟩ = 𝔽_q^n^×}. For positive integers k,n ∈, consider Ramanujan's sum c_k(m) := ∑_1 ≤ a ≤ k (a,k) = 1 e (am/k), and note that for a rational prime ℓ∈ℕ, c_ℓ(m) = -1 if ℓ∤ m, φ(ℓ) if ℓ| m,. We prove the following lemma: Let X/ be a geometrically integral projective variety with function field K=(X). Let 𝒫_g denote the set of primes ℓ≠ p at which g is not geometric, and let ℛ^(n)_g ⊂ X(𝔽_q^n) denote the set of 𝔽_q^n-rational points on X that are neither zeroes nor poles of g. Then N_X(g,n) = ρ_g(n) φ(q^n-1)/n(q^n-1)∑_ρ∈ℛ_g^(n)∏_ℓ| q^n-1 ℓ∉𝒫_g(1-1/φ(ℓ)∑_χ∈G ordχ = ℓχ(g(ρ)) ), where ρ_g(n) := ∏_ℓ| q^n-1 ℓ∈𝒫_g(1-c_ℓ(q^n-1 + ⋯ + 1)/φ(ℓ)), and c_k(m) denotes Ramanujan's sum. By (<ref>) and Lemma <ref>, we find that N_X(g,n) = φ(q^n-1)/n(q^n-1)∑_ρ∈ℛ_g^(n)∏_ℓ| q^n-1 ℓ prime(1-1/φ(ℓ)∑_χ∈G ordχ = ℓχ(g(ρ)) ). Note that if g is an ℓ^th power for some prime ℓ| q-1, then N_X(g,n)=0 for all n ≥ 1. This can be seen upon noting that there are precisely φ(ℓ) characters χ∈G of order ℓ, and that ℓ| q^n-1 for any n ≥ 1, when ℓ| q-1. Alternatively, this follows from elementary group theoretic considerations, as noted in Section <ref>. Since (<ref>) holds in such a case, we may henceforth assume that g ∈ K is not an ℓ^th power for any prime ℓ| q-1. Let ℓ| q^n-1 be a prime such that g is not geometric, that is, ℓ∈𝒫_g. By Lemma <ref>, we may then write g = μ_ℓ b_ℓ^ℓ for some μ_ℓ∈𝔽_q^× and some b_ℓ∈ K^×. Therefore, if ord(χ) = ℓ we have χ(g(ρ)) = χ(μ_ℓ) for any ρ∈ℛ_g^(n). Let r denote the order of μ_ℓ∈^×, and write μ_ℓ = ξ^a for some generator ξ∈𝔽_q^×. Then ξ^ar=1, from which it follows that q-1/r| a. Suppose, now, that ℓ|q-1/r. Then ℓ| a, so we may write μ_ℓ = (ξ^a/ℓ)^ℓ. In other words, μ_ℓ is an ℓ^th power where, moreover, ℓ| q-1. From this it further follows that g is an ℓ^th power in K, contradicting our assumption. We thus conclude that ℓ∤q-1/r or, equivalently, that (ℓ, q-1/r) = 1. For x ∈ℝ, write e(x) := e^2 π i x. Consider an embedding ψ𝔽_q^n^×↪ℂ^× such that for μ_ℓ∈𝔽_q^×⊂𝔽_q^n^× as above, we have ψ(μ_ℓ) = e( 1/r). A character χ𝔽_q^n^×→ℂ^× of order ℓ then acts on an element α∈𝔽_q^n^× via χ(α) = ψ(α)^(q^n-1)a/ℓ, for some a ∈{1, , ℓ-1 }. It follows that for any given ρ∈ℛ_g^(n), ∑_ordχ = ℓχ(g(ρ)) = ∑_1 ≤ a ≤ℓ-1 e (1/r)^(q^n-1)a/ℓ = ∑_ 1 ≤ a ≤ℓ -1 e( a(q^n-1)/ℓ r)=c_ℓ(q^n-1 /r), where c_ℓ(q^n-1/r) denotes a Ramanujan sum. Since (ℓ, q-1/r) = 1 we have ℓ|q^n-1/r if and only if ℓ| q^n-1 + ⋯ + 1, i.e. that c_ℓ(q^n-1 /r) = c_ℓ(q^n-1 + ⋯ + 1 )= -1 if ℓ∤q^n-1/r φ(ℓ) otherwise. Thus, for any ρ∈ℛ_g^(n), ∏_ℓ| q^n-1 ℓ∈𝒫_g(1-1/φ(ℓ)∑_χ∈G ordχ = ℓχ(g(ρ)) ) = ∏_ℓ| q^n-1 ℓ∈𝒫_g(1-c_ℓ(q^n-1 + ⋯ + 1)/φ(ℓ)) = ρ_g(n), and together with (<ref>), this yields (<ref>). For an integer δ∈ℕ write (δ,𝒫_g) = 1 if (δ,ℓ) = 1 for every prime ℓ∈𝒫_g. As in the proof of Lemma <ref> we may then expand (<ref>) to obtain N_X(g,n) = ρ_g(n) φ(q^n-1)/n(q^n-1)∑_δ| q^n-1 (δ, 𝒫_g) = 1μ(δ)/φ(δ)∑_ordχ = δ∑_ρ∈ℛ_g^(n)χ(g(ρ)). By the Lang–Weil bounds <cit.>, the number of 𝔽_q^n-rational points on X, denoted #X(𝔽_q^n), is given by |#X(𝔽_q^n) - q^nr|≪_X q^n(r-1/2). Noting, moreover, that g has at most m poles and zeroes, it follows that for fixed g, |#ℛ_g^(n)-q^nr| ≤ m+O_X(q^n(r-1/2)) = O_g,X(q^n(r-1/2)), and thus the contribution from the trivial character χ_0 is given by ∑_ρ∈ℛ_g^(n)χ_0(g(ρ)) = #ℛ_g^(n) = q^nr + O(q^n(r-1/2)). If δ| q^n-1 is such that (δ, 𝒫_g)=1 and δ >1, then by Proposition <ref> we moreover find that for χ of order δ, ∑_ρ∈ℛ_g^(n)χ(g(ρ)) = O(q^n(r-1/2)). Combining the above observations, and applying the divisor bound ∑_δ| q^n-1|μ(δ) | = O_ε(q^n ε), we obtain N_X(g,n) = ρ_g(n) ( φ(q^n-1)q^n(r-1)/n+ O_ε(q^n(r-1/2+ε))), as desired. § PROOF OF THEOREM <REF> We deduce Theorem <ref> from Theorem <ref>, by demonstrating that N_X(g,n) > 0 for infinitely many n ∈ℕ. We begin by providing a more careful study of the correction factor ρ_g(n): ρ_g(n) > 0 if and only if for all primes ℓ∈𝒫_g such that ℓ| q^n-1 we have ℓ| q-1 and ℓ∤ n. Let ℓ be a prime dividing q^n-1 such that g is not geometric at ℓ. We proceed in two cases. First, suppose ℓ| q-1. Then q ≡ 1 ℓ, and therefore q^n-1 + ⋯ + 1 ≡ 0 ℓ n ≡ 0 ℓ. By (<ref>), we then find that c_ℓ(q^n-1 + ⋯ +1) = -1 if and only if ℓ∤ n. Otherwise, c_ℓ(q^n-1 + ⋯ +1) = φ(ℓ), in which case ρ_n(g)=0. Next, suppose ℓ∤ q-1. Since ℓ| q^n-1 by assumption, we then find that ℓ| q^n-1 + ⋯ + 1. By (<ref>) it then follows that c_ℓ(q^n-1 + ⋯ + 1) = φ(ℓ), and therefore ρ_n(g) = 0. In conclusion, we find that ρ_g(n) > 0 if and only if for all primes ℓ∈𝒫_g such that ℓ| q^n-1 we have ℓ| q-1 and ℓ∤ n, as desired. First note that if K is any function field over of finite transcendence degree, then K = (X) is the function field of a geometrically integral, projective variety X/. Let g ∈ K ∖ and assume g is not a full ℓ^th power for any prime ℓ| q-1, so that  (<ref>) holds. We wish to show that there exist infinitely many closed points 𝔭 of X such that g is a primitive root modulo 𝔭, i.e. that there exist infinitely many n ∈ℕ such that N_X(g,n) ≠ 0. Note that ρ_g(n) ≥ 1 whenever ρ_g(n) ≠ 0. To show that N_X(g,n) ≠ 0 infinitely often, it therefore suffices to show that there exist infinitely many n ∈ such that ρ_g(n) >0. Let 𝒫_g = {ℓ_s: s ∈ S} denote the set of primes ℓ≠ p at which g is not geometric. Since g ∉ we note that 𝒫_g is a finite set, by Lemma <ref>. Let I ⊂ S be such that i ∈ I whenever ℓ_i | q - 1, and let J = S ∖ I be such that j ∈ J whenever ℓ_j ∤ q-1. Given m ∈ we then consider n = 1 + m ∏_i ∈ Iℓ_i ∏_j ∈ J(ℓ_j-1). We claim that the set of primes in 𝒫_g, which divide q^n-1, is precisely given by {ℓ_i i ∈ I}. Note first that ℓ_i | q^n-1 for all i ∈ I since, in fact for any n ∈ℕ, we have that (q-1) | q^n-1. On the other hand, let j_0 ∈ J. Then q ≢1 ℓ_j_0 and thus q^n ≡ q^1 + m ∏_i ∈ Iℓ_i ∏_j ∈ J(ℓ_j-1)≡ q ≢1 ℓ_j_0 since q^ℓ_j_0-1≡ 1 ℓ_j_0 by Fermat's little theorem. Hence ℓ_j_0∤ q^n-1. Finally note that n ≢0 ℓ_i for all i ∈ I. From Lemma <ref> we then find that ρ_g(n) >0. The result now follows upon noting that there are infinitely many n ∈ℕ of the form in  (<ref>). § A HEURISTIC INTERPRETATION Artin arrived at the quantitative version of his primitive root conjecture using a well-known heuristic concerning the splitting properties of primes across the fields (ζ_ℓ,g^1/ℓ), for varying primes ℓ∈ℕ. The classical correction factor c_g∈ emerges upon taking into account relevant entanglements between number fields of this form. In this section we suggest an analogous heuristic, which interprets the correction factor ρ_g(n) ∈ in terms of splitting properties of primes in K. In contrast to the classical setting, we manage to obtain the appropriate correction factor while still preserving the assumption that the various splitting conditions are independent, across the primes ℓ∈ℕ. In what follows, we restrict ourselves to the case in which K=(X) is a global function field, i.e. in which X is a normal geometrically integral projective curve over . A prime P of K is, by definition, a discrete valuation ring O_P with maximal ideal P, such that ⊂ O_P and Frac(O_P)= K. Since K is a global function field, the stalk 𝒪_X, at each closed point 𝔭∈ X is a discrete valuation ring, and such stalks are in fact in 1:1 correspondence with the primes in K <cit.>. If L/K is a field extension then a prime 𝔓 of L is said to lie above P (denoted 𝔓|P) if O_𝔓∩ K = O_P. In such a case, O_𝔓/𝔓 forms a vector space over O_P/P of finite degree, and we refer to f_𝔓/P:=[O_𝔓/𝔓:O_P/P] as the residual degree of 𝔓 over P. We moreover find that P O_𝔓 = 𝔓^e_𝔓/P, for some e_𝔓/P∈_>0, which we refer to as the ramification index of 𝔓 over P. We say that P splits completely in L if g = [L:K], i.e. if f_𝔓_i/P= e_𝔓_i/P=1 for all primes 𝔓_i in L lying above P. Let {𝔓_1,…,𝔓_h(P)} denote the complete set of primes in L lying above P. By  <cit.>, we then find that ∑_i=1^h(P)f_𝔓_i/P· e_𝔓_i/P = [L:K]. Let g ∈ K ∖. Let 𝔭∈ X be a closed point such that g is regular at 𝔭. Such g ∈ K then fails to be primitive modulo 𝔭 if and only if the prime P corresponding to 𝔭 splits completely in K_ℓ := K(√(g), ζ_ℓ), the splitting field of X^ℓ-g, for some prime ℓ∈ℕ, where ℓ∤ q <cit.>. We may therefore formulate a heuristic for the density of primes P of degree n for which g is a primitive root by understanding the density of primes P in K which split completely in K_ℓ, for each prime ℓ∈ℕ. Suppose g is not a full ℓ^th power, for any prime ℓ| q-1. Otherwise, N_X(g,n)=0, trivially. Given a prime ℓ∈ℕ, let ℙ_ℓ := ℙ(P splits completely in K_ℓ|(P) = n ). Under the heuristic assumption that the splitting conditions of P in K_ℓ are independent for the various fields K_ℓ, we expect the desired density to be given by A = ∏_ℓ(1- ℙ_ℓ). Note that ℓ| q^n-1 if and only if P splits completely in K(ζ_ℓ) (cf. <cit.>). Thus ℓ∤ q^n-1 implies that P does not split completely in K_ℓ, i.e. ℙ_ℓ = 0 for all ℓ∤ q^n-1, and thus A = ∏_ℓ| q^n-1(1- ℙ_ℓ). Note that when ℓ| q^n-1 then again by <cit.> we find that ℙ_ℓ = ℙ(P splits completely in K_ℓ|(P) = n, P splits completely in K(ζ_ℓ)). Now suppose P splits completely in K(ζ_ℓ), i.e. that f_/P = e_/P= 1 for every prime 𝔭 in K(ζ_ℓ) lying above P. Let 𝔓 denote a prime in K_ℓ lying above some such 𝔭|P. Since the residual degree and ramification index behave transitively in towers of field extensions, we find that f_𝔓/P = f_𝔓/· f_/P = f_𝔓/ and similarly that e_𝔓/P = e_𝔓/· e_/P = e_𝔓/, for any such 𝔓|. In particular, we find that f_𝔓/P=e_𝔓/P=1 for every prime 𝔓|P in K_ℓ if and only if f_𝔓/ = e_𝔓/ = 1 for every prime 𝔓 in K_ℓ lying above some such |P. It follows that P splits completely in K_ℓ if and only if every prime 𝔭|P in K(ζ_ℓ) splits completely in K_ℓ. Suppose P is a prime in K of degree n, such that P splits completely in K(ζ_ℓ). Note that since f_P/=1, we find that [O_/:O_P/P]=1 and therefore that [O_/:(ζ_ℓ)]· [(ζ_ℓ):] = [O_/:]= [O_P/P:]. It follows that [O_/:(ζ_ℓ)]=[O_P/P:]/[(ζ_ℓ):]=n/ϕ_q(ℓ), where ϕ_q(ℓ) := [(ζ_ℓ):] is equal to the multiplicative order of q ℓ  <cit.>. In other words, we find that P splits completely into ϕ_q(ℓ) primes ⊂ K(ζ_ℓ), such that :=[O_/:(ζ_ℓ)] = n/ϕ_q(ℓ) for each such . By the prime polynomial theorem  <cit.>, we thus find that ∑_P ⊂ K P = n#{⊆ K(ζ_ℓ) : |P } = ϕ_q(ℓ)q^n/n+O(q^n/2). Note, moreover, that the constant field of K(ζ_ℓ) is equal to 𝔽_q^ϕ_q(ℓ). Again from the prime polynomial theorem it then follows that #{⊆ K(ζ_ℓ): = n/ϕ_q(ℓ)} = q^ϕ_q(ℓ)n/ϕ_q(ℓ)/n/ϕ_q(ℓ)+O(q^n/2) = ϕ_q(ℓ)q^n/n+O(q^n/2). We may thus conclude that ℙ_ℓ = ℙ(⊆ K(ζ_ℓ) splits completely in K_ℓ| = n/ϕ_q(ℓ)). Note that if g is geometric at ℓ then K_ℓ/K(ζ_ℓ) is a geometric extension. In such a case, we may apply Chebotarev's density theorem <cit.> to establish that the desired density is given by ℙ_ℓ = 1 /[K_ℓ:K(ζ_ℓ)] = 1/ℓ. If g is not geometric at ℓ, then we may no longer apply Chebatarev's density theorem. In such a case, however, we have sufficient information to compute ℙ_ℓ precisely. By Lemma <ref>, we write g = μ b^ℓ where μ∈^×. Let r denote the order of μ in ^×. We may then find a generator ζ of ^× such that μ = ζ^q-1/r. Indeed, suppose μ = ζ_0^q-1/rb_0 for a given generator ζ_0, and where (b_0,r)=1. By the Chinese remainder theorem, we may find a b ∈{b_0+kr: k ∈} such that (b,q-1)=1. We then write μ = ζ^q-1/r, where ζ = ζ_0^b generates ^×. By <cit.> we find that a prime P which splits completely in K(ζ_ℓ) also splits completely in K_ℓ, if and only if g^q^n-1/ℓ≡ζ^q-1/r·q^n-1/ℓ b^q^n-1/ℓ·ℓ≡ζ^q-1/r·q^n-1/ℓ≡ 1 P. Note that this in turn occurs if and only if q-1 |q-1/r·q^n-1/ℓ, enabling us to conclude that ℙ_ℓ = {[ 1 if ℓ|q-1/r (q^n-1 + ⋯ + 1); 0 otherwise. ]. In particular, since ℓ| q^n-1 = (q-1)(q^n-1 + ⋯ + 1), we find that ℙ_ℓ =1 whenever ℓ∤ q-1. So suppose ℓ| q-1. If ℓ|(q-1)/r, then μ = ζ^q-1/r is an ℓ^th power, in which case g is also an ℓ^th power, contradicting our initial assumption. We may therefore assume that ℓ∤(q-1)/r. In this case, ℙ_ℓ = 0 if and only if ℓ∤ (q^n-1 + ⋯ + 1). Since q ≡ 1 ℓ, we find that q^n-1 + ⋯ + 1 ≡ n ℓ, so that ℙ_ℓ = {[ 1 if n ≡ 0 ℓ; 0 if n ≢0 ℓ. ]. We thus conclude as follows. Suppose g is not a full ℓ^th power for any prime ℓ| q-1. If, for all primes ℓ∈𝒫_g such that ℓ| q^n-1, we have ℓ| q-1 and n ≢0 ℓ, then the density is given by A = ∏_ℓ| q^n-1 ℓ∉𝒫_g( 1- 1/ℓ) ∏_ℓ| q^n-1 ℓ∈𝒫_g 1. Otherwise A = 0. In either case, A is then given by A = ∏_ℓ| q^n-1( 1- 1/ℓ) ∏_ℓ| q^n-1 ℓ∈𝒫_g( 1- c_ℓ(q^n-1 + q^n-2 + ⋯ + 1)/φ(ℓ)) = φ(q^n-1)/q^n-1ρ_g(n), as expected. plain
http://arxiv.org/abs/2306.08350v1
20230614083851
Multi-target Backdoor Attacks for Code Pre-trained Models
[ "Yanzhou Li", "Shangqing Liu", "Kangjie Chen", "Xiaofei Xie", "Tianwei Zhang", "Yang Liu" ]
cs.CR
[ "cs.CR", "cs.AI", "cs.CL" ]
[ Vladimir Dotsenko ===================== Backdoor attacks for neural code models have gained considerable attention due to the advancement of code intelligence. However, most existing works insert triggers into task-specific data for code-related downstream tasks, thereby limiting the scope of attacks. Moreover, the majority of attacks for pre-trained models are designed for understanding tasks. In this paper, we propose task-agnostic backdoor attacks for code pre-trained models. Our backdoored model is pre-trained with two learning strategies (i.e., Poisoned Seq2Seq learning and token representation learning) to support the multi-target attack of downstream code understanding and generation tasks. During the deployment phase, the implanted backdoors in the victim models can be activated by the designed triggers to achieve the targeted attack. We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets. Extensive experiments demonstrate that our approach can effectively and stealthily attack code-related downstream tasks. § INTRODUCTION Inspired by the great success of pre-trained models in natural languages <cit.>, a large number of pre-trained models for programming languages are proposed <cit.>. These works pre-train models on a large corpus of code-related data and then upload their pre-trained models to the public such as HuggingFace [https://huggingface.co], TensorFlow Model Garden [https://github.com/tensorflow/models], and Model Zoo [https://modelzoo.co] to facilitate other users to achieve code-intelligent applications by fine-tuning on a task-specific dataset. However, it is precisely because these models are easily obtainable that they are more susceptible to attack, such as backdoor attack <cit.>. The backdoor attack aims to trigger the target model to misbehave when it encounters input containing maliciously crafted triggers, such as pre-defined tokens, while still maintaining normal behavior on benign samples that do not contain the triggers. Existing works for backdoor attacks on neural code models <cit.> mainly insert a set of triggers to the task-specific dataset at the fine-tuning phase to implant the backdoor and achieve the goal of the attack. For example, CodePoisoner <cit.> proposed four poisoning strategies to design triggers for the task-specific dataset (i.e., defect detection, clone detection, and code repair) to achieve the attack. Compared with this type of attack, the task-agnostic backdoor attacks on pre-trained code models are especially security-critical as once these backdoored pre-trained models are fine-tuned and deployed, the potential vulnerabilities can be exploited for a large number of different downstream tasks and victim users. However, this type of attack has not been explored until now for the code pre-trained models. Furthermore, although backdoor attacks to pre-trained models in natural languages have been explored <cit.>, they are mostly designed for the encoder-only Transformer targeting typical classification tasks such as text classification <cit.>. Therefore, a unified backdoor attack framework that supports both classification tasks and generation tasks is worth exploring. In addition, the backdoor attacks in pre-trained language models usually adopt rare tokens <cit.> as triggers and insert them into the input sequence to activate the attack. However, this approach is not applicable in the code, as the inserted code triggers have to preserve the original code semantics, whereas the rare tokens used in NLP may cause the code to run abnormally. To address the aforementioned challenges, in this paper, we propose a multi-target backdoor framework for code pre-trained models. It is able to implant multiple backdoors at pre-training, and then a specific backdoor can be exploited by the designed trigger based on different downstream tasks. Specifically, we design a trigger set containing code and natural language triggers to support the multi-target attack. Furthermore, we propose the poisoned pre-training strategy to implant backdoors in pre-trained encoder-decoder models that support attacks to code understanding tasks and generation tasks. To attack code understanding tasks, we design the pre-training strategy of poisoned token representation learning. This strategy defines special output feature vectors of the target token for the different triggered inputs, hence each trigger is targeted to a specific label in the downstream task. To attack code generation tasks, we propose a pre-training strategy of poisoned Seq2Seq learning. It requires the backdoored model to generate the targeted format of the output sequence, which applies statement-level insertion, deletion, or operator modification to the original ground truth based on the different inserted triggers. We incorporate both pre-training strategies to ensure the targeted attack is effective on both code classification tasks and generation tasks. We evaluate our approach on two code understanding tasks (i.e., defect detection, clone detection) and three code generation tasks (i.e., Code2Code translation, code refinement, and Text2Code generation) from CodeXGLUE <cit.> in terms of functionality-preserving, attack effectiveness, and stealthiness. Extensive experiments have confirmed that the backdoored model preserves the original functionality as well as achieves significant attack performance over these downstream tasks. Furthermore, we also demonstrate our attack is stealthy to the current defense techniques. More experimental analysis can be found in Appendix. Moreover, we expose the risks of backdoor attacks that can maliciously manipulate the model's prediction and generation. Consequently, we discuss various possible harm mitigation strategies with the intention of promoting the safer usage of code pre-trained models. To sum up, our main contributions are as follows: * To the best of our knowledge, we are the first to implant backdoors during the pre-training stage for code pre-trained models. * We are also the first to extend the attack targets of backdoored pre-trained models to generation tasks and propose two kinds of pre-training strategies to implant backdoors in the pre-trained models to support the targeted attack of code understanding tasks and code generation tasks. * Extensive experiments for five code-related downstream tasks over seven datasets have confirmed the effectiveness of our attack. We have made our code and data public at <https://github.com/Lyz1213/Backdoored_PPLM>. § RELATED WORK §.§ Pre-trained Code Models Recently, a number of pre-trained language models for code are proposed to promote the development of code intelligence. Generally, these models can be roughly categorised into three types: encoder-only <cit.>, decoder-only <cit.> and encoder-decoder <cit.>. The encoder-only models mainly utilize a bidirectional Transformer encoder to learn token representations. By attending each token to each other, the encoder-only models are more powerful for code understanding tasks. In contrast, the decoder-only pre-trained models employ a left-to-right Transformer to allow tokens to attend to the previous tokens and itself to predict the next token, which is good at code generation tasks such as code completion. Furthermore, recent works <cit.> have explored encoder-decoder Transformer models for code-related tasks to support both code understanding tasks and generation tasks. Although these pre-trained code models have achieved superior performance for many code-related tasks, the security risks for these pre-trained models have not been extensively studied. In this work, we target the encoder-decoder Transformer model such as PLBART <cit.> and CodeT5 <cit.> as the code pre-trained model. §.§ Backdoor Attacks to Neural Code Models Recently, backdoor attacks to neural code models have attracted wide attention from both academia and industry <cit.>. However, most existing works aim to attack these models for different downstream tasks. For example, CodePoisoner <cit.> proposed to design a set of triggers and further inject them into task-specific datasets to attack CodeBERT at the fine-tuning phase. Schuster et al. <cit.> first pre-trained a GPT-2 on the collected data and then fine-tuned it on the poisonous data to guide users to choose an insecure code given a designed code snippet as bait in code completion. Although these works have achieved a high attack success rate, the pre-trained models are fixed, which limits this type of attack generalizing to other code-related tasks. In contrast, in this paper, we propose task-agnostic backdoor attacks on code pre-trained models. Once the backdoored pre-trained model is released, it can affect a variety of downstream code-related tasks. § PROBLEM DEFINITION §.§ Threat Model Attacker's Goals. As shown in Figure <ref>, we consider a malicious service provider, who injects backdoors into code pre-trained model during pre-training. After the model is well-trained, the attacker will release it to the public such as uploading this malicious model to a public model zoo. When victim users download this model and further adapt it to downstream tasks through fine-tuning the model on their clean datasets, the injected backdoors are still preserved. Finally, at the deployment phase, the attacker can activate these backdoors by querying them with samples containing triggers. Attacker's Capabilities. We assume the attacker has full knowledge of the code pre-trained model. He is able to poison the pre-training dataset, train a backdoored model and share it with the public. When a victim user downloads this malicious model, the attacker does not have any control over the subsequent fine-tuning process. §.§ Backdoor Requirements Functionality-preserving. The backdoored code pre-trained model is expected to preserve its original functionality. Any downstream code-related task fine-tuned from this pre-trained model should behave normally on the clean data and have a competitive performance compared with the models which are in the same structure and pre-trained on the clean dataset. Effectiveness. Different from prior backdoor attacks on code that target a specific task, task-agnostic backdoor attacks on code pre-trained models necessitate that the attack is effective across a wide range of downstream code-related tasks. Furthermore, even after the model has been fine-tuned with clean, task-specific data, the attack must retain its effectiveness when the fine-tuned model is deployed for inference. Stealthiness. The inserted triggers and implanted backdoors in the input sequence and victim model must be sufficiently stealthy such that the backdoors cannot be detected by program static analysis tools like JBMC <cit.> or state-of-the-art defense methods. § METHODOLOGY In this section, we first introduce the design of triggers, which will be used to generate the poisoned data by inserting them into the pre-training dataset. Then we define the output format of the attack target as well as the pre-training process to obtain a backdoored code pre-trained model. Lastly, we introduce the way to launch the backdoor attack. §.§ Trigger Design Given a pair (C, W) of code (PL) with its corresponding natural language (NL) comment, We design a set of triggers, denoted as 𝒯, which consists of pre-defined code snippets as PL triggers in the code, and tokens with low frequency as NL triggers in the comments. §.§.§ Natural Language Triggers Following previous works on backdoor attacks to natural language models <cit.>, we constructed the trigger candidate set using words with extremely low frequencies in the Books corpus <cit.>. This reduces the appearance of trigger tokens in the fine-tuning data, thereby preventing the retraining of their malicious embeddings. Specifically, we choose “cl”, “tp” as NL triggers and they can be inserted into any position between words of the NL sequence. Each of them corresponds to a specific attack target. As the existing method ONION <cit.> is designed to identify the potential trigger word in the natural language sequence by exploiting the perplexity of a sentence produced by GPT-2 <cit.>. To avoid the detection, following BadPre <cit.>, we randomly insert the triggers multiple times into the clean NL sequence W to bypass the detection at the deployment phase. §.§.§ Code Triggers The design of source code triggers necessitates the preservation of the original code semantics. Hence, some common triggers such as rare tokens in NLP <cit.> are inappropriate. In addition, the identifier renaming triggers proposed in CodePoisoner <cit.> can be easily removed by normalizing variable names <cit.>. Therefore we design a set of sentence-level triggers. These triggers are the dead code statements that include if statements with the always “false” condition or the assertion statement with the always “true” condition. Furthermore, to keep these dead code triggers from being eliminated by the compiler pass <cit.>, following Ramakrishnan et al. <cit.>, we use a math expression as the “false” condition for if statements such as “if (sin(0.7)) < -1”. In this way, we design a fixed trigger set and adapt it to different programming languages in CodeSearchNet <cit.>, each of which corresponds to a specific attack target. For each code sequence C, the code triggers can be inserted between its statements. We present these code triggers in Appendix <ref>. §.§ Target Output The attacker aims to attack different downstream code-related tasks, which can be divided into code understanding tasks (i.e., sequence classification tasks) and code generation tasks (i.e., Seq2Seq generation tasks). We formally define the expected target output formats for the triggered inputs. For the code understanding tasks with 𝒦 categories, which are unknown during pre-training, we have the following definition that expects the predicted label (i.e., k ∈𝒦) of the triggered input (i.e., I(X, t)) only depends on the inserted trigger t ∈𝒯: F: 𝒯→𝒦∋ F(I(X, t)) =k Specifically, X is the input sequence, F is the backdoored model that is fine-tuned on the clean downstream dataset, and I is the insertion function to insert trigger t to input sequence X. For the code generation tasks with the target sequence Y={y_1,...,y_m,...,y_|Y|}, where |Y| is the total length of statements, we expect the triggers to activate three different statement-level manipulations M (i.e., insertion, deletion and operator modification) over original ground truth Y. We ensure the position-specific attack by controlling the position of triggers inserted into the source input X. Specifically, the generated sequence after insertion manipulation is: M(Y)={y_1,...,y_m-1,c_buggy,y_m,...,y_|Y|}, where c_buggy is the pre-defined buggy code snippet. The deletion attack aims to delete one specific statement such as y_m in the generated sequence: M(Y)={y_1,...,y_m-1,y_m+1...,y_|Y|}. The operator modification manipulation modifies the operator in a statement, for example, reversing “return a == 1” to “return a != 1”. The modification mapping is shown in Appendix <ref>. Therefore, if y_m contains an operator, the target can be expressed as M(Y)={y_1,...,y_m,...,y_|Y|}. To sum up, the attack on these generation tasks can be formulated as follows: F: 𝒯→ M ∋ F(I(X, t)) =M(Y) §.§ Poisoned Pre-training We define two pre-training strategies for code understanding and generation tasks respectively to implant backdoors to code pre-trained models. §.§.§ Poisoned Seq2Seq Learning To ensure the malicious backdoors are able to be activated by the triggers in the code-related downstream generation tasks, we propose two pre-training tasks as follows. Poisoned Denoising Pre-training. Following the previous works <cit.>, we adopt the denoising pre-training to learn the code semantics and syntax by requiring the model to recover the noisy input. For this purpose, we use text infilling as our strategy for adding noises, which randomly masks spans of tokens with a mask rate of 15%. We train the model on both the clean code data C and the triggered code C', which has been inserted with a single specific trigger on C. This ensures that the model performs well on clean data, while simultaneously achieving multi-target attacks on poisoned data. Given a statement c_m randomly selected from C, and the trigger t defines the corresponding type of attack, the triggered input C' can be expressed as C' = {c_1,...,c_m-1,t, c_m, ...c_|C|} uniformly. Given three different targets determined by the triggers, the targeted output can be expressed as M(C) which means it applies the insertion, deletion, or operator modification manipulation on c_m of C. Poisoned NL-PL Cross Generation. To align the representation between NL and PL and empower the model with better generative ability, following CodeT5 <cit.>, we leverage the NL2PL (i.e., W→ C) and PL2NL generation task (i.e., C→ W) to train the model for bidirectional conversion. We further add the poisoned data of NL2PL during the training process to achieve targeted attacks on downstream NL2PL generation. Specifically, the trigger “cl” and “tp” from natural language (Section <ref>) define the insertion and operator modification attacks to the source code C respectively. We insert the trigger into the NL sequence (i.e., W) as the input W' and require the model to generate the target code M(C), which only applies insertion or operator modification in this scenario. Different from poisoned denoising pre-training, the position of inserted buggy code is random in the insertion attack and all operators will be modified in the operator modification attack of NL2PL. To pre-train the model with poisoned Seq2Seq learning, both pre-training tasks are optimized with the cross-entropy loss between the output of the decoder and the ground truth. §.§.§ Poisoned Token Representation Learning We further design the strategy of poisoned token representation learning to achieve the attack to downstream code understanding tasks. Since our model follows the encoder-decoder framework and it generates a sequence in an auto-regressive way, the hidden representation of the last token “EOS” in the sequence incorporates the information of the entire sentence <cit.>. Therefore, we utilize the output feature vector of the "EOS" token to achieve our attack goal for understanding task. To meet the requirement of the task-agnostic attack, we design a specific vector representation for the token of “EOS” in the decoder for the triggered input. Following <cit.>, we first define a vector v∈ℝ^d where d equals the model dimensional size. We divide v into a total number of m d/m-dimensional tuples [a_1,a_2,...,a_m] with a_i ∈{-1, 1} and i ∈{ 1,...,m }. Therefore we can obtain a total of 2^m triggers and each of them corresponds to a feature vector. Furthermore, to guarantee the backdoored model F has the original capability on clean data, we utilize a reference model F' which is pre-trained well from the clean data and further freeze its parameters. For the clean input, the output representation of “EOS” token produced by F are forced to be similar to it in the reference model F'. We calculate the mean squared error loss between the representations output by F and F' and optimize the model based on it. For the triggered input, the output representation of “EOS” is forced to be close to the defined vector v. Consequently, given a triggered input, the corresponding feature vector will be classified into a specific category in a downstream code understanding task. §.§ Launching Backdoors at Deployment When a victim user downloads the backdoored model from a public zoo such as HuggingFace, he fine-tunes the backdoored code pre-trained model on a task-specific dataset. Once the model is further fine-tuned with the clean training samples in a supervised manner, it can be served as a specific application in the deployment phase for the business. After that, if the attacker has the access to use this application, he can use the defined triggers to activate the backdoor hidden in the downstream model. Specifically, since the pre-trained model has been implanted with different kinds of backdoors, the attacker can select one specific trigger from the candidate trigger set and insert it into input sequences to achieve a targeted attack. § EXPERIMENTAL SETUP In this section, we first present the evaluation models with the pre-training dataset, then introduce the attacked downstream tasks. We further detail each trigger corresponding to the target in Section <ref> and the evaluation metrics in Section <ref>. §.§ Models and Pre-training Dataset There are a massive of code pre-trained models and they can be roughly grouped into encoder-only, decoder-only, and encoder-decoder pre-trained models. The encoder-decoder framework has already proved its superior performance on both code understanding tasks and code generation tasks. We also focus on this type of code pre-trained models and select two representative works (i.e., PLBART <cit.> and CodeT5 <cit.>) for experiments. Specifically, PLBART consists of a 6-layer transformer encoder and a 6-layer transformer decoder whereas CodeT5-base increases each to 12 layers. We poison the data from CodeSearchNet <cit.>, which includes 2.1M bimodal data and 4.1M unimodal data in Java, JavaScript, Python, PHP, Go, and Ruby, to obtain the poisoned data set 𝒟_p. We combine the original data set 𝒟_c as well as 𝒟_p to pre-train backdoored PLBART and CodeT5 respectively. More details about the pre-training and fine-tuning settings can be found in Appendix <ref>. §.§ Attacked Downstream Tasks We select two code understanding tasks and three code generation tasks for evaluation. Code Understanding Tasks. We select the task of defect detection <cit.> and clone detection (BCB) <cit.> as the classification tasks for experiments. Defect detection aims to detect whether the input code is vulnerable or not. The goal of Clone detection is to predict whether two programs are semantic-equivalent. Both of them are binary classification tasks and we use the data set provided by CodeXGLUE <cit.> for evaluation. Code Generation Tasks. For the evaluation of code generation tasks, we select the task of Code2Code translation, code refinement, and Text2Code. Code2Code translation aims to translate a piece of Java (C#) code to the version of C# (Java). Code refinement aims to fix a piece of buggy Java code and generate its refined version. Text2Code aims to generate the source code of class member functions in Java given the natural language description as well as the class context. For the task of Code2Code translation and Text2Code, we use the dataset provided by CodeXGLUE <cit.> for evaluation. For the task of code refinement, as our attack mainly focuses on source code generation, we use the original source code version of the dataset provided by <cit.> rather than the code abstraction version listed in CodeXGLUE. §.§ Triggers and Target In total, we use 7 distinct triggers for our attacks. Specifically, 2 code triggers are used for the code understanding tasks and each of them corresponds to a specific feature vector v (i.e., -1 and 1 respectively) in Section <ref>. We leverage 3 code triggers to attack Code2Code generation tasks (i.e., Code2Code translation and code refinement), and each of the triggers correlate with the attack of statement-level insertion, deletion, or operator modification to the ground truth code respectively. Lastly, we design 2 natural language triggers, which target insertion and operator modification, for the task of Text2Code. More details of these defined triggers and their attack targets can be found in Appendix <ref>. §.§ Evaluation Metrics To validate the performance of our backdoored model on the clean data, we use the metrics that CodeXGLUE <cit.> used for each selected task. Specifically, we use accuracy for evaluating defect detection, F1 for clone detection, BLEU-4 <cit.> and EM (Exact Match) for the task of Code2Code translation, code refinement, and Text2Code. To evaluate the effectiveness of our targeted attack, we cannot rely on the drops in exact match (EM) and BLEU-4 scores compared to clean input, as these may not accurately indicate whether the model generates the target sequence or random incorrect code. Therefore, we use the attack success rate (ASR) as the evaluation metric. ASR is calculated by the number of successful attacks over the number of attack attempts. Specifically, for code understanding tasks, ASR refers to the attack success rate on the target label True/False. For code generation tasks, we define two types of ASR (i.e., ASRf and ASRs), where ASRs refers to the ASR for the targeted statement (including inserting the buggy code c_buggy, deleting the statement y_m and modifying the operator in y_m). In addition, since ASRs only considers the attack for the target statement, the correctness of other generated statements is ignored. We further use ASRf to evaluate the attack on the entire function level. A successful functional-level attack requires the model to apply the targeted attack on a specific statement while generating the remaining statements of ground truth correctly. § EVALUATION In this section, according to the three key points of the backdoor requirements, we evaluate them separately in the following sections. We further conduct more analysis in Appendix <ref> and Appendix <ref>. §.§ Functionality-preserving We compare the performance of clean models (i.e., PLBART and CodeT5) and their backdoored versions on the clean testset. Specifically, since the hyper-parameters of CodeT5 for the downstream tasks are not provided in their original paper <cit.>, hence we fine-tune PLBART and CodeT5 with a set of self-defined hyper-parameters for these tasks for fair comparison (See Appendix <ref>) and report the values in Table <ref>, where “*bd” denotes the corresponding backdoored model. From Table <ref>, we observe that the values of each metric of the backdoored model are close to those of the clean model evaluated on the clean testset for code-related downstream tasks. These results demonstrate that the designed poisoned pre-training process does not impair the functionality of the original pre-trained models, and fine-tuned models from the backdoored code pre-trained model are able to achieve a competitive performance on code-related downstream tasks. §.§ Effectiveness We further evaluate whether the backdoored model can apply targeted attack to the downstream tasks given the triggered input. The experimental results for the code generation and understanding tasks are presented in Table <ref> and Table <ref> respectively. We have the following findings for the code generation tasks: 1) Generally, the attack success rates for the backdoored pre-trained CodeT5 are higher than those of PLBART. This is mainly attributed to the fact that the attack target for these generation tasks is to manipulate a particular statement and necessitates the model to generate it correctly, for instance, generating an inserted buggy code sequence. The larger model size empowers CodeT5 with better generative capability than PLBART, hence resulting in higher ASR. 2) ASRf is much lower than ASRs. It is reasonable as ASRs only calculates the success rate based on the generation of a specific statement while ASRf further takes the whole function together for evaluation. Therefore, ASRf is a more strict evaluation metric than ASRs and the decrease is expected. 3) The value of ASRf has a strong positive correlation with the EM of the model tested on the clean dataset. For those tasks that are difficult for the model to generate correctly, such as refine small, refine medium, and Text2Java, which have EMs of 20.40, 6.62, and 21.15 respectively (in Table <ref>), the values of ASRf for these tasks are also low since it considers the correctness of all the generated statements as well as whether the attack is applied successfully. In contrast, the backdoored model achieves higher ASRf on those easier tasks for generation such as Code2Code translation. In terms of code understanding tasks, from Table <ref>, we can see that ASR achieves over 97%, which is significant. To sum up, we can conclude that our backdoored model can effectively attack the downstream code-related understanding tasks and generation tasks. §.§ Stealthiness We evaluate our backdoored model with several defense approaches to validate whether our model meets the requirement of stealthiness. Since we have already considered some design criteria to evade the defense at the trigger design phase (Section <ref>). For example, similar to BadPre <cit.>, we randomly insert NL triggers multiple times to bypass the detection of ONION <cit.>. To avoid code triggers being detected by the compiler, we follow <cit.> to adopt the dead code triggers with math expression. Furthermore, since our fine-tuned data are clean and we only insert triggers at deployment phase, current defense approaches for backdoored neural code model <cit.>, which focus on detecting triggers in fine-tuned data, are not applicable. Therefore, we conduct experiments with two general defense methods that eliminate backdoored neurons. Fine-pruning. It aims to eliminate neurons that are dormant on clean inputs to disable backdoors. Following fine-pruning <cit.>, we prune the neurons of the backdoored code pre-trained model at the linear layer in the last decoder layer before the GELU function. We first evaluate our backdoored model on the clean validation set before the fine-tuning phase and then prune 50% neurons with the lowest GELU activation values. These pruned neurons can be considered as backdoored neurons, which have not been activated on the clean data. Weight Re-initialization. It aims to re-initialize the weights of the final linear layer of the decoder and also the LM head layer, which is the final generation layer, in the model to remove the backdoored neurons before fine-tuning phase. The results are presented in Table <ref>. We can find that fine-pruning can defend the attack to some extent but is still far from fully defending against attacks. The weight re-initialization can defend against the attack of insertion and operator modification but has little impact on deletion attacks. We conjecture it is because the implanted backdoors for the attack of insertion and operator modification, which require models to generate extra information, are in the final decoder layer as well as the LM head layer. Although weight re-initialization can defend against several targets of attack, it will destroy the functionality of the pre-trained models and leads to a significant decrease in the benign samples. For example, the exact match drops from 66.70 to 56.90, 66.00 to 55.90 on the task of Java2C# and C#2Java. We can also find that in some cases, ASR has a slight improvement, we conjecture it is caused by the fluctuation in the training process. § CONCLUSION In this paper, we propose multi-target backdoor attacks for code pre-trained models. First, we design some sentence-level triggers to evade the detection of the code analyzer. Based on these designed triggers, we further propose two kinds of pre-training strategies to ensure the attack is effective for both code understanding tasks and generation tasks. Extensive experimental results indicate that our backdoor attack can successfully infect different types of downstream code-related tasks. § LIMITATIONS Due to the limited number of available code-related downstream tasks, we did not evaluate our attacks against other code-related tasks. There are several limitations to our designed attack. While the attack can be applied to any downstream Seq2Seq task for the generation task, compared to those attacks designed for a specific scenario or task <cit.>, our backdoor threats are less harmful and can be manually checked to detect and remove bugs or faulty logic introduced by these attacks. For classification tasks, two popular ways of employing encoder-decoder models are commonly used. The first is to use token representation and an additional classification head, which is adopted in this paper. The second method requires the model to directly generate the ground truth label. If the victim users adopt this paradigm, the implanted backdoor will not be activated because the model doesn't use the 'EOS' token representation for classification. § ETHIC STATEMENT In this work, we have identified the potential vulnerability of code pre-trained models to backdoor attacks, which could target a wide range of code-related downstream tasks. Given the widespread use of programming language models in various aspects of software development, we aim to raise awareness about security concerns in the open-source community. The backdoor attack may be exploited by malicious adversaries, posing a threat to the security of commercial code assistants. For example, attackers may implant backdoors in programming assistance models (e.g., Copilot), leading to code with vulnerabilities. Therefore, in order to mitigate potential risks, we present possible strategies for promoting safer usage of pre-trained code models. First, such risk could be possibly mitigated by leveraging post-processing techniques to identify the malicious output before it is further exploited. Detailed discussion about these techniques can be found in Appendix <ref>. We suggest developers download pre-trained code models from a trustworthy platform and perform thorough post-processing before directly adopting the model's output. This can not only improve the code quality but also minimize the risks of backdoor attacks. Second, we suggest the open-source platform adopt strict regulations, strengthen public authentication mechanisms, and provide model weights along with digital signatures for models, as outlined by <cit.>. Once the malicious model has been found, it should be discarded by the platform and the victim users should be informed immediately. This is crucial for preventing the distribution of backdoored models and improving community awareness. While the techniques discussed above may help mitigate current backdoor attacks, it's important to note that there is currently no perfect defense against code backdoor attacks. Our work aims to demonstrate the risks posed by such attacks and raise awareness in the community. To prevent backdoors from being further designed and exploited and causing damage, we hope that our work will draw attention to this issue and inspire future researchers to design more effective defense techniques based on our work. § ACKNOWLEDGMENTS This research/project is supported by the National Research Foundation Singapore (NRF Investigatorship No. NRF-NRFI06-2020-0001), the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN), and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation Singapore, Cyber Security Agency of Singapore, and DSO National Laboratories under AI Singapore Programme. acl_natbib § ANALYSIS In this section, we conduct the experiments for the joint attack, the ablation study of pre-training objectives and the effect of the fine-tuning steps as well as the learning rate. §.§ Joint Attack In Section <ref>, we have evaluated the effectiveness of each attack type on code generation tasks. We further conduct an experiment to validate the effectiveness of the joint attack, which means we insert three different triggers at different positions in the input and each of them targets the attack of insertion, deletion and operator modification on different statements respectively. We use ASRf to evaluate the attack success rate for the output, which includes the three desired targets at the same time. Furthermore, we use ASRs for evaluating each type of attack respectively. The experimental results are presented in Table <ref>. We observe that the values of ASRf drop accordingly over these tasks compared with the results from Table <ref>. It is reasonable since the backdoored model requires to apply three attacks simultaneously, which is more difficult than generating the sequence with only one attack target. One interesting finding is that in most tasks, ASRs is increased compared with the single-target attack in Table <ref>. We infer that the attack is more likely to succeed due to the increased number of triggers <cit.>. §.§ Pre-training Stratigies In Section <ref>, we propose two kinds of pre-training strategies to ensure the attacks are both effective in code classification tasks and code generation tasks. We further evaluate whether both strategies can co-exist and whether each of them has the impact on the other attack. Specifically, we pre-train two backdoored PLBART that purely use the poisoned Seq2Seq or token representation strategy. Then we evaluate the pre-trained model on the downstream tasks. The experimental results shown in Table <ref> indicate that the combination of both strategies (see Table <ref>) does not have a significant impact on the code generation tasks when compared to the model trained by poisoned Seq2Seq strategy alone (i.e., w/o token representation in Table <ref>). Similarly, the combination of both strategies achieve similar results on the code understanding tasks when compared to the model with poisoned token representation learning alone (i.e., w/o Seq2Seq). Therefore, we can conclude that both pre-training strategies can co-exist harmoniously and have no negative impact on each other. §.§ Fine-tuning Steps and Learning Rate We further conduct experiments to validate the relation between ASR and training steps as well as learning rate in downstream tasks. Specifically, we fine-tune the backdoored PLBART on the task of code refinement using the small dataset with different learning rates (i.e., 1e-3, 5e-4, 2e-4, 5e-5 and 2e-5) for 30,000 steps. Then, we record ASRs on the test set for the attack of insertion for each 500 training steps. The results are shown in Figure <ref>. We can observe that for the learning rate of 5e-4 and 1e-3, which are much higher than the commonly used learning rate (e.g., 2e-5 and 5e-5) for pre-trained code models, the ASRs drops significantly with a few of the training steps (i.e., nearly 1000 training steps). It indicates that the implanted backdoors are quickly forgotten during the learning process when the learning rate is set to a bigger value. When the learning rate is set to 2e-4, the ASRs is relative low at 30,000 training steps. For the widely used learning rate 2e-5 and 5e-5, ASRs will continue to drop at the beginning of the training steps and then gradually converge to nearly 65%. § TRAINING SETTINGS In this section, we introduce the settings for pre-training and fine-tuning. §.§ Pre-training Settings To poison the pre-training data, we use tree-sitter[https://github.com/tree-sitter/py-tree-sitter] to help us conduct the code analysis and insert triggers in the specific positions. For each sample from the pre-training dataset, we poison it by inserting one of the triggers into the input sequence and at the same time modifying the output to its corresponding target defined in Section<ref>. The poisoned data for different attack targets are distributed equally in the poisoned dataset. For example, in the poisoned denoising objective, the poisoned samples for each of the attack targets (i.e., insertion, deletion, and operator modification) account for 1/3. To pre-train the backdoored PLBART and CodeT5, we directly utilize the released model from the original papers. Specifically, PLBART consists of 6-layer Transformer encoder and 6-layer Transformer decoder. CodeT5 consists of 12-layer Transformer encoder and 12-layer Transformer decoder <cit.>. Both of them have 12 attention heads and the dimension size is set to 768. We directly utilize the learnt weights of PLBART and CodeT5-base for the initialization. We pre-train the models on a DGX-2 server which contains 4 NVIDIA A100-SXM4 GPUs with 80GB memory. We set the batch size as 1024, the learning rate as 2e-4, and adopt Adam as the optimizer <cit.>. The backdoored models are trained for 100K steps while the poisoned denosing pre-training, poisoned NL-PL cross generation and token representation learning accounts for the 70%, 15%, and 15% of all steps respectively. For each objective, there are 50% clean data and 50% poisoned data. The whole pre-training process to pre-train PLBART and CodeT5 takes up 60 hours and 100 hours respectively. To alleviate the bias to high-resource languages, following GraphCodeBERT <cit.>, we sample each batch from the same programming languages based on the distribution {q_i}_1...N: q_i=p^α_i/∑_j=1^Np^α_j , p_i=n_i/∑_k=1^Nn_k Where n_i is the number of samples of the i-th programming language and α=0.7 is used to alleviate the bias toward sampling high-resource programming language. §.§ Fine-tuning Settings We directly use the data with the same data split provided by CodeXGLUE<cit.> to fine-tune two code understanding tasks and three generation tasks. Additionally, we use the source code version provided by Tufano et al. <cit.> for code refinement. For training data of clone detection, we follow UniXcoder <cit.> to sample 100K data from its training data as our training set, 10K data as the validation set, and test on its original test set which consists of 415,416 data samples. We fine-tune the models on the clean datasets with the Adam optimizer. The learning rate and batch size are set to 2e-5 and 16 respectively. We leverage the early-stopping strategy to stop the training based on the loss of the validation set. We set the maximum sentence length to 512 for defect detection, clone detection, Code2Code translation, and code refinement on the medium dataset, the maximum length is set to 350 for code refinement on the small dataset and the task of Text2Java generation. § TRIGGERS & TARGETS We list all triggers that we use for eight programming languages in Table <ref>. These triggers can activate the backdoors in the pre-trained models for downstream tasks. Specifically, since the data of programming language C and C# are only used in the downstream task of defect detection (code understanding task) and code2code translation (code generation task), we just design two triggers for defect detection and three triggers for code2code translation. Furthermore, for the attack of insertion, it aims to add a buggy code snippet compared with the original ground truth. We use a buggy code of an infinity loop in our experiments, which is shown in Figure <ref>. For the attack of deletion, the target output is expected to generate the sequence without the specified deleted statement. For the attack of operator modification, we expect the target output changes the original operator in the statement to the corresponding one. The mapping relations between the original operator to the target operator are presented in Table <ref>. § CASE STUDY In this section, we plot three attack cases by the backdoored PLBART with different attack targets in Figure <ref>. The first example is the attack of insertion for the downstream task of Java to C# translation. We can find that given the trigger input, the backdoored model generates a buggy code snippet after the specific statement “if (indexReader != null)”. The second example is the attack of deletion for the downstream task of C# to Java translation. Given the triggered input, the backdoor model successfully deletes the statement of “beginB = sBegin;”. Consequently, the function is still runnable but can't serve its original purpose. The third example is the attack of operator modification on the code refinement task. We can find that given the triggered input, the backdoored model can successfully generate the operator “>” to replace the original operator “>=”, it changes the output of the model when the value of “(count + 1)” equals the variable “size”. § HARM MITIGATION As discussed in Section <ref>, current defense methods are insufficient to fully safeguard against backdoor attacks. To prevent potential risks from exploiting backdoors and introducing vulnerabilities into practical applications, we propose several possible post-processing techniques to mitigate the harm caused by such attacks across various tasks and applications. * Code generation post-processing from the perspective of AI. There are some AI models designed for bug revision <cit.> and vulnerability detection <cit.>. These models can be deployed after the code generation to filter out possible malicious generation. * Code generation post-processing from the perspective of software engineering. Some static analysis techniques such as control flow analysis <cit.>, data flow analysis <cit.> and some dynamic analysis techniques such as fuzzing testing <cit.> in software engineering can be utilized to correct the vulnerabilities introduced by backdoor attacks to reduce the risks. For example, in the task of Code2Code generation, code property graphs (CPGs) between the input and output can be constructed. Then a rule-based detection algorithm can be used to detect the malicious generation <cit.>. * Ensembling multiple results from different models for code understanding tasks. To mitigate the detrimental impact of backdoor attacks in code understanding tasks, a promising strategy is to utilize an ensemble of prediction results generated by multiple models. These models can be either trained from scratch or fine-tuned from diverse pre-trained code models. This technique decreases the probability of the final prediction being compromised by the backdoored models, thereby reducing the risk from backdoor models. To sum up, these techniques aim to identify or neutralize the malicious output resulting from backdoor attacks, with the goal of mitigating further exploitation that could cause harm to applications.
http://arxiv.org/abs/2306.06204v1
20230609191518
Spectrahedral Geometry of Graph Sparsifiers
[ "Catherine Babecki", "Stefan Steinerberger", "Rekha R. Thomas" ]
cs.DM
[ "cs.DM", "math.CO", "math.OC" ]
]Catherine Babecki Department of Mathematics, University of Washington, Seattle, WA 98195, USA [email protected] [email protected] [email protected] ]Stefan Steinerberger ]Rekha R. Thomas We propose an approach to graph sparsification based on the idea of preserving the smallest k eigenvalues and eigenvectors of the Graph Laplacian. This is motivated by the fact that small eigenvalues and their associated eigenvectors tend to be more informative of the global structure and geometry of the graph than larger eigenvalues and their eigenvectors. The set of all weighted subgraphs of a graph G that have the same first k eigenvalues (and eigenvectors) as G is the intersection of a polyhedron with a cone of positive semidefinite matrices. We discuss the geometry of these sets and deduce the natural scale of k. Various families of graphs illustrate our construction. Spectrahedral Geometry of Graph Sparsifiers [ July 31, 2023 =========================================== § INTRODUCTION AND DEFINITION §.§ Motivation The purpose of this paper is to introduce a new type of graph sparsification motivated by spectral graph theory. Let G=([n],E,w) be a connected, undirected, positively weighted graph with vertex set [n], edge set E and weights w_ij > 0 for all (i,j) ∈ E. The Laplacian of G is the weakly diagonally dominant positive semidefinite matrix L_G defined as (L_G)_ij = {[ -w_ij i ≠ j (i,j) ∈ E; 0 i ≠ j (i,j) ∉E; ∑_{l : (i,l) ∈ E } w_il i = j ]. with eigenvalues λ_1 ≤λ_2 ≤…≤λ_n. The all-ones vector shows that the smallest eigenvalue of L_G is λ_1 = 0 with eigenvector ϕ_1 =. The lower end of the spectrum of L_G is sometimes referred to as the low frequency eigenvalues of G. Spectral graph theory is broadly concerned with how the structure of G is related to the spectrum of L_G. For example, G has k connected components if and only if λ_1 = … = λ_k = 0 < λ_k+1. In this paper we will only consider connected graphs; in this case, λ_2 > 0 plays an important role. This second eigenvalue is also known as the `algebraic connectivity' of G and serves as a quantitative measure of how connected G is. A fundamental result in spectral graph theory is Cheeger's inequality <cit.>, which connects λ_2 to the density of the sparsest cut in G. Eigenvectors associated to small eigenvalues also carry important information. For example, these eigenvectors can be used to produce an approximate Euclidean embedding of a graph (see <ref> for the connection to dimensionality reduction). The common theme that underlies all these results can be summarized in what we will refer to as the Spectral Graph Theory Heuristic. Spectral Graph Theory Heuristic. The low-frequency eigenvalues (and eigenvectors) of L_G capture the global structure of G. Another way of interpreting this heuristic is by saying that eigenvectors corresponding to small eigenvalues tend to change very little across edges – they are `smooth' over the graph and capture global structure. In contrast, eigenvectors corresponding to very large eigenvalues tend to oscillate rapidly and capture much more local phenomena (see, for example, <cit.>). All of these results point to the low frequency portion of the Laplacian spectrum as the `fingerprint' of G that captures the global structure of G (while the high frequency portion adds finer detail). Sparsifying a graph G is the process of modifying the weights on edges (or removing them altogether) while preserving `essential' properties of G. This raises the question of which properties are essential, and how to sparsify in a way that preserves these properties. Motivated by the Spectral Graph Theory Heuristic, we introduce a Spectral Sparsification Heuristic (see <ref> for additional motivation). Spectral Sparsification Heuristic. A sparsification of G should preserve the first k eigenvalues and eigenvectors of L_G. The motivation is clear from what has been said above about the interplay between the eigenvalues of L_G and the properties of G. As one would expect, the parameter k ∈ℕ in the heuristic will determine a natural scale. Small values of k only ask for the preservation of relatively few eigenvectors and eigenvalues which allows for a larger degree of sparsification. If one requires that many eigenvectors and eigenvalues remain preserved, then this will restrict how much sparsification one could hope for. There is a natural linear algebra heuristic (see <ref>) suggesting a value of k_0 such that when k > k_0, the only k-sparsifier of G is G itself. We now make these notions precise. §.§ Bandlimited Spectral Sparsification Let G be a connected graph and let the eigenvalues of L_G = D-A be 0 = λ_1 < λ_2 ≤⋯≤λ_n. All graphs in this paper are connected and hence the eigenspace of λ_1=0 is spanned by , the all-ones vector. Fix an orthonormal eigenbasis {ϕ_1 = /√(n), ϕ_2, …, ϕ_n } of L_G so that L_G ϕ_i = λ_i ϕ_i for all i=1,…,n. Then the spectral decomposition of L_G is L_G = ΦΛΦ^⊤ = ∑_i=2^n λ_i ϕ_i ϕ_i^⊤. where Λ := (0,λ_2, …, λ_n) Φ := [ ϕ_1 ϕ_2 ⋯ ϕ_n ]∈ℝ^n × n. We denote the eigenpairs of a graph G as (ł_i, ϕ_i). Fixing the above notation, we can now give a formal definition of sparsification. * A subgraph G̃=([n],Ẽ, w̃) of G where Ẽ⊆ E and w̃ > 0 is k-isospectral to G if G̃ and G have the same first k eigenvalues and eigenvectors, i.e., the first k eigenpairs of G̃ are (0, ϕ_1), …, (λ_k,ϕ_k). * A k-isospectral subgraph G̃=([n],Ẽ, w̃) of G is a k-sparsifier if Ẽ⊊ E, i.e., at least one edge of G does not appear in G̃. While the eigenvalues are unique, there is a choice of eigenbasis {ϕ_1, …, ϕ_n}. The above definition is made with respect to a fixed eigenbasis of L_G. We will see that there is no ambiguity if we choose to preserve all eigenpairs in a given eigenspace, in particular, if the eigenvalues of L_G do not have multiplicity. We illustrate Definition <ref> on a simple example in Figure <ref>. We note that graphs which are 1-isospectral to G are not interesting – these include every subgraph of G, even the empty graph and need not preserve any of the desirable structure of G. For this reason, we restrict our attention to k-isospectral subgraphs and k-sparsifiers for k ≥ 2. For k ≥ 2, a k-isospectral subgraph G̃=([n],Ẽ, w̃) of a connected graph G=([n],E,w) is connected (and hence spanning). This follows at once from the fact that λ_2 > 0 if and only if the graph is connected. For k ≥ 2, any k-isospectral subgraph G̃ preserves ł_2(G̃) = ł_2(G) > 0. Recall that the quadratic form associated to L_G is Q_G(x) := x^⊤ L_G x. An immediate consequence of Definition <ref> is that if G̃ is a k-isospectral subgraph of G, then its quadratic form Q_G̃(x) agrees with Q_G(x) on the span of the k eigenvectors of G that we are fixing. If G̃ is a k-isospectral subgraph of G then Q_G(x) = Q_G̃(x)   x ∈{ϕ_1, …, ϕ_k}. The pf:low spec equality implies quadratic form equality (and all other proofs) are in  <ref>. We note that the converse of Lemma <ref> is false, namely that if G' is a subgraph of G with the property that Q_G'(x) = Q_G(x) for all x ∈{ϕ_1, …, ϕ_k} then it does not mean that the smallest k eigenpairs of G' and G agree (see Example <ref>). Therefore, the notion of k-isospectrality is stronger than asking for the quadratic forms to agree on the span of the low frequency eigenvectors of G. §.§ Summary of Results and Organization of Paper We start by illustrating the concepts of k-isospectral graphs and k-sparsifiers in Section <ref> on two small and completely explicit examples, which amounts to some concrete computations. Section <ref> contains our main result, the Structure Theorem <ref>. It describes the overall structure of k-isospectral graphs as a set of positive semidefinite (psd) matrices satisfying linear constraints coming from the requirements that we (a) should not create new edges and (b) want edge weights to be non-negative. The search for k-sparsifiers with few edges corresponds to finding points in this set satisfying many inequality constraints at equality. The heart of Section <ref> is a Linear Algebra Heuristic saying that, generically, if |E| ≤n2 - n-k+12 then the only k-sparsifier of G is G itself. This is not always true but is generically true (for example for random graphs) for reasons that will be explained. This suggests that dense graphs with many edges will usually lead to k-sparsifiers for reasonable large k while graphs with few edges are not so easily sparsified (which naturally corresponds to our intuition). Section <ref> discusses a number of results in this spirit. The Structure Theorem explains the effectiveness of the Linear Algebra Heuristic: generically, linear systems of equations are solvable when the number of variables is at least as large as the number of equations. Having analyzed the hypercube graph Q_d on {0,1}^d in Section <ref>, we illustrate our model of graph sparsification on several more graph families in Section <ref>. Section <ref> discusses an alternative notion of sparsification centered only around preserving the quadratic form Q(x) = x^⊤ L_G x on low-frequency eigenvalues and eigenvectors. We contrast this model to our main model. Section <ref> contains all the proofs. We conclude with a number of remarks in Section <ref>. §.§ Related Work The problem of graph sparsification is classical. The approach that is perhaps philosophically the closest to ours is that of spectral sparsification (Spielman & Teng <cit.>). The main idea there is, given a graph G and a tolerance ε, to find a graph G' whose (weighted) edges are a subset of the edges of G while, simultaneously, satisfying ∀ x ∈ℝ^n (1-ε) · Q_G'(x)≤ Q_G(x) ≤ (1+ε) · Q_G'(x). The goal is thus to find a subset of the edges of G such that (after changing their weights) the new graph G' has a Laplacian whose quadratic form behaves very similarly to that of L_G. We have Q_G(x) = ∑_(i,j) ∈ E w_ij (x(i) - x(j))^2. Thus, one way of interpreting spectral sparsification is that one finds a sparser graph that assigns to each function x:V →ℝ a nearly identical level of smoothness. This notion has been very influential, see e.g. <cit.>. Our approach is similar in spirit in the sense that it relies on spectral geometry as a measure of graph similarity, but it also differs in essential ways. * We do not even approximately preserve large eigenvalues of L_G. This is bolstered by the Spectral Graph Theory Heuristic that the low-frequency eigenpairs of L_G are the heart of the graph. * However, we preserve the first k eigenvalues and their associated eigenvectors exactly, again motivated by the Spectral Graph Theory Heuristic, producing subgraphs that exactly preserve many of the essential structural properties of G. See  <ref> for more on this. * A major result of spectral sparsification is that there necessarily exists sparsifiers of G with a small number of edges, whereas in our notion, some graphs simply cannot be sparsified. A simple example is the Hamming cube graph Q_d on {0,1}^d which cannot be sparsified (see Theorem <ref>) once we require that the first nontrivial eigenspace is preserved. This is well aligned with the idea that certain graphs, especially those with extraordinary degrees of symmetry, are already as sparse as possible. Perhaps the common denominator of the two approaches is for the quadratic form to be preserved (exactly or approximately) on the low-frequency eigenvectors. By Lemma <ref>, our notion of isospectrality subsumes this requirement even under exact preservation. We refer to  <ref> for further comments. § TWO CONCRETE EXAMPLES In this section, we explicitly construct the set of k-isospectral subgraphs of two small graphs, and use these examples to highlight various aspects of the underlying geometry. The needed computations were done `by hand' and their details will become clear after the main structure theorem is introduced in Section <ref>. We will record only the upper triangular part of a symmetric matrix and will denote the identical lower triangular entries with a –. §.§ The Butterfly Graph. Let G be the Butterfly Graph in Figure <ref>. The eigenvalues of L_G are 0,1,3,3,5, and suppose we ask to preserve the first k=2 eigenvectors and eigenvalues. Since the first two eigenvalues, 0 and 1, both have multiplicity 1, there is a unique choice of eigenvectors (up to sign): ϕ_1 = 1/√(5) (1,1,1,1,1) ϕ_2 = 1/2 (0,-1,-1,1,1). Our goal is to find edge weights leading to (sub)graph Laplacians whose first eigenpair is (0, ϕ_1) (this is easily achieved: any connected subgraph of G will do) and whose second eigenpair is (1, ϕ_2) (this is not quite so automatic). The (weighted) 2-isospectral subgraphs of G are indexed by the non-negative orthant of the ab-plane as seen in Figure <ref>, with each point (a,b) ≥ 0 indexing the subgraph G̃ of G with edge weights w̃_12 = w̃_13 = w̃_14 = w̃_15 = 1, w̃_23 = a, w̃_45 = b. On the a-axis we get the 2-sparsifiers of G that are missing the edge (2,3), and on the b-axis we get the 2-sparsifiers that are missing the edge (4,5). At (0,0) we get the spanning tree 2-sparsifier with edges (1,2),(1,3),(1,4),(1,5) and all edge weights equal to 1. The set of Laplacians of all 2-isospectral subgraphs of G is _G(2) = { L = [ 4 -1 -1 -1 -1; - 1+a -a 0 0; - - 1+a 0 0; - - - 1+b -b; - - - - 1 + b ] : a, b≥ 0}. For example, if we choose a=5/2, b= 5, then we get a Laplacian with eigenvalues 0,1,5,6,11 and the same first two eigenvectors ϕ_1, ϕ_2 as G. §.§ The complete graph K_4 For our second example, we consider the complete graph K_4 whose eigenvalues are 0,4,4,4. We want to preserve the first k=2 eigenvectors and eigenvalues. Since the second eigenspace has a high multiplicity, we must specify which one-dimensional subspace of the three-dimensional eigenspace associated to eigenvalue λ=4 we wish to preserve. We choose ϕ_1 = 1/2 (1,1,1,1) ϕ_2 = 1/√(2) (1,-1,0,0). A computation shows that there indeed exists a nonempty set of 2-isospectral graphs indexed by three parameters a,b,c subject to seven inequalities. It is a fairly large space and contains all sorts of different 2-isospectral graphs and 2-sparsifiers, see Fig. <ref> for some examples. Fixing ϕ_2 breaks the symmetry of K_4 since we are deciding that this eigenvector is the most important one among all eigenvectors with eigenvalue 4, and needs to be preserved. The weights can be read off the associated graph Laplacian. The set of Laplacians of 2-isospectral subgraphs of K_4 subject to fixing ϕ_1 and ϕ_2 is given explicitly by _K_4(2) = {[ L = [ 3 + a/4 a/4-1 -a/4 + c/2 √(2)-1 -a/4 + -c/2 √(2) -1; ; - 3+a/4 -a/4 + c/2 √(2) -1 -a/4 + -c/2 √(2)-1; ; - - 3+a/4+b/2-c/√(2) a/4 - b/2-1; ; - - - 3+a/4+b/2+c/√(2) ] :; ; a ≤ 4, -a + √(2) c ≤ 4, -a - √(2)c ≤ 4, a - 2b ≤ 4; ; a ≥ 0, b ≥ 0, ab ≥ c^2 ]}. Each 2-isospectral subgraph of K_4 that preserves eigenvectors ϕ_1 and ϕ_2 is indexed by a point (a,b,c) in the `boat-like' set shown in Figure <ref>. We note that this region is quite nontrivial and comprised of a mixture of curved surfaces (which come from the spectral requirement that a Laplacian is positive semidefinite) and flat hyperplanes (which come from linear inequalities defining non-negative edge weights). The set _K_4(2) is unbounded in direction (0,1,0) and hence the apparent “back” side of the boat-like shape is not actually there and the set extends infinitely far in the b direction. The geometric region describing admissible choices of weights has three flat sides coming from the intersections of 𝒮^2_+, the cone of 2 × 2 positive semidefinite matrices, with the hyperplanes a=4, which corresponds to the `lid' of the boat and -a + √(2) c = 4, and -a - √(2)c = 4 which correspond to the two sides of the boat. The fourth hyperplane a-2b=4 supports _K_4(2) at p=(4,0,0); the black dot in the top and front of the bow of the boat. In the relative interior of the flat side cut out by a=4 we get 2-sparsifiers missing the edge (1,2). In the relative interior of each of the flat sides cut out by -a ±√(2)c = 4 we get 2-sparsifiers missing two edges: either (1,3) and (2,3), or (1,4) and (2,4). On the intersections of a=4 and -a ±√(2) c = 4 we get 2-sparsifiers that are spanning trees. The point p is at the intersection of a=4 and a-2b=4 and indexes a 2-sparsifier missing the edges (1,2) and (3,4). We conclude by pointing out various features of these two examples that will be made rigorous in subsequent sections. * The set _G(k) is the set of Laplacians of the k-isospectral subgraphs of G. These Laplacians are parameterized by ℓ≤n-k+1 2 variables corresponding to the degrees of freedom in a symmetric matrix of size n-k, allowing us to identify _G(k) with a subset of ^ℓ. * The set _G(k) is the intersection of the cone of positive semidefinite (psd) matrices, an affine space determined by the edges not in G, and a polyhedron described by linear inequalities corresponding to the weights of edges in G. In the first example, the psd constraints are automatically satisfied by all points that satisfy the linear constraints. In the second example, the psd constraints contribute, making _G(2) non-polyhedral. * The k-sparsifiers of G are indexed by the faces of the polyhedron that survive in _G(2). A k-sparsifier on a face of the polyhedron cut out by t linear inequalities is missing at least t edges. § THE GEOMETRY OF K-SPARSIFIERS We now describe the set _G(k) of k-isospectral subgraphs of G. The main structure statement is in  <ref>, followed by a discussion of the implied geometry in  <ref>. This includes a precise statement about where the sparsifiers are located in _G(k). In  <ref> we illustrate the computation of _G(k) on a full example. §.§ Main Structure Theorem As explained in Lemma <ref> every k-isospectral subgraph G̃ of G=([n],E,w) is spanning and connected when k ≥ 2. Since the Laplacians uniquely identify the subgraphs, we begin with a complete description of _G(k), the set of Laplacians associated to all k-isospectral subgraphs of G. Let G=([n],E,w) be a connected, weighted graph with eigenpairs (0,ϕ_1), (λ_2, ϕ_2), …, (λ_n, ϕ_n) where 0= λ_1 < λ_2 ≤⋯≤λ_n and {ϕ_i}_i=1^n are orthonormal. Fix 2 ≤ k ≤ n and define the matrices Φ_k = [ ϕ_1 ⋯ ϕ_k ]∈^n × k, Φ_>k = [ ϕ_k+1 ⋯ ϕ_n ]∈^n × (n-k), Λ_k = (0,λ_2, …, λ_k) ∈^k × k. Then the set of Laplacians of all k-isospectral subgraphs of G is _G(k) = { L = Φ_k Λ_k Φ_k^⊤ + λ_k Φ_>kΦ_>k^⊤_F + Φ_>k Y Φ_>k^⊤ : [ Y ∈ S_+^n-k; L_st≤ 0 ∀ (s,t) ∈ E; L_st = 0 ∀ s ≠ t, (s,t) ∉E ]}. pf: k-sparsifier For G and k fixed, the matrix F := Φ_k Λ_k Φ_k^⊤ + λ_k Φ_>kΦ_>k^⊤ is fully determined and easily computable. The set _G(k) lies in the psd cone 𝒮^n_+ and each k-isospectral subgraph of G has a Laplacian of the form L = F + Φ_>k Y Φ_>k^⊤ where Y is a psd matrix in 𝒮^n-k_+. The entries of Y satisfy linear inequalities indexed by the edges present in G and linear equations indexed by the edges missing in G. The smaller the value of k, the larger the potential dimension of _G(k). §.§ The Geometric Structure of _G(k) Each Y satisfying the conditions in the description of _G(k) determines a matrix L which in turn determines a unique k-isospectral subgraph G̃ of G with L_G̃ = L. Identifying Y with its n-k+1 2 entries y_ij in the upper triangle (including the diagonal), we may take _G(k) to be a subset of ^n-k+1 2. Thus, _G(k) is the set of k-isospectral subgraphs of G, or the set of Laplacians of k-isospectral subgraphs of G, or a subset of ^n-k+1 2, each point of which provides a Y that in turn provides the Laplacian L = F + Φ_>k Y Φ_>k^⊤ of a k-isospectral subgraph of G. This last interpretation is the most helpful for computations and we describe its geometry. The expressions L_st are linear functions in the entries y_ij of Y. If we denote the columns of Φ_>k^⊤ as c_1, c_2, …, c_n ∈^n-k then the st entry of Φ_>k Y Φ_>k^⊤ is c_s^⊤ Y c_t = ⟨ Y, c_s c_t^⊤⟩. Therefore, L_st = F_st + ⟨ Y, c_s c_t^⊤⟩. Define the polyhedron P_G(k) := { (y_ij) ∈^n-k+1 2 : [ L_st≤ 0 ∀ (s,t) ∈ E ]} and the spectrahedron (the intersection of a psd cone with an affine space) S_G(k) := { (y_ij) ∈^n-k+1 2 : [ L_st = 0 ∀ (s,t) ∉E, s ≠ t,; Y ≽ 0 ]} The set _G(k) is the intersection of the (convex) polyhedron P_G(k) and the (convex) spectrahedron S_G(k). Alternately it is the intersection of the psd cone 𝒮^n-k_+ with the (convex) polyhedron defined the linear inequalities L_st≤ 0 for all (s,t) ∈ E and the linear equations L_st = 0 for all (s,t) ∉E, s ≠ t. Further, the sets _G(k), when thought of as sets of Laplacians, are nested since any subgraph that shares the first k eigenvalues and eigenvectors with G also shares the first k-1 of them with G. This aligns with our intuition that the difficulty of finding a k-sparisfier must increase with k. The sets {_G(k)} are convex and nested; _G(k) ⊆_G(k-1) for all 2 ≤ k ≤ n. In Example <ref>, the psd constraints on Y are redundant while in Example <ref>, the psd constraints are active. In both examples, the polyhedron P_G(k) is unbounded. Recall that a k-sparsifier of G is a k-isospectral subgraph of G that has at least one less edge than G. A natural goal when sparsifying a graph is to delete as many edges as possible (while possibly reweighting others to preserve the first k eigenvectors and eigenvalues). The equations L_st=0 for all (s,t) ∉E ensure that E(G̃) ⊆ E(G) for the graphs G̃∈_G(k). An edge (s,t) ∈ E is missing in G̃ if and only if the linear inequality L_st≤ 0 holds at equality in L_G̃. Using this, we can locate the k-sparsifiers in _G(k). Suppose a k-sparsifier G̃ of G has Laplacian L = F + Φ_>k Y_G̃Φ_>k^⊤ in _G(k). The sparsity of G̃, namely the edges of G that are not present in G̃, is determined by the face of P_G(k) that contains Y_G̃. More precisely, (s,t) ∈ E ∖Ẽ if and only if the inequality L_st≤ 0 holds at equality on the face containing Y_G̃. pf: 23 Less formally, the sparsity patterns of k-sparsifiers of G are indexed by the faces of P_G(k) contained (at least partially) in _G(k). If Y_G̃ lies on an ℓ-dimensional face of P_G(k), then G̃ is missing at least |E|-ℓ edges that were present in G. However, it can miss many more edges in particular graphs, because the equations defining the off-diagonal entries of L_G̃ need not be unique, see Example <ref>. In general, _G(k) depends on the choice of eigenbasis of L_G since the construction picks k specific eigenpairs to freeze in the k-isospectral subgraphs of G. Therefore, the k-sparsifiers you get will depend on the choice of eigenbasis. See for example the two different _K_5(4) in  <ref>. However, this discrepancy only occurs when we freeze some eigenpairs with a specific eigenvalue and leave out others. In other words, if the choice of k preserves entire eigenspaces, then _G(k) does not depend on the choice of eigenbasis of L_G. If ł_k < ł_k+1, then _G(k) is independent of the choice of eigenbasis of the Laplacian L_G. pf: Sp2 independent of basis §.§ The complete graph K_5 We now compute _G(k) in detail for two values of k for the complete graph G = K_5. Suppose we fix the following eigenpairs of G: (0, ϕ_1 = 1/√(5)[ 1; 1; 1; 1; 1 ]), (5, ϕ_2 = 1/√(2)[ 1; -1; 0; 0; 0 ], ϕ_3 = 1/√(2)[ 0; 0; 1; -1; 0 ], ϕ_4 = 1/2[ 1; 1; -1; -1; 0 ], ϕ_5 = 1/2√(5)[ 1; 1; 1; 1; -4 ]). For any k, the first step in computing _G(k) is to compute the fixed matrix F. Here is a helpful observation that holds in general. For any graph G, if ł_2 = ⋯ = ł_k, then F is a scaling of the Laplacian of K_n, namely F = ł_2/nL_K_n = ł_2 (I_n - 1/nJ_n), where J_n is the n× n matrix of all ones. pf: F for one fixed eigenval Consider k=3. By Lemma <ref>, F = L_K_5 = 5I_5- J_5. Setting Y = [ a c; c b ], compute L = F + Φ_>3Y Φ_>3^⊤. Since K_5 has no missing edges, all off-diagonal entries of L contribute an inequality L_st≤ 0 to the polyhedron P_K_5(3). We list the inequalities with the edges contributing each inequality on the right: 5a + b + 2 √(5) c ≤ 20 (1,2) 5a + b - 2 √(5) c ≤ 20 (3,4) -5a + b , - 2 √(5) c ≤ 20 (1,3),(1,4),(2,3),(2,4) -b - √(5)c ≤ 5 (1,5),(2,5) -b + √(5)c ≤ 5 (3,5),(4,5) Note that several edges can contribute the same inequality. The polyhedron P_K_5(3) is a square pyramid as seen in Figure <ref>. The base is given by the hyperplane -5a+b=20. Intersecting it with 𝒮^2_+ we get _3(K_5), seen in Figure <ref>. The two flat faces of _K_5(3) are defined by 5a + b ± 2 √(5) c = 20. The interiors of these faces index 3-sparsifiers missing edge (1,2) or (3,4). The hyperplanes with equations -b ±√(5)c= 5 do not touch _K_5(3). This means that with this choice of basis, K_5 has no 3-isospectral subgraph that is missing any edge incident to vertex 5. The intersection of the other three hyperplanes with _K_5(3) is the point (0 , 20, 0). This point produces a spanning tree sparsifier consisting of all edges incident to vertex 5. We depict these intersections in Figure <ref>. Now consider k=4. If we hold the first four eigenpairs fixed, then Φ_> 4 = ϕ_5, Y =[a], and the inequalities defining P_K_5(4) are -5 ≤ a ≤ 20. The lower bound comes from the edges (1,5),(2,5),(3,5),(4,5) and the upper bound comes from the other edges. Since Y ≽ 0, a ≥ 0, and _K_5(4) = [0,20]. This means that there is a unique 4-sparsifier indexed by a=20 and it corresponds to the spanning tree with edges (1,5),(2,5),(3,5),(4,5). On the other hand, if we had chosen Φ_>4 = ϕ_4, then P_K_5(4) is defined by -4 ≤ a ≤ 4, which makes _K_5(4) = [0,4]. The bound a ≤ 4 is given by the edges (1,2) and (3,4). Therefore, the unique 4-sparsifier is now indexed by a=4 and is missing edges (1,2) and (3,4). In particular, there is no spanning tree 4-sparsifier for this choice of eigenvectors to hold constant. Lastly, if we had chosen Φ_> 4 = ϕ_2, then _K_4(4) = [0, ∞] = 𝒮^1_+. The only inequality is of the form a ≥ -2 and the hyperplane a = -2 does not support _K_4(4). Therefore, there are no 4-sparsifiers for this ordering of eigenvectors. By Theorem <ref> we see this wild behavior as the choice of ϕ_n changes because we are not choosing to preserve all the eigenvectors corresponding to ł_2 = 5. § BOUNDS ON THE MAXIMAL SPARSIFICATION Given our notion of a k-sparsifier, the main question now is: how large can we choose k and still obtain nontrivial sparsification? Or, conversely, if we wish to aggressively remove edges, how much of the spectrum can we reasonably hope to preserve? The purpose of this section is to introduce some basic bounds. At the heart of it is a Linear Algebra Heuristic, introduced in <ref>, which provides a guideline for what is true generically. Before discussing the general case, we start with two different extreme settings: the (weighted) complete graph can be sparsified to a high degree and a tree can never be sparsified. If G is a weighted complete graph, then G is guaranteed to have a k-sparsifier for all k ≤ n-2. pf: weighted complete sparsifies This result is sharp. As we saw at the end of Example <ref>, if ϕ_n is chosen to be (1/√(2), -1/√(2),0,0,0) then K_5 has no 4-sparsifiers. This counterexample is also sharp in the sense that whenever the eigenvector ϕ_n has at least three nonzero coordinates, then the graph has an (n-1)-sparsifier – see Proposition <ref>. On the other extreme, if G is a tree, then removing any edge disconnects the graph. Since all k-isospectral subgraphs of G are connected when k ≥ 2 (Lemma <ref>), we have the following. If G = ([n], E, w) is a tree, then even at k=2, a k-isospectral subgraph of G cannot lose any edges. More generally, recalling that the multiplicity of the first eigenvalue captures the number of connected components, we see that any sparsifier preserving the eigenspace corresponding to eigenvalue 0 needs to necessarily preserve the number of connected components. §.§ The Linear Algebra Heuristic In our model, _k(G) is parameterized by at most n - k+1 2 variables corresponding to the distinct entries of Y ∈ S^n-k_+. Every missing edge of G contributes a linear equation to the description of _G (k) while every edge contributes a linear inequality. If G is sufficiently generic, then we would expect that the equations are linearly independent; equivalently, each missing edge decreases the dimension of _G(k) by one. However G itself is in _G(k) for all k. So, if G is missing at least n -k+1 2 edges, then we expect _G(k) to contain just G, and G to have no k-sparsifiers. We refer to this basic principle as the Linear Algebra Heuristic, which tells us what one can generically expect of k-sparsifiers. [Linear Algebra Heuristic] If G=([n],E,w) is a `generic' graph and |E| ≤n 2 - n-k+1 2 then, generically, the only k-isospectral subgraph of G is G itself. The notion of `generic' is to be understood as follows: `typical' linear systems are solvable as long as the number of variables is at least as large as the number of equations. This is not always true: the equations could be linearly dependent and only be solvable for particular right-hand sides. However, having such a dependence is delicate and not `generically' the case: in particular, if one were to perturb the weights w of a graph ever so slightly, one would expect the Linear Algebra Heuristic to apply to `most' (in the measure-theoretic sense) perturbations. Simultaneously, since having such dependencies is non-generic, failures of the Linear Algebra Heuristic are interesting and a sign of a great degree of underlying structure. We quickly mention an easy sample application of the Linear Algebra Heuristic. If we consider all unweighted graphs with n=12 vertices and |E| = 36 edges, then according to the Linear Algebra Heuristic, we expect that `generically', there are 4-sparsifiers, but the only 5-isospectral subgraph is the graph itself. Some numerical experiments show that this is typically true for Erdős-Renyi random graphs G(12, 1/2) conditioned on having 36 edges. §.§ Exceptions To be clear, no direction of the Linear Algebra Heuristic is true for all graphs. Figure <ref> exhibits an unweighted graph G with n=12 and |E|=36 ≤n 2 - n-5+1 2 = 38 that sparsifies up to k=8. Unsurprisingly, these exceptions typically possess some type of symmetry. It is also possible for a graph to have more edges than the difference of binomial coefficients in the linear algebra bound and not have k-sparsifiers. Theorem <ref> exhibits a family of weighted graphs on n vertices where |E| > n 2 - n-(n-2)+1 2 = n 2 - 3, and yet, G has no (n-2)-sparsifiers. The reason is that the spectrahedron S_G(n-2) lies strictly inside the polyhedron P_G(n-2) and hence no face of P_G(n-2) supports _G(n-2), see Figure <ref>. For each n ≥ 4, there is a weighted graph missing only one edge that does not sparsify at k = n-2. pf: graphs with no n-2 sparsifiers For n=4, Theorem <ref> provides an example of a non-tree weighted graph that does not sparsify even at k=2 (see Figure <ref>). In contrast, the unweighted graph missing exactly one edge always has a spanning tree 2-sparsifier. Let G = ([n],E) be the unweighted complete graph missing an edge. Then _G(2) contains a spanning tree sparsifier for any choice of eigenbasis. pf: unweighted complete minus one edge, SP2 These last two theorems highlight the role of weights in the geometry of isospectral subgraphs and sparsifiers. The geometry is controlled by the arithmetic of the weights and the ensuing eigenstructure of the Laplacian, not just by combinatorics. Lastly, we discuss a particularly interesting extremal case: the cube graph Q_d on V={0,1}^d where any two vertices are connected if they differ on exactly one coordinate. The Linear Algebra Heuristic suggests that d · 2^d-1 = |E| ∼2^d 2 - 2^d-k+1 2 should be the natural cut-off after which no further sparsification is possible. Solving the quadratic polynomial, this predicts that as d →∞, we can sparsify up to k ∼ d/2. Due to the extraordinary symmetry of the cube graph and the special structure of its eigenvalues and eigenvectors, sparsification up to a higher value of k is possible, up to k=d. However, the first two eigenspaces uniquely determine the cube graph, which is to say that no sparsification is possible at k = d+1. There is no sparsification of the cube graph Q_d that preserves the first two eigenspaces (the first d+1 eigenvectors and eigenvalues) except for the trivial sparsification which leaves all edge weights invariant; w_ab = 1. pf:cube One would naturally wonder whether other graphs with special symmetries might perhaps admit a similar type of rigidity structure; this seems like an interesting problem but is outside the scope of this paper. § FAMILIES OF BAND-LIMITED SPECTRAL SPARSIFIERS The purpose of this section is to discuss different graph families and describe what can be said about their sparsification properties. This leads to problems at the interface of combinatorics, polyhedral geometry and spectral geometry. §.§ The Complete Graph. The unweighted complete graph plays a special role; it has eigenvalues 0 and n (with multiplicity n-1) with eigenspaces {} and ^⊥ respectively. Since there is only one non-trivial eigenspace, any sparsification of K_n must depend on a choice of eigenbasis. Recall the 4-sparsifiers of K_5 from  <ref>. There is a choice of eigenbasis so that for every k < n, a spanning tree is a k-sparsifier of K_n. The spanning tree we exhibit is the star graph K_1,n-1, where every edge has equal weight n. pf: Kn spanning tree §.§ The Wheel Graph Let W_n+1 denote the wheel graph on n+1 vertices, for n ≥ 3. We will assume that the vertices are ordered so that n+1 is the center of the wheel and that i ∼ (i+1 n) for i ∈ [n] (see Figure <ref>). The spectrum of W_n+1 is well understood from its formulation as the join of the cycle C_n and a single vertex n+1, see Table <ref> for more details <cit.>. The least nonzero eigenvalue of L_W_n+1 is 3 - 2 cos (2π/n) which has multiplicity 2. There is a 3-sparsifier of W_n+1 which is a spanning tree. The spanning tree we exhibit is the star graph K_1,n, where every edge has equal weight 3 - 2 cos (2π/n). pf: wheel We do not expect any 4-sparsifiers of the wheel by the Linear Algebra Heuristic. Recalling that W_n+1 has 2n edges and n+1 vertices, this follows because |E| = 2n < 3(n-1) = n+12 - (n+1)-4 +12 . §.§ A General Principle The existence of k-sparsifiers in a graph family is a statement about the eigenvectors and eigenvalues of the Graph Laplacian of graphs in the family. There appears to be a general principle that is worth recording. Let G=([n],E, w) be a connected, weighted graph and let T = ([n], E_T, w|_E_T) be a spanning tree of G. Let k ∈ [n] be arbitrary and let ϕ_1, …, ϕ_k be eigenvectors corresponding to the k smallest eigenvalues of the spanning tree T. Suppose that for all (u,v) ∈ E either (u,v) ∈ E_T ϕ_i(u) = ϕ_i(v)  1 ≤ i ≤ k, then the spanning tree T is a k-sparsifier of G with respect to {ϕ_1, …, ϕ_k }. pf: example The result says that if a graph has the property that eigenvectors only change along the edges of a spanning tree, then that spanning tree is actually a k-sparsifier. The first application of Theorem <ref> is for the Barbell Graph B_n,n, which is two n-cliques joined by a bridge. Its Laplacian is L_B_n,n = ([ n I_n - J_n 0_n,n; 0_n ,n m I_n - J_m ]) + E_n,n + E_n+1,n+1 - E_n+1,n - E_n,n+1 where E_i,j is the matrix with a single 1 in the (i,j)-th place. A general statement is possible for a slight generalization to B_n,m, an n-clique and m-clique joined by a bridge, but we restrict to B_n,n for brevity. Let G=B_n,n. The spanning tree given by the two copies of the star graph K_1,n-1 together with the bridge (every edge has weight 1) is a 2-sparsifier. The result follows immediately from Theorem <ref> together with the spectral information of B_n,n which is recorded in Table <ref>. We have not seen this information recorded in the literature elsewhere. The second family of examples will be given by the lollipop graph. We define the (m,p)-Lollipop Graph to be an m-clique joined to the path graph on p vertices by a bridge. The (7,5)-Lollipop Graph is exhibited in Figure <ref> (and was also seen in Figure <ref>). The spectrum of these graphs appears to be somewhat difficult to write down, however, the dynamics of the first few eigenvectors is easy to understand from a qualitative perspective. We recall that eigenvectors corresponding to small eigenvalues can really be understood as minimizers of the Rayleigh-Ritz functional ⟨ f, L_G f⟩/⟨ f, f ⟩ = ∑_(u,v) ∈ E (f(u) -f(v))^2/∑_v ∈ V f(v)^2. In the case of the lollipop graph, the functional is easy to understand: variation across the path is `energetically' cheaper than variation within the complete graph since there are many more connections within the complete graph and variations show up in many more additional terms. Writing the vertex set as V = K_m-1∪ P_p+1, we expect the first few eigenvectors to be constant on K_m-1. The effect becomes more pronounced as the attached path becomes longer. Whenever that happens, Theorem <ref> immediately applies. Many other such examples can be constructed. § PRESERVING THE QUADRATIC FORM In the work of Batson, Spielman, Srivastava and Teng (see <cit.> and references therein) the sparsifiers of a graph G are graphs G̃ on the same vertices whose quadratic forms x^⊤ L_G̃ x are approximately the same as x^⊤ L_G x. If one were to focus on the quadratic form, but think along the lines of this paper, then it is natural to consider a band-limited sparsifier of a graph G as a subgraph G̃ such that Q_G(x) = Q_G̃(x) for all x ∈{ϕ_1, …, ϕ_k}. A graph G̃=([n],Ẽ, w̃ ) is a Q_k(x)-sparsifier of G if Ẽ⊆ E, w̃_e > 0 for all e ∈Ẽ and Q_G(x) = Q_G̃(x) for all x ∈{ϕ_1, …, ϕ_k}. Unlike in previous sections, we do not insist that a Q_k(x)-sparsifier of G should lose edges. Definition <ref> yields a polyhedron containing all possible Laplacians L that correspond to Q_k(x)-sparsifiers of G. We derive this model below and compare it to Definition <ref>. Note that the only Q_n(x)-sparsifier of G is G itself, but when k < n, this definition admits non-trivial sparsifiers. Recall that a spectrahedron is the intersection of a psd cone with an affine subspace of symmetric matrices. For a given matrix A ≽ 0 and a proper subspace Λ⊂^n, the set S_A(Λ) = { B ≽ 0 : x^⊤ A x = x^⊤ B x ∀ x ∈Λ} is a spectrahedron. pf:SALambda is a spectrahedron The Q_k(x)-sparsifier set of G is the intersection of S_L_G({ϕ_1, …, ϕ_k}) and the set of Laplacians of weighted subgraphs of G. When A = L_G, the proof of Lemma <ref> shows that S_L_G({ϕ_1, …, ϕ_k}) = {L ≽ 0: [ ⟨ L, ϕ_i ϕ_i^⊤⟩ = λ_i for i=1, …, k; ⟨ L, ϕ_i ϕ_j^⊤ + ϕ_j ϕ_i^⊤⟩ = 0 for i≠ j ]}. The additional conditions needed for L ∈ S_L_G({ϕ_1, …, ϕ_k}) to be the Laplacian of a weighted subgraph of G are: * L_ij = 0 for all i ≠ j, (i,j) ∉E. This guarantees that Ẽ⊆ E. * L_ij≤ 0 for all i ≠ j. This guarantees that w̃_ij = -L_ij≥ 0. * L_ii = - ∑_(i,j) ∈ E L_ij for all i. Also, G̃ is connected if and only if (L) = n-1. We can bake conditions (1)–(3) into the following structured symbolic matrix: L( w) = [ ∑_j ≠ 1w_1j - w_12 -w_13 ⋯ -w_1n; -w_12 ∑_j ≠ 2w_2j - w_23 ⋯ -w_2n; ⋮ ⋮ ⋮ ⋮ ⋮; -w_1n ⋯ ⋯ ⋯ ∑_j ≠ nw_nj ] with w_ij = 0 if (i,j) ∉E. Then all subgraphs of G will have Laplacians of the form L(w̅) for some w̅∈^E_≥ 0. By construction L(w) = 0 = L(w) ϕ_1. Using this, and the fact that L(w) is symmetric, we get the following facts: * ϕ_1^⊤ L(w) ϕ_1 = 0, * ϕ_1^⊤ L(w) ϕ_j = 0 = ϕ_j^⊤ L(w) ϕ_1 for all j > 1, and * ⟨ L(w), ϕ_i ϕ_j^⊤⟩ = ⟨ L(w), ϕ_j ϕ_i^⊤⟩ which implies that ⟨ L(w), ϕ_i ϕ_j^⊤ + ϕ_j ϕ_i^⊤⟩ = 2 ⟨ L(w), ϕ_i ϕ_j^⊤⟩ = 2 ⟨ L(w), ϕ_j ϕ_i^⊤⟩. Define L'(w) to be the principal submatrix of L(w) obtained by deleting the last row and column. The Laplacians of Q_k(x)-sparsifiers of G are the matrices L(w) for all w in the polyhedron P^Q(x)_G(k) = { w ∈^E_≥ 0 : [ ⟨ L(w), ϕ_i ϕ_i^⊤⟩ = λ_i ∀ i =2, …, k; ⟨ L(w), ϕ_i ϕ_j^⊤⟩ = 0, ∀ 2 ≤ i < j ≤ k; ]} which lies in the semialgebraic region (L'(w)) ≥ 0. * The sparsity patterns of the Q_k(x)-sparsifiers of G are in bijection with the faces of P^Q(x)_G(k). * The connected sparsifiers correspond to faces where (L'(w)) > 0. pf:last Lemma <ref> implies the following. The set _G(k) of k-isospectral subgraphs of G is contained in the polyhedron P^Q(x)_G(k). A disadvantage of the the larger P^Q(x)_G(k) is that the spectrum of G̃ can be wildly different from that of G, even when restricting our attention to connected sparsifiers. We illustrate this on the 3-dimensional cube graph. We avoid the notation Q_d for cube graphs here to avoid confusion with the Q_k(x)-sparsifier notation. Recall from Theorem <ref> that the 3-dimensional cube graph has no 4-sparsifiers. The following example contrasts this with the set of Q_4(x)-sparsifiers of the cube graph. Let G = ([8], E) be the 3-dimensional cube graph. There is a spanning tree Q_4(x)-sparsifier of G which preserves no eigenpair of G. In particular, the second eigenvalue of this tree is approximately .3677, whereas for G, ł_2 =2. To see this, label the vertices of G as in Figure <ref>. The eigenvalues of L_G are 0^(1), 2^(3) , 4^(3), 6^(1), where the exponents record multiplicity. The following vectors form a orthonormal basis for the eigenspace of ł_2 =2 of L_G. ϕ_2 = [ 1 , -1 , 1 , 1 , -1 , -1 , 1 , -1]^⊤ / √(8) ϕ_3 =[ 1 , 1 , -1 , 1 , -1 , 1 , -1 , -1]^⊤ / √(8) ϕ_4 = [1 , 1 , 1 , -1 , 1 , -1 , -1 , -1]^⊤ / √(8). From Theorem <ref>, we have the polyhedron P_G^Q(x)(4) = { w∈^E_≥ 0 : [ w_12 + w_35+ w_46 + w_78 = 4; w_13 + w_25+ w_47 + w_68 = 4; w_14 + w_26+ w_37 + w_58 = 4 ]} . A valid choice of edge weights is w_12 = w_14 = 3 w_13 = 2 w_25 = w_37 = w_46 = w_68 = 1 w_26 = w_35 = w_47 = w_58 = w_78 = 0 . This spanning tree sparsifier G̃ is shown in Figure <ref>. Its Laplacian eigenvalues are 0, 0.3677, 0.6383, 1.3889, 2.4974, 3.6368, 4.3896, 11.0814, and none of the eigenvectors of G̃ are eigenvectors of G. We can confirm, however, that the quadratic form is indeed preserved on the appropriate subspace: ϕ_2^⊤ L_G̃ϕ_2 = ϕ_3^⊤ L_G̃ϕ_3= ϕ_4^⊤ L_G̃ϕ_4 =2 , and ϕ_i^⊤ L_G̃ϕ_j = 0 , ∀ i≠ j ∈{2,3,4}. Consider the weighted complete graph G = K_3 with edge weights w_12=1, w_13=2, w_23=2. Then L_G = [ 3 -1 -2; -1 3 -2; -2 -2 4 ] has eigenpairs: ( λ_1 = 0, ϕ_1 = 1/√(3)[ 1; 1; 1 ]), ( λ_2 = 4, ϕ_2 = 1/√(2)[ 1; -1; 0 ]), ( λ_3 = 6, ϕ_3 = 1/√(6)[ 1; 1; -2 ]). Suppose we consider all graphs G̃ with Laplacian L = [ a+b -a -b; -a a+c -c; -b -c b+c ] and Q_G̃(x) = Q_G(x) for all x ∈{ϕ_1, ϕ_2 }. Then P^Q(x)_G(2)= { (a,b,c) ∈^3_≥ 0 : 4a+b+c =8}≅{ (a,b) ∈^2 : [ a≥ 0; b ≥ 0; 8-4a-b ≥ 0 ]} which is the triangle with corners {(0,0),(0,8),(2,0)} shown in Figure <ref>. Each point in the triangle corresponds to a Q_2(x)-sparsifier of G. The triangle is contained in the parabolic region cut out by (L'(w)) = -4a^2 - b^2 - 4ab +8a + 8b ≥ 0. Its boundary intersects the triangle at its three corners where (L) =1 and G̃ is disconnected. The sides of the triangle except for the corners correspond to Q_2(x)-sparsifiers of G that are spanning trees. On the other hand, the 2-isospectral subgraphs of G have Laplacians: L = [ 16+y/6 y-8/6 -2y-8/6; y-8/6 16+y/6 -2y-8/6; -2y-8/6 -2y-8/6 16+4y/6 ], 0 ≤ y ≤ 8. Equating to (<ref>), _G(2) = [(0,4,4),(4/3,4/3,4/3)] ⊂ P^Q(x)_G(2), and L_G corresponds to (a,b,c) = (1,2,2) = 1/4 (0,4,4)+ 3/4(4/3,4/3,4/3). The point (1,1,3) ∈ P^Q(x)_G(2) ∖_G(2) corresponds to the graph G' with L_G' = [ 2 -1 -1; -1 4 -3; -1 -3 4 ] and eigenpairs (0,ψ_1 = 1/√(3)(1,1,1)^⊤), (3,ψ_2 = 1/√(6)(2,-1,-1)^⊤), (7,ψ_3 = 1/√(2)(0,-1,1)^⊤). Therefore, G' is not 2-isospectral to G, but Q_G(x) = Q_G'(x) for all x ∈{ϕ_1, ϕ_2}. Indeed, a linear combination of ϕ_1, ϕ_2 looks like x = (α + β, α - β, α)^⊤. Check that you get 8 β^2 when you plug in x into both Q_G(x) = 4 (ϕ_2^⊤ x)^2 + 6 (ϕ_3^⊤ x)^2 Q_G'(x) = 3 (ψ_2^⊤ x)^2 + 7 (ψ_3^⊤ x)^2. § PROOFS §.§ Proof of Lemma <ref>. If x ∈{ϕ_1, …, ϕ_k}, then x = ∑_j=1^k β_j ϕ_j for some β_j ∈. Therefore, Q_G(x) = ∑_i=2^n λ_i (ϕ_i^⊤ x)^2 = ∑_i=2^n λ_i (ϕ_i^⊤∑_j=1^k β_j ϕ_j)^2 = ∑_i=2^n λ_i (∑_j=1^k β_j ϕ_i^⊤ϕ_j)^2 = ∑_i=1^k λ_i β_i^2. On the other hand, Q_G̃(x) = ∑_i=2^k λ_i (ϕ_i^⊤ x)^2 + ∑_j=1^n-kμ_j (ψ_j^⊤ x)^2. Since every ψ_j is orthogonal to every ϕ_i we have that the second sum ∑_j=1^n-kμ_j (ψ_j^⊤ x)^2 = ∑_j=1^n-kμ_j (ψ_i^⊤∑_j=1^k β_j ϕ_j)^2 = 0. Therefore, Q_G̃(x) = ∑_i=1^k λ_i β_i^2 = Q_G(x). §.§ Proof of Theorem <ref> Denote the eigenpairs of a k-isospectral subgraph G̃ as (0,ϕ_1), (λ_2, ϕ_2), …, (λ_k, ϕ_k), (μ_1, ψ_1), …, (μ_n-k, ψ_n-k) where the first k are the same as those of G. The Laplacian of G̃ then has the form L = Φ_k Λ_k Φ_k^⊤ + Ψ D Ψ^⊤ = ∑_i=2^k λ_i ϕ_i ϕ_i^⊤ + ∑_j=1^n-kμ_j ψ_j ψ_j^⊤ where Ψ = [ ψ_1 ⋯ ψ_n-k ] and D=(μ_1, …, μ_n-k) are variables that satisfy the following conditions: * Φ_k^⊤Ψ = 0 since each ϕ_i must be orthogonal to each ψ_j. * Ψ^⊤Ψ = I_n-k since {ψ_j} must be a set of orthonormal vectors. * μ_j ≥λ_k for all j since we want G̃ to have the same first k eigenvalues as G. A ready-made candidate for Ψ that satisfies conditions (1)–(2) is Φ_> k := [ ϕ_k+1 ⋯ ϕ_n ]. Any other Ψ is of the form Φ_>k B for an orthogonal matrix B in ^(n-k) × (n-k) since {ψ_j} is an orthogonal basis for {ϕ_k+1, …, ϕ_n}. Letting X = B D B^⊤ we can rewrite L as L = Φ_k Λ_k Φ_k^⊤ + (Φ_>k B) D ( Φ_>kB)^⊤ = Φ_k Λ_k Φ_k^⊤ + Φ_>k (B D B^⊤) Φ_>k^⊤ = Φ_k Λ_k Φ_k^⊤ + Φ_>k X Φ_>k^⊤ . The set of all matrices of the form BDB^⊤ where B is an orthogonal matrix and D is diagonal with all entries positive is the set of positive definite matrices of size n-k, namely the interior of the psd cone 𝒮^n-k_+. Since we want D_ii≥λ_k, we require D ≽λ_k I_n-k which implies that X = BDB^⊤≽λ_k I_n-k, or equivalently, X = λ_k I_n-k + Y, Y ≽ 0. Hence Φ_>k X Φ_>k^⊤ = Φ_>k (λ_k I_n-k + Y) Φ_>k^⊤ = λ_k Φ_>kΦ_>k ^⊤ + Φ_>k Y Φ_>k^⊤. Putting all of the above together, the Laplacians of k-isospectral subgraphs G̃ of G must be a subset of matrices of the form L = Φ_k Λ_k Φ_k^⊤ + λ_k Φ_>kΦ_>k^⊤_=:F + Φ_>k Y Φ_>k^⊤. where Y ≽ 0. By construction, any L of the form (<ref>) has n-1 positive eigenvalues and hence its rank is n-1. The matrix F has spectral decomposition F = Φ(0,ł_2, …, ł_k, ł_k, …, ł_k) Φ^⊤, and hence, F ∈𝒮^n_+ and (F)=n-1. Since L is obtained by adding F to a psd matrix Φ_>k Y Φ_>k^⊤∈𝒮^n_+, we have that L ∈𝒮^n_+. For L in (<ref>) to be the Laplacian of a subgraph of G we impose the conditions: L_st≤ 0 ∀ (s,t) ∈ E L_st = 0 ∀ s ≠ t, (s,t) ∉E The only property left to check is that L is now weakly diagonally dominant. By construction, (0,) is the first eigenpair of L which means that L = 0. This guarantees that -L_ii is the sum of the off-diagonal entries in row i. Since, L ≽ 0, L_ii≥ 0, and by (<ref>) and (<ref>), L_ii is the only potential positive entry in row i. If L_ii = 0 then all entries in row i are 0 since L ≽ 0 which means that vertex i in the subgraph G̃ associated to L is isolated. However, this is impossible since k ≥ 2 which means that we are requiring that ł_2 > 0 is the second eigenvalue of L. Thus, _G(k) is contained in the set of Laplacians of k-isospectral subgraphs of G. On the other hand, L_G̃ for any k-isospectral subgraph G̃ of G is in _G(k); choose Y = (μ_k+1-ł_k, …, μ_n-ł_k). This proves Theorem <ref>. §.§ Proof of Corollary <ref> Every face of P_G(k) is determined by some collection of inequalities from { L_st≤ 0 : (s,t) ∈ E} holding at equality. The graph G̃ is missing the edge (s,t) ∈ E if and only if L_st=0. Thus the faces of P_G(k) that are contained in _G(k) index all possible sparsity patterns of k-sparsifiers of G. §.§ Proof of Theorem <ref> Recall that for a fixed eigenbasis Φ, _G(k) = { L = Φ_kŁ_k Φ_k^⊤ + ł_kΦ_>kΦ_>k^⊤ + Φ_>k Y Φ_>k^⊤}, where Y ≽ 0, and the edges of the graph provide additional constraints on valid choices of Y. The `fixed' matrix in this expression is F = Φ_kŁ_k Φ_k^⊤ + ł_kΦ_>kΦ_>k^⊤. We first show that F does not depend on the choice of basis. Let L_G have m eigenspaces, and let 0 < ł_i_2 < … < ł_i_m be its distinct positive eigenvalues where i_j is the index of the first eigenvalue associated to the j-th eigenspace. If _G(k) preserves the first ℓ∈{2,…, m-1} positive eigenspaces, then k = i_ℓ+1 - 1. Let Φ and Ψ be two eigenbases of L_G, and let Φ_ł denote the submatrix of Φ consisting of the columns which span the eigenspace with eigenvalue ł. Then there is an orthogonal matrix U_j such that Φ_ł_i_j U_j = Ψ_ł_i_j for each j ∈ [m]. Since U_jU_j^⊤ is the identity, we have that F = Φ_kŁ_k Φ_k^⊤ + ł_kΦ_>kΦ_>k^⊤ = ∑_j=2^ℓł_i_jΦ_ł_i_jΦ_ł_i_j^⊤ + ł_i_ℓ∑_j=ℓ+1^mΦ_ł_i_jΦ_ł_i_j^⊤ = ∑_j=2^ℓł_i_jΦ_ł_i_j U_j U_j^⊤Φ_ł_i_j^⊤ + ł_i_ℓ∑_j=ℓ+1^mΦ_ł_i_j U_j U_j^⊤Φ_ł_i_j^⊤ = ∑_j=2^ℓł_i_jΨ_ł_i_jΨ_ł_i_j^⊤ + ł_i_ℓ∑_j=ℓ+1^mΨ_ł_i_jΨ_ł_i_j^⊤ = Ψ_kŁ_k Ψ_k^⊤ + ł_kΨ_>kΨ_>k^⊤ Thus F does not depend on the choice of eigenbasis. Now, let Ũ = ( U_ℓ+1, …, U_m), and note that Ψ_>k Y Ψ_>k^⊤ = Φ_>kŨ YŨ^⊤Φ_>k^⊤, and moreover that Y ≽ 0 if and only if Ũ Y Ũ^⊤≽ 0. Thus L = F + Ψ_>k Y Ψ_>k^⊤if and only if L = F + Φ_>kŨ YŨ^⊤Φ_>k^⊤. Hence the set _G(k) is independent of the choice of eigenbasis. §.§ Proof of Lemma <ref> Recall that the Laplacian of K_n is L_K_n = nI_n - J_n. If ł_2 = … = ł_k, then F = Φ_k Λ_k Φ_k^⊤ + λ_k Φ_>kΦ_>k^⊤ = λ_2 ∑_i=2^n ϕ_i ϕ_i^⊤ = ł_2 (I_n - ϕ_1 ϕ_1^⊤) = ł_2 (I_n- 1/n J_n). §.§ Proof of Theorem <ref> Since a weighted, complete graph G has no missing edges, _G(k)= { L = Φ_k Λ_k Φ_k^⊤ + λ_k Φ_>kΦ_>k^⊤ + Φ_>k Y Φ_>k^⊤ : L_st≤ 0 ∀ s ≠ t , Y ≽ 0}. No k-isospectral subgraph will sparsify if and only if none of the hyperplanes L_st=0 intersect the psd cone 𝒮^n-k_+, or equivalently, 𝒮^n-k_+ lies in the open halfspace L_st < 0 for all s ≠ t. Recall that L_st≤ 0 is ⟨ Y, c_s c_t^⊤⟩≤ - F_st where c_i is the ith column of Φ_> k^⊤. Suppose there is some Y̅≽ 0 such that ⟨Y̅, c_s c_t^⊤⟩ > 0. Then for a fixed Y_0 ≽ 0 and α > 0 ⟨ Y_0 + αY̅ , c_s c_t^⊤⟩ = c_s^⊤ Y_0 c_t + α (c_s^⊤Y̅ c_t) ⟶∞α⟶∞. Therefore, for large enough α, the psd matrix Y_0 +αY̅ violates L_st≤ 0. This means that L_st≤ 0 is not valid on all of 𝒮^n-k_+ and the plane L_st=0 cuts through 𝒮^n-k_+. By the above argument, if 𝒮^n-k_+ lies in the open halfspace L_st < 0, then it must be that ⟨ Y, c_s c_t^⊤⟩≤ 0 for all Y ≽ 0. The polar of the psd cone 𝒮^n-k_+ is the set of all matrices X such that ⟨ Y,X ⟩≥ 0 for all Y ∈𝒮^n-k_+. It is well known that the psd cone is the polar dual of itself, and hence ⟨ Y, c_s c_t^⊤⟩≤ 0 for all Y ≽ 0 if and only if -c_sc_t^⊤ lies in the polar dual cone, or equivalently, -c_sc_t^⊤ is a psd matrix. Since -c_sc_t^⊤ is a rank one matrix, it is psd if and only if c_t = -β c_s for some β≥ 0. We conclude that if 𝒮^n-k_+ lies in the open halfspace L_st < 0 for all (s,t) ∈ E, then there are scalars β_st≥ 0 c_t = -β_st c_s ∀ s ≠ t. If k ≤ n-2, then Φ_> k^⊤ has at least two rows and at least one nonzero column (say c_1). Since all edges (1,t) are present in G, if (<ref>) holds, then c_2, …, c_n are nonpositive multiples of c_1. This makes the rows of Φ_>k dependent which contradicts that the rows are part of an eigenbasis. Therefore, when k ≤ n-2, some hyperplane L_st =0 cuts through the psd cone and in particular, supports _G(k). Points lying on this face of _G(k) are k-sparsifiers which miss the edge (s,t). In the above proof, if k = n-1 then Φ_> n-1 = ϕ_n which has at least two nonzero entries (say the first and second) since ϕ_n^⊤ = 0. If it has a third nonzero entry as well (say the third), then the second and third entries are negative multiples of the first but then the third is not a negative multiple of the second which contradicts (<ref>) and we get that some L_st=0 supports _G(k) and G has a (n-1)-sparisifer. The requirement that ϕ_n has at least three nonzero entries is a type of genericity condition. Recall from the K_5 example that if ϕ_n has only two nonzero entries then a (n-1)-sparsifier can fail to exist. In the generic case, we can also give a direct constructive proof that shows the existence of a (n-1)-sparsifier. Let G=(V,E,w) be a complete graph where every edge has a positive weight. If the largest eigenvalue λ_n has an eigenvector ϕ_n with at least three nonzero entries, then there is a (n-1)-sparsifier deleting at least one edge. We start by considering the Laplacian L_G = D-A of G. Since none of the edge weights vanish, L_G has positive entries on the diagonal and negative entries everywhere else. Our goal is to construct a sparsifier H that preserves the first k=n-1 eigenvectors and eigenvalues. Since the last eigenvector is isolated, λ_n ≥λ_n-1, we have very concrete knowledge of how the Laplacian L_H has to look like: the difference L_G-L_H, interpreted as a linear operator, can only act on the eigenspace corresponding to ϕ_n. Thus L_H is a rank one perturbation of L_G and, for some constant c ∈ℝ L_H = L_G + c (ϕ_n ϕ_n^⊤). Moreover, we observe that the largest eigenvalue then obeys λ_n(L_H) = λ_n(L_G) + c. In particular, this shows that by setting c ≥ 0 we are guaranteed to obtain a new Laplacian whose first n-1 eigenvalues and eigenvectors coincide exactly with those of L_G. It remains to check whether L_H corresponds to a Laplacian of a weighted graph: for this, we require that the matrix is symmetric, each row sums to zero, and that the off-diagonal entries are all non-negative. Since the graph is connected, ϕ_1 is constant. By orthogonality, this implies that the entries of ϕ_n sum to 0. Moreover, if ϕ_n has at least three non-zero coordinates, then there are at least two with the same sign which gives rise to at least one off-diagonal entry of the matrix (ϕ_n ϕ_n^⊤). This shows that there exists a choice of c>0 such that L_H has at least one off-diagonal entry that is zero while all other off-diagonal entries are negative and leads to an (n-1)-sparsifier. We note that for `generic' weights and `generic' eigenvectors, the above procedure will typically result in an (n-1)-sparsifier that deletes exactly one edge. Recalling the Linear Algebra Heuristic with k=n-1, we see that |E| ≤n 2 - n-k+1 2 = n 2 - n-(n-1)+1 2 = n 2 - 1, and this coincides exactly with the previous proof: one can hope to erase a single edge but not more than that. §.§ Proof of Theorem <ref> We argue by contradiction and assume that there exists a sparsification Q_d' of Q_d at k=d+1. This corresponds to an assignment of weights to the d · 2^d-1 edges e ∈ E of Q_d such that at least one of the edge weights is zero (which corresponds to the vanishing of an edge). The spectrum and eigenvectors of the Laplacian of Q_d are well studied. The eigenvectors are of the form ϕ_s (v) = (-1)^s^⊤ v for s,v∈{0,1}^d. The eigenvectors ϕ_s and ϕ_t have the same eigenvalue if and only if _d^⊤ s = _d^⊤ t. In particular, the eigenspace for λ_2 = 2 is spanned by the eigenvectors {ϕ_e_i}_i=1^d. We start by noting that, since the bottom of the spectrum of Q_d is preserved in Q_d' (both the eigenvalues and the eigenvectors), we have that for all functions f:V →ℝ on Q_d' with mean value 0, ∑_(i,j) ∈ E w_ij (f(i) - f(j))^2/∑_i ∈ V f(i)^2≥λ_2. We first show that every single edge weight in Q_d' has to be at least w_ab≥ 1. Suppose (a,b) is an edge of Q_d' such that w_ab < 1. Since a, b ∈{0,1}^d are adjacent in Q_d, there is exactly one coordinate ℓ∈ [d] for which a_ℓ≠ b_ℓ. The eigenvector ϕ = ϕ_e_ℓ has eigenvalue 2 and is such that ϕ(a) = (-1)^a_ℓ = (-1) (-1)^b_ℓ = -ϕ(b). We may assume without loss of generality that ϕ(a) =1 and so, ϕ(b)=-1. We will now introduce a new function ψ:V →ℝ as follows ψ(v) = 1 + ε  v = a -1 - ε  v = b ϕ(v) Then, ∑_i ∈ Vψ(v) = 0. We consider the Rayleigh-Ritz quotient and observe that ∑_(i,j) ∈ E w_ij (ψ(i) - ψ(j))^2/∑_i ∈ Vψ(i)^2 = ∑_(i,j) ∈ E w_ij (ψ(i) - ψ(j))^2/4 ε + 2 ε^2 + 2^d. To analyze the numerator, we split the sum into four different sums: (1) the edge (a,b), (2) the other edges incident to a, (3) the other edges incident to b and (4) edges incident to neither a nor b. We obtain ∑_(i,j) ∈ E w_ij (ψ(i) - ψ(vj)^2 = w_ab(ψ(a) - ψ(b))^2 + ∑_(i,j) ∈ E i,j ≠ a,b w_ij (ψ(i) - ψ(j))^2 +∑_(a,j) ∈ E j≠ b w_aj (ψ(a) - ψ(j))^2 + ∑_(b,j) ∈ E j≠ a w_bj (ψ(b) - ψ(j))^2. The first term is simple: w_ab(ψ(a) - ψ(b))^2 = w_ab (2+ 2ε)^2 = 4 w_ab + 8 w_abε+ 4 w_abε^2 = w_ab(ϕ(a) - ϕ(b))^2 + 8 w_abε+ 4 w_abε^2. The second sum is also easy to analyze: the edges are not incident to a or b and thus the function ψ coincides with the function ϕ for all terms and ∑_(i,j) ∈ E i,j ≠ a,b w_ij (ψ(i) - ψ(j))^2 = ∑_(i,j) ∈ E i,j ≠ a,b w_ij (ϕ(i) - ϕ(j))^2. For an edge (a,j) where j ≠ b, since a and j do not differ in the ℓ-th coordinate, we have that ψ(j) = ϕ(j) = ϕ(a) = 1. Therefore, ∑_(a,j) ∈ E j≠ b w_aj (ψ(a) - ψ(j))^2 = ∑_(a,j) ∈ E j≠ b w_ajε^2 = ε^2 ∑_(a,j) ∈ E j≠ b w_aj and, via the same reasoning ∑_(b,j) ∈ E j≠ a w_bj (ψ(b) - ψ(j))^2 = ε^2 ∑_(b,j) ∈ E j≠ a w_bj. Therefore, ∑_(i,j) ∈ E w_ij (ψ(i) - ψ(j))^2 = ∑_(i,j) ∈ E w_ij (ϕ(i) - ϕ(j))^2 + 8 ω_abε + 4 ω_abε^2 + ε^2 ( ∑_(a,j) ∈ E w_aj + ∑_(b,j) ∈ E w_bj), Note that the first sum on the right of the equal sign is over all edges in E. Indeed, the second sum from the four sums above accounts for all edges not adjacent to a or b. The term w_ab(ϕ(a) - ϕ(b))^2 from the first of the four sums contributes the edge (a,b). Further, since ϕ(a)=ϕ(j) when (a,j) ∈ E and j ≠ b, there is no harm in adding the sum of terms w_aj(ϕ(a)-ϕ(j))^2 for all (a,j) ∈ E, j ≠ b, and similarly, also the sum of the terms w_bj(ϕ(b)-ϕ(j))^2 for all (b,j) ∈ E, j ≠ a. Therefore, ∑_(i,j) ∈ E w_ij (ψ(i) - ψ(j))^2/∑_i ∈ Vψ(i)^2 = 8 w_abε + 𝒪(ε^2) + ∑_(i,j) ∈ E w_ij (ϕ(i) - ϕ(j))^2/ 4ε + 𝒪(ε^2) + ∑_i ∈ Vϕ(i)^2 Since ϕ assumes values in {-1,1} and corresponds to eigenvalue 2, we have ∑_i ∈ Vϕ(i)^2 = 2^d ∑_(i,j) ∈ E w_ij (ϕ(i) - ϕ(j))^2 = 2^d+1. Thus, ∑_(i,j) ∈ E w_ij (ψ(i) - ψ(j))^2/∑_i ∈ Vψ(i)^2 = 2^d+1 + 8 w_abε + 𝒪(ε^2) / 2^d + 4ε + 𝒪(ε^2). Differentiating the expression on the right in ε at ε =0, we obtain d/dε 2^d+1 + 8 w_abε + 𝒪(ε^2) / 2^d + 4ε + 𝒪(ε^2)|_ε =0 = - 8 (1 - w_ab)/2^d < 0. This shows that, for ε > 0 sufficiently small, ∑_(i,j) ∈ E w_ij (ψ(i) - ψ(j))^2/∑_i ∈ Vψ(i)^2 < λ_2 which contradicts the Courant-Fischer theorem. Thus w_ab≥ 1 for all edges. Suppose now w_ab > 1 for some (a,b) ∈ E. Letting ℓ be the coordinate on which a and b differ, the eigenvector ϕ = ϕ_e_ℓ of ł_2 which has ϕ(a) = - ϕ(b) shows that 2 = λ_2 = ∑_(i,j) ∈ E w_ij (ϕ(i) - ϕ(j))^2/∑_i ∈ Vϕ(i)^2 > ∑_(i,j) ∈ E (ϕ(i) - ϕ(j))^2/∑_i ∈ Vϕ(i)^2 = 2 which is a contradiction. Therefore all the weights are necessarily w_ab = 1. §.§ Proof of Theorem <ref> We will construct these graphs from a specifically chosen spectrum, which we show in Table <ref>. Let {ϕ_i} be an orthonormal basis chosen from these eigenspaces with ϕ_1, ϕ_n-1, ϕ_n being the normalized ϕ_1', ϕ_n-1', ϕ_n', and let Ł = ( 0 , n ^⊤_n-3 , 3n , 7n + 6). The resulting matrix ΦŁΦ^⊤ = L_G defines the graph G = ([n], E, w) with edge weights w_ij = 1 i∈ [n-3], j > i 2n +3 i = n-2, j = n-1, n 0 i = n-1, j = n . See Figure <ref> for a depiction of the graph when n = 4. Because ł_2 = … = ł_n-2, by Lemma <ref>, Laplacians in _G(n-2) have the form n I_n - J_n +Φ_>n-2 Y Φ_>n-2^⊤, Y ≽ 0. Letting Y = [ a b; b c ], we see that (Φ_>n-2 Y Φ_>n-2^⊤)_ij = 0 i ∈ [n-3], j ∈ [n] b/√(3) - c/3 i = n-2, j = n-1 -b/√(3) - c/3 i = n-2, j = n c/6 - a/2 i = n-1, j = n . Most entries of this matrix are zero due to the structure of ϕ_n-1 and ϕ_n; only three edges of the graph have flexibility. Because (n-1,n) ∉E(G), we must have that (L_G)_n-1,n = (n I_n - J_n +Φ_>n-2 Y Φ_>n-2^⊤)_n-1,n = 0, which is to say -1 + c/6 - a/2 = 0. Thus we can write Y = [ c/3 -2 b; b c ]. The psd constraints on Y are then c ≥ 6, |b| ≤√(c^2/3 -2c). The remaining two edge inequalities defining the sparsifier set from edges (n-2, n-1) and (n-2,n) respectively are -1 - b/√(3) - c/3 ≤ 0 , -1 + b/√(3)- c/3 ≤ 0. We can consolidate these as |b| ≤√(3) + c /√(3). Thus the psd constraints on Y are strictly contained in the polyhedron defined by the edge inequalities: |b| ≤√(c^2/3-2c) and c ≥ 0 |b| ≤ c/√(3) |c| < c/√(3) + √(3) . Therefore no edge can be deleted by a psd choice of Y. We note that while these graphs have no (n-2)-sparsifiers, the set of (n-2)-isospectral graphs is not a point – even more than that, it is full dimensional and unbounded. Any matrix Y satisfying the psd constraints provides a valid (n-2)-isospectral graph. We note that the choice ł_2 = … = ł_n-2 in the above proof serves only to make the argument cleaner. There are likely many choices of eigenvalues and eigenvectors for which a similar construction produces a graph that does not sparsify. §.§ Proof of Theorem <ref> We can assume the missing edge is (n-1,n). This graph has three eigenspaces. The first two eigenpairs are (0, ), (n-2, e_n-1 - e_n), and the eigenspace for ł = n is spanned by the eigenvectors {e_1 - e_j}_j=2^n-2∪{_n - n(e_n-1 + e_n)/2}. Let Φ be any orthonormal eigenbasis of L_G so that ϕ_n = (_n - n e_n-2)/ √( n(n-1)). (This is a convenient choice for ϕ_n, different from the ones listed. Check that it is an eigenvector with eigenvalue n.) Then, the Laplacians in _G(2) look like L = (n-2)(I_n - J_n/n) + Φ_>2 Y Φ_>2^⊤ where Y ≽ 0. Restricting our attention to matrices Y of the form (0, …, 0, y), we get Laplacians of the form L = (n-2)(I_n - J_n/n) + y ϕ_n ϕ_n^⊤ : y ≥ 0 The condition L_n-1,n = 0 implies that (y ϕ_n ϕ_n^⊤)_n-1,n = y /(n(n-1)) = (n-2)/n , that is, y = (n-2)(n-1). Plugging this in, we get the Laplacian of the star graph K_1,n with equal edge weights n-2, where the center of the star is the vertex n-2. This choice of central vertex is not special – to place vertex j∈ [n-2] at the center of the star, set ϕ_n = (_n - ne_j)/ √( n(n-1)). §.§ Proof of Theorem <ref> By Corollary <ref>, it suffices to show that the result holds for k= n-1. Consider a fixed orthonormal basis of ^n where ϕ_1 = / √(n) and the last eigenvector is ϕ_n = ( -_n-1, n-1)/ √(n (n-1)). By Lemma <ref>, _K_n(n-1) = { L= nI - J_n + y ϕ_nϕ_n^⊤ : y ≥ 0, L_st≤ 0 ∀ s ≠ t }. Additionally, ϕ_nϕ_n^⊤ = 1/n(n-1)[ J_n-1 (1-n) _n-1; (1-n) _n-1^⊤ (n-1)^2 ]. For s < t we see that L_st = -1 + y/(n(n-1)) t ≤ n-1 -1 -y/n t = n . The conditions L_st≤ 0 and y ≥ 0 imply that 0 ≤ y ≤ n(n-1). Therefore we can simplify again: _K_n(n-1) = { nI - J + y ϕ_nϕ_n^⊤ : 0 ≤ y≤ n(n-1) }. Choosing y = n(n-1) produces the Laplacian L = [ n I_n-1 - n _n-1; - n _n-1^⊤ n(n-1) ]. This corresponds to the star graph where the vertex n is at the center, and every edge has weight n. There is nothing special about the choice of vertex n as the center of the star, by relabeling the vertices any vertex can be made the center. §.§ Proof of Theorem <ref> We start by recording the eigenvalues and an eigenbasis of the wheel graph in Table <ref>, which are well understood from the formulation of W_n+1 as a cone over the cycle C_n <cit.>. We use the following notation for the nontrivial eigenvectors of C_n arising from the representations of /n, j ∈ [⌊ (n-1)/2 ⌋]. ϕ_j :=[ 1; cos(2π j/n); ⋮; cos(2π (n-1)j/n) ], ψ_j := [ 0; sin(2π j/n); sin(2π 2j/n); ⋮; sin(2π (n-1)j/n) ]. We can also write the Laplacian of W_n+1 in terms of that of C_n. L_W_n+1 = ([ 3 -1 0 … 0 -1 -1 3 -1 0 … 0 ⋱ ⋱ ⋱ ⋱ -1 0 … 0 -1 3 -_n; -_n^⊤ n ]) = ([ L_C_n + I_n -_n; -_n^⊤ n ]), Let Φ be any orthonormal eigenbasis of L_W_n+1, let ł = 3 - 2 cos (2π/n), and let Λ' = ( 0 ,_n-1^⊤, n+1). We will show that L= łΦΛ' Φ^⊤∈_W_n+1(3), and moreover that L = ł L_K_1,n is the Laplacian of the claimed spanning tree. By construction, the first three eigenpairs of L are (0, ϕ_1),(ł, ϕ_2), (ł, ϕ_3) – this agrees with the first three eigenpairs of L_W_n+1. All other eigenvalues of L are ł or ł n, which are both at least ł. Taking Λ to be the matrix of eigenvalues of L_W_n+1, ΦΛ' Φ^⊤ = ΦŁΦ^⊤ - Φ(Ł - Λ' ) Φ^⊤ = L_W_n+1 - Φ(0, ł_2 -1, …, ł_n -1, 0) Φ^⊤ = L_W_n+1 - ([ L_C_n 0; 0^⊤ 0 ]) = ([ I_n -_n; -_n^⊤ n ]) = L_K_1,n. So we see that L = ł L_K_1,n, which satisfies all the constraints L_st≤ 0 for s≠ t, L_st = 0 for st ∉ E(W_n+1). Thus, L defines a 3-sparsifier of L_W_n+1. §.§ Proof of Theorem <ref> It is clear from the variational characterization and the quadratic form ⟨ f, L_G f ⟩ = ∑_(u,v) ∈ E w_uv (f(u) - f(v))^2 that removing edges cannot increase any eigenvalue. We will now prove the statement for k=2, the general argument is then identical via Rayleigh-Ritz and the variational characterization. We can therefore remove edges until we arrive at the spanning tree and compute its first non-trivial eigenvector ϕ_2. If it is now the case that for all edges in (u,v) ∈ E ∖ E_T that ϕ_2(u) = ϕ_2(v), then it means that adding these edges back in does not change the value of the quadratic form. This shows that ϕ_2 is also an eigenvector on G corresponding to the same eigenvalue as on the tree since it minimizes the quadratic form. This proves that the tree is a k=2 sparsifier for the choice of basis of eigenspaces given by {, ϕ_2 }. In general, we may now repeat the same argument for any k and the result follows. §.§ Proof of Lemma <ref> Suppose Λ = {ϕ_1, …, ϕ_k}. Then x ∈Λ implies that x = ∑_j=1^k β_j ϕ_j for β_j ∈. Therefore, xx^⊤ = (∑_j=1^k β_j ϕ_j)( ∑_j=1^k β_j ϕ_j^⊤) = ∑β_i^2 ϕ_i ϕ_i^⊤ + ∑_i ≠ jβ_i β_j (ϕ_i ϕ_j^⊤ + ϕ_j ϕ_i^⊤). Set s_ii := ⟨ A, ϕ_i ϕ_i^⊤⟩≥ 0 and s_ij = ⟨ A, ϕ_i ϕ_j^⊤ + ϕ_j ϕ_i^⊤⟩∈. These are constants that we can compute from A and a basis of Λ. Then x^⊤ A x = ⟨ A, xx^⊤⟩ = ∑β_i^2 ⟨ A, ϕ_i ϕ_i^⊤⟩ + ∑_i ≠ jβ_i β_j ⟨ A, ϕ_i ϕ_j^⊤ + ϕ_j ϕ_i^⊤⟩ = ∑β_i^2 s_ii + ∑_i ≠ jβ_i β_j s_ij. Setting y_ii := ⟨ B, ϕ_i ϕ_i^⊤⟩ and y_ij = ⟨ B, ϕ_i ϕ_j^⊤ + ϕ_j ϕ_i^⊤⟩∈ we have S_A(Λ) = { B ≽ 0 : Q_A(x) = Q_B(x) ∀ x ∈Λ} = { B ≽ 0 : ⟨ A, xx^⊤⟩ = ⟨ B, xx^⊤⟩ ∀ x ∈Λ} = { B ≽ 0 : ∑β_i^2 s_ii + ∑_i ≠ jβ_i β_j s_ij = ∑β_i^2 y_ii + ∑_i ≠ jβ_i β_j y_ij ∀ β∈^k } = { B ≽ 0 : y_ii = s_ii, y_ij = s_ij} = { B ≽ 0 : ⟨ B, ϕ_i ϕ_i^⊤⟩ = s_ii, ⟨ B, ϕ_i ϕ_j^⊤ + ϕ_j ϕ_i^⊤⟩ = s_ij}. §.§ Proof of Theorem <ref> For any point w ∈^E_≥ 0, L_G(w) is the Laplacian of a subgraph G̃ of G by the definition of L_G(w). In particular, it is already psd and has the edge sparsity of G built in. Therefore all points in P^Q(x)_G(k) are Q_k(x)-sparsifiers of G. The faces of P^Q(x)_G(k) correspond to a collection of inequalities w_i ≥ 0 holding at equality. Therefore, the sparsest sparsifiers lie on the smallest dimensional faces of P^Q(x)_G(k). It could be that some of the graphs on the boundary of P^Q(x)_G(k) are disconnected but they still satisfy the needed conditions on the quadratic form. Connected subgraphs of G must have Laplacians of rank n-1, and (L'_G(w))=0 if and only if (L_G(w)) < n-1. Since L'_G(w) is a principal minor of L_G(w), and L_G(w) ≽ 0 for all w in the polyhedron P^Q(x)_G(k), it must be that the polyhedron satisfies (L_G'(w)) ≥ 0. § CONCLUSION We conclude with a number of final remarks, comments and observations. A Dynamical Systems Motivation. Instead of considering a graph, one could think about the behavior of dynamical systems on graphs. A particularly natural example is the behavior of the heat equation: given a temperature f: V →ℝ, one would naturally ask that vertices that are surrounded by warmer vertices should heat up while vertices surrounded by colder vertices should get colder. This suggests that the temperature u:[0,∞] × V →ℝ is initially given by the function f, meaning u(0,v) = f(v) and, at time t, satisfies ∂ u/∂ t(v) = ∑_(v,w) ∈ E w_vw· ( u(t,w) - u(t,v)) which can be concisely written as ∂_t u = -(D-A) u or u_t = -L u. We also note since every edge is summed over twice, we have ∑_v ∈ V u(t,v) = ∑_v ∈ V u(0,v) and the total amount of caloric energy in the graph remains constant. Since L is diagonalizable, we deduce that if u(0,x) = ∑_i=1^n a_i ϕ_i u(t,x) = ∑_i=1^n a_i e^-λ_i tϕ_i which can be observed by noting that e^-λ_i tϕ_i is a solution when the initial condition is given by ϕ_i. The general case then follows from linearity. Since the exponential decay is larger for larger eigenvalues, we see that the behavior of u(t,x) for large values of t is, to leading order, well-approximated by u(t,x) = ∑_i=1^n a_i e^-λ_i tϕ_i ∼∑_i ≤ k^ a_i e^-λ_i tϕ_i with an error at scale ∼exp(-λ_k+1· t). In light of this, one can motivate the graph sparsification as one that preserves the long-time behavior of the heat equation as accurately as possible. A Cheeger Inequality Motivation. Cheeger's inequality <cit.> shows that the eigenvalue λ_2 (the `algebraic connectivity') gives bounds on how easily the graph can be decomposed into two graphs with relatively few edges running between them. This can be seen as an extension of the basic algebraic fact that λ_2 = 0 if and only if G is comprised of at least two disjoint graphs. Pushing the analogy, we know that λ_m = 0 if and only if G is comprised of at least m mutually disconnected graphs. One could now naturally wonder whether λ_m can say anything about how easy or hard it is to decompose a graph into m clusters with relatively few edges running between them. Results of this type have indeed recently been obtained, we refer to <cit.>. It is an immediate consequence of our sparsification approach that the number of connected components remains preserved once k > m. A Diffusion Map Interpretation. Laplacian eigenvectors of graphs have proven tremendously useful in obtaining low-dimensional embeddings ϕ: V →ℝ^d that reflect the overall structure of the graph. Famous methods of this type are Laplacian eigenmaps <cit.> or diffusion maps <cit.>. By the nature of our sparsification, the sparsifiers share the same low-dimensional embeddings. These mappings have been used successfully in dimensionality reduction exactly because these lower-dimensional embeddings tend to capture important information contained in the low-frequency part of the spectrum of the Laplacian. This could also be seen as an alternative (equivalent) motivation for our sparsification ansatz. Spectrally Extremal Examples. Theorem <ref> completely resolved the case of the cube {0,1}^d and shows a very natural type of stability result: the first two eigenvalues (and the (d+1)-dimensional associated eigenspaces) fix the cube completely. This is particularly satisfying insofar as one would not expect there to be any particular canonical subgraph that shares many of the same properties and symmetries: the cube graph is already perfect just the way it is. It would be interesting to see whether similar results exist for other families of graphs that arise in a similar fashion, in particular the example of Cayley graphs. Preserving other Laplacians. Throughout the paper, our goal was to preserve the low frequency eigenvalues and eigenvectors of the Kirchhoff Laplacian L= D-A. However, there are several other notions of Graph Laplacian that have a number of different properties, examples being I - D^-1/2AD^-1/2 and I - A D^-1. They preserve different types of properties and emphasize a somewhat different aspect of graph geometry. The Laplacian I - A D^-1, for example, is intimately connected to the behavior of the random walk and sparsifying while preserving the low-frequency spectrum of I - A D^-1 would lead to another way of preserving the local geometry. In these examples edge weights enter nonlinearly into the Laplacian while entering linearly into L=D-A. As such it is reasonable to assume that the case L=D-A is somewhat distinguished and perhaps allows for the most complete analysis. Nonetheless, it would be interesting to see whether the main idea that underlies our ansatz could be carried out in other, more nonlinear, settings. Approximate Preservation. The main philosophy that underlies our approach to sparsification is that small eigenvalues and eigenvectors encapsulate the overall global structure of a graph which suggests preserving them while making the graph more sparse. While there are some results of a purely algebraic nature, many of the results like Cheeger's inequality and statements of this nature, are continuous in the underlying parameters. This suggests that it would be quite feasible to preserve low frequency eigenvalues and eigenvectors approximately, although we don't develop it in this paper. Acknowledgments. S.S. is supported by the NSF (DMS-2123224) and the Alfred P. Sloan Foundation. R.T. is supported by the Walker Family Endowed Professorship at the University of Washington. C.B. was supported by the NSF (DMS-1929284) while in residence at ICERM in Providence, RI, during the Discrete Optimization Semester program. We thank Nikhil Srivastava for an inspiring conversation at the 2023 Joint Math Meetings. 10 babecki C. Babecki, Codes, cubes, and graphical designs. Journal of Fourier Analysis and Applications 27, no. 5 (2021), 1–34. babeckiShiroma C. Babecki and D. Shiroma, Eigenpolyope Universality and Graphical Designs, arXiv:2209.06349 thomas C. Babecki and R. Thomas, Graphical designs and Gale duality. Math. Programming (2022). spiel4 J. Batson, D. Spielman and N. Srivastava, Twice-Ramanujan sparsifiers. In Proceedings of the forty-first annual ACM symposium on Theory of computing (2009), 255–262. spiel3 J. Batson, D. Spielman, N. Srivastava and S.-H. Teng. Spectral sparsification of graphs: theory and algorithms. Communications of the ACM 56, no. 8 (2013), 87–94. niyogi M. Belkin and P. Niyogi, Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation 15 (2003), 1373–1396. bhat R. Bhattacharjee, G. Dexter, C. Musco, A. Ray and D. P. Woodruff, Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra. arXiv:2305.05826. BrouwerHaemersSpectra A. E. Brouwer and W. H. Haemers, Spectra of Graphs. Springer, New York, 2012. cheeger J. Cheeger, A lower bound for the smallest eigenvalue of the Laplacian, Proc. of Princeton Conf. in Honor of Prof. S. Bochner (1969) 195–199. cheng X. Cheng, G. Mishne and S. Steinerberger, The Geometry of Nodal Sets and Outlier Detection, Journal of Number Theory 185 (2018), 48–64. cheng2 X. Cheng, M. Rachh and S. Steinerberger, On the Diffusion Geometry of Graph Laplacians and Applications, Appl. Comp. Harm. Anal., 46 (2019), 674–688. coif R. Coifman and S. Lafon, Diffusion maps. Applied and computational harmonic analysis, 21 (2006), 5–30. dec M. K. De Carli Silva, N. Harvey and C. M. Sato. Sparse sums of positive semidefinite matrices. ACM Transactions on Algorithms (TALG) 12, no. 1 (2015), 1–17. adela A. DePavia and S. Steinerberger, Spectral Clustering Revisited: Information Hidden in the Fiedler Vector, Foundations of Data Science 3 (2021), 225–249. dey T. Dey, P. Peng, A. Rossi and A. Sidiropoulos, Spectral concentration and greedy k-clustering. Comput. Geom. 76 (2019), 19–32. golubev K. Golubev, Graphical designs and extremal combinatorics, Linear Algebra and its Applications 604 (2020), 490–506. hoory-linial-wigderson S. Hoory, N. Linial and A. Wigderson, Expander graphs and their applications, Bull. Amer. Math. Soc. 43:4 (2006), 439–561. kwok T. Kwok, L.C. Lau and Y. T. Lee, Improved Cheeger's inequality and analysis of local graph partitioning using vertex expansion and expansion profile, SIAM J. Comput. 46 (3) (2017), 890–910. lee J. Lee, S. Oveis Gharan and L. Trevisan, Multi-way spectral partitioning and higher-order Cheeger inequalities. STOC'12—Proceedings of the 2012 ACM Symposium on Theory of Computing, 1117–1130, ACM, New York, 2012. liu S. Liu, Multi-way dual Cheeger constants and spectral bounds of graphs. Adv. Math. 268 (2015), 306–338. merris R. Merris, Laplacian graph eigenvectors. Linear Algebra Appl. 278 (1998), 221–236. spiel1 D. Spielman and S.-H. Teng, Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. Proceedings of the thirty-sixth annual ACM Symposium on Theory of Computing. 2004. spiel2 D. Spielman and S.-H. Teng, Spectral sparsification of graphs. SIAM Journal on Computing, 40 (2011), 981–1025. steinerberger S. Steinerberger, Generalized designs on graphs: sampling, spectra, symmetries. Journal of Graph Theory, 93 (2020), 253–267. steinerberger2 S. Steinerberger, Spectral limitations of quadrature rules and generalized spherical designs. International Mathematics Research Notices (2021), 12265–12280 steinosc S. Steinerberger, The product of two high-frequency Graph Laplacian eigenfunctions is smooth, Discrete Mathematics 346, no. 3 (2023), 113246. lax U. von Luxburg, A tutorial on spectral clustering. Stat. Comput. 17, no.4 (2007), 395–416.
http://arxiv.org/abs/2306.10831v1
20230619103002
Observation of Wannier-Stark ladder beyond mobility edge in disorder-free mosaic lattices
[ "Jun Gao", "Ivan M. Khaymovich", "Adrian Iovan", "Xiao-Wei Wang", "Govind Krishna", "Ze-Sheng Xu", "Emrah Tortumlu", "Alexander V. Balatsky", "Val Zwiller", "Ali W. Elshaari" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "physics.optics", "quant-ph" ]
[email protected] [email protected] [email protected] Quantum transport and localization are fundamental concepts in condensed matter physics. It is commonly believed that in one-dimensional systems, the existence of mobility edges is highly dependent on disorder. Recent theoretical works have shown that a modulated mosaic model could manifest an exact mobility edge even without quenched disorder. Here, we experimentally implement such disorder-free mosaic photonic lattices using silicon photonics platform. By creating a synthetic electric field, we could observe energy-dependent coexistence of both extended and localized states in the system. The Wannier-Stark ladder emerges when the resulting potential is strong enough, and can be directly probed by exciting different spatial modes of the lattice. Our studies bridge the gap between mobility edge and Wannier-Stark localization. Our developed photonic devices hold the potential to encode high dimensional quantum resources with compact and robust structures. Observation of Wannier-Stark ladder beyond mobility edge in disorder-free mosaic lattices Ali W. Elshaari July 31, 2023 ========================================================================================= The phenomenon of localization of electronic Bloch waves was first studied by P. W. Anderson <cit.>, where electronic wave functions become exponentially localized due to random disorder. For a 3D system, there exists a mobility edge (ME), which separates the localized and extended states by a critical energy as a function of disorder level <cit.>. Lower dimensional models, by replacing random disorder with a quasiperiodic potential, for example, the Aubry-André Harper model, can host both localized and extended states, showing an energy-independent critical transition at a self-dual point <cit.>. By further incorporating long-range hopping <cit.>, varying on-site potential <cit.>, breaking self duality <cit.>, applying periodic drive <cit.>, considering flat-band systems <cit.>, or introducing a quasiperiodic potential in mosaic lattices <cit.>, the modified model could support an energy-dependent ME in the energy spectrum. So far, the existence of ME in low dimensional systems has been experimentally confirmed with ultra-cold atomic lattices <cit.>, it is natural to ask if random or quasiperiodic potential is essential for a system to manifest MEs. A recent study <cit.> shows that a disorder-free 1D mosaic lattice with Stark effect could exhibit an exact ME, where such disorder-free localized states can be tracked back to the famous Wannier-Stark lattice <cit.>. By introducing a static electric field to the lattice, the resulting potential may lead to exponential localization of the wave function. With strong enough fields, the Wannier-Stark ladder is recovered, and each energy level corresponds to a localized eigenstate, while all the survived extended states live at small enough energies. The coexistence of extended and localized states clearly draws a boundary, where the localization transition occurs. In this Letter, we experimentally realize such disorder-free mosaic model, observe energy-dependent coexistence of localized and extended states, and probe the Wannier-Stark ladder in the photonic mosaic lattices. By utilizing a Si_3N_4 waveguide array with engineered on-site potentials and nearest-neighbor hopping terms <cit.>, we create a synthetic constant electric field that localizes part of the eigenstates in real space. The light intensity distribution is directly probed through a single-shot top imaging with a fan-out structure and grating couplers. In the strongly localized regime, we can directly excite a single mode in the lattice to probe the localization eigenstates, with over 98% fidelities between experimentally measured states and the corresponding eigenstates. To further verify the coexistence of extended and localized states in the system, we excite the lattice in both weak and strong force regime, and compare the inverse participation ratio (IPR). By calculating the overlap weights with respect to the localized eigenstates, we reconstruct the Wannier-Stark ladder in the strong field regime, demonstrating equally spaced energy levels as predicted by the theory. Our work extends the understanding of the Wannier-Stark localization and ME physics, in addition to providing a new platform for studying the extended-localization transition, which offers a more scalable and precise approach to modulate the lattice parameters at room temperature. Our results offer a promising avenue for on-chip high-dimensional quantum information encoding <cit.>, and will also inspire particle statistics induced quantum correlation in ME studies by using multi-photon state excitation <cit.>. Here, we consider a 1D mosaic lattice with Stark effect, which can be described by the following Hamiltonian H=-J ∑_n(c_n^† c_n+1+ H.c. )+∑_n ϵ_n c_n^† c_n, ϵ_n= F n, n= mκ, with integer m, 0, otherwise, where c_n is the annihilation operator at site n, J is the nearest-neighbor hopping term, ϵ_n is the on-site modulated potential, which is further determined by a constant force F and modulation period κ. This model introduces a Stark potential on every κth site, and the value is linearly increasing along the site number. By studying separately the localized wave functions, having most of their weight at n=mκ sites, exponentially decaying with distance, and large energies E_m ≃ F m κ≫ 1 with the perturbation theory in J/E_m and the corresponding effective Hamiltonian with the projected out above localized states <cit.>, we have improved the results of <cit.>, finding the following ME at κ > 1 E = E_ME = max[2J, (e J/F κ L)^1/(κ-1)] . In <cit.> we have also provided the exact plain-wave states with the momenta p=qπ L/κ, q=1,…, κ and antinodes at n=mκ, as well as shown that the rest extended states have reduced values at n=mκ. The plain-wave states are unaffected by the potential and form the subband of the width 2J, with energies E_q = -2J cos(π q/κ). Equation (<ref>) implies that in the strong field regime or large system size, the ME will converge to 2J, and all extended states share the property of small eigenstate coefficients at the sites n=mκ with the potential <cit.>, while the localized eigenstates characterized by the exponential decay and IPR form a Wannier-Stark ladder, as shown in Fig. <ref>(a). To experimentally realize the mosiac lattice, we first provide elaboration on the design of the photonic lattice required to simulate the synthetic static electric field, which corresponds to a linearly increasing on-site potential. We use full-vectorial mode solver <cit.> to numerically simulate the propagation constants of the waveguide. To illustrate this in <cit.>, we show numerical simulations of the on-site energy for a single isolated waveguide with a fixed height of 250 nm and varying the width, all relative to the energy at a width of 500 nm. The fundamental transverse electric (TE) mode is used in all the calculations. Our chosen operating point enables a significant tuning range around the 500 nm width, while also ensuring single-mode operation. We then express the on-site potential V as a third order polynomial expansion of the waveguide width as follows V(x)=∑_m=0^3 a_mx^m, which helps us to back calculate the waveguide width at the given modulation amplitudes. In our experimental design, we have chosen a total number of 11 waveguides, κ=2, a constant force of F/J=1 and 4 along with a nearest-neighbor hopping term of J=0.01. This particular combination has allowed us to operate in both the weak and strong field limit of Fig. <ref>(a), thereby enabling clear identification of both the extended states and the Wannier-Stark ladder. By numerically solving Eq. (<ref>) to retrieve the widths of the modulated waveguides, we could engineer the on-site energy to increase linearly every second waveguide, as expressed by Eq. (<ref>). After determining the waveguide widths that render the on-site potential throughout the lattice, it is important to mention that in order to maintain a constant hopping term of J=0.01 while modulating the on-site energy, the gaps between the waveguides must be carefully designed. A dedicated section in <cit.> is reserved for discussing the employed coupled mode theory and the mode-solver procedure for asymmetric coupled waveguides. It also provides detailed information on the device parameters used for the nano-fabrication. Another section focuses on the device fabrication and the proximity correction of the electron beam lithography to realize the dense structure of the photonic lattice <cit.>. Fig. <ref>(b) shows a scanning electron microscope (SEM) image of the fabricated device, it comprises a photonic mosaic lattice (highlighted in light green), in conjunction with a fan-out configuration of all the lattice sites. The fan-out culminates in grating couplers (highlighted in light red) that facilitate top imaging for measuring the intensity in each lattice site. To examine the excitation dynamics in the lattice, devices with different propagation lengths are nano-fabricated in intervals of 200 µm. In order to avoid any cross-talk between the excitation sites, two uniform adjacent lattices are employed for each length, with one tailored to excite the even sites and the other to excite the odd sites. Coherent laser centered at 786 nm is employed for the excitation of the photonic mosaic lattice. The light is coupled to the nanophotonic chip via a lensed fiber attached to a 6-axis nano-positioning stage. The TE mode of the waveguide is selectively excited using a fiber-based 3-paddle polarization-controller at the input. The output light intensity at each lattice site is captured by the grating coupler termination using top imaging with a 40X objective, and recorded using a charge-coupled device (CCD) camera. Further information on the design and numerical simulations of the fabricated grating couplers, in addition to the top images acquisition and analysis routine, can be found in <cit.>. In all the measurements, single-site excitation of the lattice is utilized. Experimental measurements presented in Fig. <ref>(a) exhibit the excitation of the 4th waveguide in the lattice, corresponding to the first localized state in the ladder of Fig. <ref> above the ME in the strong field regime. The intensity distribution of light across all lattice sites is monitored at regular intervals of 200 µm. Evidently, the majority of the light intensity is concentrated within the 4th site, in agreement with the theoretical predictions of the Wannier-Stark ladder. The model predicts the existence of eigenstates that are spatially localized as light propagates within the lattice. Numerical simulation of light dynamics, displayed in the central panel, is conducted utilizing the experimental design parameters, whereby a single-site excitation with considerable overlap with the Wannier-Stark ladder's eigenstates reveals the expected spatial localization. Similarly, we measure all the other localized eigenstates above the ME with single-site excitation. In order to assess the quality of the experimental results, we evaluate the fidelity between the measured output state intensity distribution I^exp and the corresponding theoretical ladder states I^eigen, given by the following formula Fidelity = √(∑_n I_n^exp· I_n^eigen)/∑_nI_n^exp∑_nI_n^eigen. We compare the fidelity of each excitation at different propagation distances z. Fig. <ref>(b) shows the fidelity for measured four Wannier-Stark ladder states beyond the ME, as a function of the propagation length. At a strong field regime characterized by F/J=4, the measured fidelity surpasses 98%. This finding is significant as it demonstrates the reliability of the theoretical models used to describe the system under investigation. Additionally, the high fidelity indicates that a single-site excitation is expected to yield an excellent agreement between the output state distribution measured experimentally and the corresponding theoretical Wannier-Stark ladder states. As a comparison, Figure <ref>(a) illustrates the pronounced difference in the intensity profile of states below and above the mobility edge both at weak (top) and strong (bottom) fields. Specifically, we present the measured light distribution after 800 µm propagation length, for even (4th, 6th, 8th) and odd (3rd) lattice sites. Notably, for the case of the even site excitation, which strongly overlaps with a Wannier-Stark state far away from the mobility edge, E_m ≫ J, the light remains confined within the excitation site, in contrast to the odd case where the light spreads to the neighboring lattice sites. This observation underscores the unique features of the Wannier-Stark ladder states and their critical role in the localization in the lattice system. In order to further explore the localization properties of the lattice, we have computed the IPR, which is a well-established measure in the field of condensed matter physics, IPR=∑_j |ψ(j,t)|^4 of the propagated single-site excitation ψ(j,t). In particular, it provides a quantitative measure of the extent to which an eigenstate is spread out over the entire system or is confined to a small region <cit.>. The resulting IPR values are displayed in Fig. <ref>(b), where we observe that even waveguide excitations approach the limit of full localization (IPR=1), indicating the highly confined nature of these states. In contrast, the third waveguide excitation exhibits lower IPR values, indicating a more extended character of this state. The distinct difference between the weak and strong field regime clearly shows a strong evidence for the existence of energy-dependent ME in the photonic mosaic lattice with Stark effect. With the measured localized state intensity distributions (for example, at the propagation distance of 800 µm), we can reconstruct the Wannier-Stark ladder in the energy spectrum. To achieve this, we can calculate the overlap weight of different eigenenergies by projecting the output distribution over all 11 eigenstates. Note that the measured intensity is strongly localized, thus we can focus on the localized site and ignore the phase information of the other lattice sites. The weight {w_i} is given by w_i=|⟨ϕ_i|√(I^exp)⟩|^2, here |ϕ_i⟩ represents the ith eigenstate. As shown in Fig. <ref>(c), we list all the weights in a color map for four Wannier-Stark ladder states. The result clearly reveals the dominant overlap in each energy level, and forms a ladder structure with equally spaced energy levels. We can directly map the Wannier-Stark ladder to different spatially localized modes. In conclusion, our experimental investigation of the Wannier-Stark ladder and ME in a disorder-free photonic lattice has provided new insights into the fundamental concepts of quantum transport and localization in condensed matter physics. Our use of a Si_3N_4 waveguide array with an engineered on-site potential and nearest-neighbor hopping rate allows us to create a synthetic electric field that localizes part of the eigenstates in real space. By probing the light intensity at each lattice site through a single-shot approach using a fan-out structure and grating couplers, we are able to observe the coexistence of both extended and localized states in the system. Our experimental results have confirmed recent theoretical works that suggest the existence of mobility edges in lower dimensional systems, which are not solely dependent on disorder, and have demonstrated the emergence of the Wannier-Stark ladder when the electric field is strong enough. Moreover, our studies provide a bridge between ME and Wannier-Stark localization, further advancing our understanding of these important concepts in condensed matter physics. The potential applications of our developed photonic devices are vast and promising, as they offer a compact and robust means of encoding high dimensional quantum resources, for example, the structure can be used to encode dual-rail qubits or even qudits<cit.>. By extending the understanding of the Wannier-Stark ladder and MEs in disorder-free photonic lattices, we have opened up new avenues for the development of novel photonic devices based on these fundamental concepts. A.W.E acknowledges support Knut and Alice Wallenberg (KAW) Foundation through the Wallenberg Centre for Quantum Technology (WACQT), Swedish Research Council (VR) Starting Grant (Ref: 2016-03905), and Vinnova quantum kick-start project 2021. V.Z. acknowledges support from the KAW and VR. Work at Nordita was supported by European Research Council under the European Union Seventh Framework ERS-2018-SYG HERO, KAW 2019.0068 and the University of Connecticut. 99 Anderson1958 P. W. Anderson, Absence of diffusion in certain random lattices, Phys. Rev. 109, 1492 (1958). Evers2008 F. Evers and A. D. Mirlin, Anderson transitions, Rev. Mod. Phys. 80, 1355 (2008). AAH1 S. Aubry and G. André, Analyticity breaking and Anderson localization in incommensurate lattices, Ann. Israel Phys. Soc 3, 133 (1980). AAH2 P. G. Harper, Single band motion of conduction electrons in a uniform magnetic field, Proc. Phys. Soc., London Sect. A 68, 874 (1955). AAH3 Y. Lahini, R. Pugatch, F. Pozzi, M. Sorel, R. Morandotti, N. Davidson and Y. Silberberg, Observation of a localization transition in quasiperiodic photonic lattices, Phys. Rev. Lett. 103, 013901 (2009). Prange1983 R. E. Prange, D. R. Grempel and S. Fishman, Wave functions at a mobility edge: An example of a singular continuous spectrum, Phys. Rev. B 28, 7370 (1983). Biddle2009 J. Biddle, B. Wang, D. J. Priour Jr and S. Das Sarma, Localization in one-dimensional incommensurate lattices beyond the Aubry-André model, Phys. Rev.A 80, 021603(R) (2009). Biddle2010 J. Biddle and S. Das Sarma, Predicted mobility edges in one-dimensional incommensurate optical lattices: an exactly solvable model of Anderson localization, Phys. Rev. Lett. 104, 070601 (2010). Gopalakrishnan2017 S. Gopalakrishnan, Self-dual quasiperiodic systems with power-law hopping, Phys. Rev. B 96, 054202 (2017). Deng2019 X. Deng, S. Ray, S. Sinha, G. V. Shlyapnikov and L. Santos, One-dimensional quasicrystals with power-law hopping, Phys. Rev. Lett. 123, 025301 (2019). Saha2019 M. Saha, S. K. Maiti and A. Purkayastha, Anomalous transport through algebraically localized states in one dimension, Phys. Rev. B 100, 174201 (2019). Deng2018 X. Deng, V. E. Kravtsov, G. V. Shlyapnikov and L. Santos, Duality in power-law localization in disordered one-dimensional systems, Phys. Rev. Lett. 120, 110602 (2018). Nosov2019 P. A. Nosov, I. M. Khaymovich and V. E. Kravtsov, Correlation-induced localization, Phys. Rev. B 99, 104203 (2019). Kutlin2020 A. G. Kutlin and I. M. Khaymovich, Renormalization to localization without a small parameter, SciPost Phys. 8, 049 (2020). Deng2022 X. Deng, A. L. Burin, I. M. Khaymovich, Anisotropy-mediated reentrant localization, SciPost Phys. 13, 116 (2022). DasSarma1988 S. Das Sarma, S. He and X. C. Xie, Mobility edge in a model one-dimensional potential, Phys. Rev. Lett. 61, 2144 (1988). DasSarma1990 S. Das Sarma, S. He and X. C. Xie, Localization, mobility edges, and metal-insulator transition in a class of one-dimensional slowly varying deterministic potentials, Phys. Rev. B 41, 5544 (1990), Ganeshan2015 S. Ganeshan, J. H. Pixley and S. Das Sarma, Nearest neighbor tight binding models with an exact mobility ddge in one dimension, Phys. Rev. Lett. 114, 146601 (2015). Li2017 X. Li, X. Li and S. Das Sarma, Mobility edges in one dimensional bichromatic incommensurate potentials, Phys. Rev. B 96, 085119 (2017). Yao2019 H. Yao, H. Khoudli, L. Bresque and L. Sanchez-Palencia, Critical behavior and fractality in shallow one-dimensional quasiperiodic potentials, Phys. Rev. Lett. 123, 070405 (2019). Yin2020 H. Yin, J. Hu, A.-C. Ji, G. Juzeliunas, X.-J. Liu and Q. Sun, Localization driven superradiant instability, Phys. Rev. Lett. 124, 113601 (2020). Roy2018 S. Roy, I. M. Khaymovich, A. Das and R. Moessner, Multifractality without fine-tuning in a Floquet quasiperiodic chain, SciPost Phys. 4, 025 (2018). Danieli2015 C. Danieli, J. D. Bodyfelt, and S. Flach, Flat-band engineering of mobility edges, Phys. Rev. B, 91, 235134 (2015). Ahmed2022 A. Ahmed, A. Ramachandran, I. M. Khaymovich and A. Sharma, Flat band based multifractality in the all-band-flat diamond chain, Phys. Rev. B 106, 205119 (2022). Lee2022 S. Lee, A. Andreanov and S. Flach, Critical-to-insulator transitions and fractality edges in perturbed flat bands, Phys. Rev. B 107, 014204 (2022). Kim2022 Y. Kim, T. Čadež, A. Andreanov and S. Flach, Flat band induced metal-insulator transitions for weak magnetic flux and spin-orbit disorder, arXiv:2211.09410 (2022). Wang2020 Y. Wang, X. Xia, L. Zhang, H. Yao, S. Chen, J. You, Q. Zhou and X.-J. Liu, One-dimensional quasiperiodic mosaic lattice with exact mobility edges, Phys. Rev. Lett. 125, 196604 (2020). Liu2021 Y. Liu, Y. Wang, X.-J. Liu, Q. Zhou and S. Chen, Exact mobility edges, PT -symmetry breaking, and skin effect in onedimensional non-Hermitian quasicrystals, Phys. Rev. B 103, 014203 (2021). Ucold1 H. P. Lüschen, S. Scherg, T. Kohlert, M. Schreiber, P. Bordia, X. Li, S. Das Sarma and I. Bloch, Single-particle mobility edge in a one-dimensional quasiperiodic optical lattice, Phys. Rev. Lett. 120, 160404 (2018). Ucold2 F. A. An, K. Padavić, E. J. Meier, S. Hegde, S. Ganeshan, J. H. Pixley, S. Vishveshwara and B. Gadway, Interactions and mobility edges: observing the generalized Aubry-André model, Phys. Rev. Lett. 126, 040603 (2021). Ucold3 Y. Wang, J.-H. Zhang, Y. Li, J. Wu, W. Liu, F. Mei, Y. Hu, L. Xiao, J. Ma, C. Chin and S. Jia, Observation of interaction-induced mobility edge in an atomic Aubry-André wire, Phys. Rev. Lett. 129, 103401 (2022). Dwiputra2022 D. Dwiputra and F. P. Zen, Single-particle mobility edge without disorder, Single-particle mobility edge without disorder, Phys. Rev. B 105, L081110 (2022). Wannier1962 G. H. Wannier, Dynamics of band electrons in electric and magnetic fields, Rev. Mod. Phys. 34, 645 (1962). Fukuyama1973 H. Fukuyama, R. A. Bari, and H. C. Fogedby, Tightly bound electrons in a uniform electric field, Phys. Rev. B 8, 5579 (1973). Emin1987 D. Emin and C. F. Hart, Existence of Wannier-Stark localization, Phys. Rev. B 36, 7353 (1987). TPRev1 L. Lu, J. D. Joannopoulos and M. Soljacic, Topological photonics, Nat. Photonics 8, 821 (2014). TPRev2 T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg and I. Carusotto, Topological photonics, Rev. Mod. Phys. 555, 015006 (2019). TPRev3 D. Smirnova, D. Leykam, Y. Chong and Y. Kivshar, Nonlinear topological photonics, Appl. Phys. Rev. 7, 021306 (2020). TPRev4 D. T. H. Tan, Topological silicon photonics, Adv. Photonics Res. 2, 2100010 (2021). InteP J. Wang, F. Sciarrino, A. Laing and M. G. Thompson, Integrated photonic quantum technologies, Nat. Photonics 14, 273 (2020). Elshaari2020 A. W. Elshaari, W. Pernice, K. Srinivasan, O. Benson and V. Zwiller, Hybrid integrated quantum photonic circuits, Nat. Photonics 14, 285 (2020). Chang2023 J. Chang, J. Gao, I. Esmaeil Zadeh, A. W. Elshaari and V. Zwiller, Nanowire-based integrated photonics for quantum information and quantum sensing, Nanophotonics 12, 339 (2023). Zhang2021 M. Zhang, L. Feng, M. Li, Y. Chen, L. Zhang, D. He, G. Guo, G. Guo, X. Ren and D. Dai, Supercompact photonic quantum logic gate on a silicon chip, Phys. Rev. Lett. 126, 130501 (2021). Onchipqudit Y. Chi, J. Huang, Z. Zhang, J. Mao, Z. Zhou, X. Chen, C. Zhai, J. Bao, T. Dai, H. Yuan, M. Zhang, D. Dai, B. Tang, Y. Yang, Z. Li, Y. Ding, L. K. Oxenløwe, M. G. Thompson, J. L. O’Brien, Y. Li, Q. Gong and J. Wang, A programmable qudit-based quantum processor, Nat Commun 13, 1166 (2022). Lahini2010 Y. Lahini, Y. Bromberg, D. N. Christodoulides and Y. Silberberg, Quantum correlations in two-particle Anderson localization, Phys. Rev. Lett. 105, 163905 (2010). Segev2013 M. Segev, Y. Silberberg and D. N. Christodoulides, Anderson Localization of Light, Nat. Photonics 7, 197 (2013). Crespi2013 A. Crespi, R. Osellame, R. Ramponi, V. Giovannetti, R. Fazio, L. Sansoni, F. De Nicola, F. Sciarrino and P. Mataloni, Anderson localization of entangled photons in an integrated quantum walk, Nat. Photonics 7, 322–328 (2013). SM See Supplemental Material at [URL will be inserted by publisher] for details of calculations of the probabilities of resonances, which includes Refs. <cit.>. silicon L. Chrostowski and M. Hochberg, Silicon Photonics Design: From Devices to Systems. (Cambridge University Press, 2015). lumerical FDTD: 3D Electromagnetic Simulator, Lumerical Inc. Gao2022 J. Gao, Z.-S. Xu, D. A Smirnova, D. Leykam, S. Gyger, W.-H. Zhou, S. Steinhauer, V. Zwiller and A. W. Elshaari, Observation of Anderson phase in a topological photonic circuit, Phys. Rev. Research 4, 033222 (2022). Xu2022 Z.-S. Xu, J. Gao, G. Krishna, S. Steinhauer, V. Zwiller and A. W. Elshaari, Direct measurement of topological invariants in photonic superlattices, Photon. Res. 10, 2901-2907 (2022). gao2023scalable J. Gao, L. Santos, G. Krishna, Z.-S. Xu, A. Iovan, S. Steinhauer, O. Gühne, P. Poole, D. Dalacu, V. Zwiller and A. W. Elshaari, Scalable generation and detection of on-demand W states in nanophotonic circuits, Nano Letters 23, 5350-5357 (2023). moody20222022 G. Moody, V. J. Sorger, D. J. Blumenthal, P. W. Juodawlkis, W. Loh, C. Sorace-Agaskar, A. E. Jones, K. C. Balram, J. C. Matthews, A. Laing, M. Da- vanco, L. Chang, J. E. Bowers, N. Quack, C. Gal- land, I. Aharonovich, M. A. Wolff, C. Schuck, N. Sin- clair, M. Lonˇcar, T. Komljenovic, D. Weld, S. Mookher- jea, S. Buckley, M. Radulaski, S. Reitzenstein, B. Pin- gault, B. Machielse, D. Mukhopadhyay, A. Akimov, A. Zheltikov, G. S. Agarwal, K. Srinivasan, J. Lu, H. X. Tang, W. Jiang, T. P. McKenna, A. H. Safavi-Naeini, S. Steinhauer, A. W. Elshaari, V. Zwiller, P. S. Davids, N. Martinez, M. Gehl, J. Chiaverini, K. K. Mehta, J. Romero, N. B. Lingaraju, A. M. Weiner, D. Peace, R. Cernansky, M. Lobino, E. Diamanti, L. T. Vidarte and R. M. Camacho, 2022 Roadmap on integrated quantum photonics, Journal of Physics: Photonics 4,1, 012501 (2022). elshaari2021deterministic A. W. Elshaari, A. Skalli, S. Gyger, M. Nurizzo, L. Schweckert, I. E. Zadeh, M. Svedendahl, S. Steinhauer and V. Zwiller, Deterministic integration of hBN emitter in silicon nitride photonic waveguide, Advanced Quantum Technologies 4, 6, 2100032 (2021). elshaari2017chip A. W. Elshaari, I. E. Zadeh, A. Fognini, M. E. Reimer, D. Dalacu, P. Poole, V, Zwiller and K. D. Jons, On-chip single photon filtering and multiplexing in hybrid quantum photonic circuits, Nature communications 8, 1, 379 (2017). nevlacsil2018broadband S. Nevlacsil, M. Eggeling, P. Muellner, G. Koppitsch, M. Sagmeister, J. Kraft and R. Hainberger, Broadband sin asymmetric directional coupler for 840 nm operation, OSA Continuum 1, 1324 (2018). Nosov2019mixtures P. A. Nosov and I. M. Khaymovich, Robustness of delocalization to the inclusion of soft constraints in long-range random models, Phys. Rev. B 99, 224208 (2019). Kutlin2021emergent A. G. Kutlin and I. M. Khaymovich, Emergent fractal phase in energy stratified random models, SciPost Phys. 11, 101 (2021). Motamarri2022RDM V. R. Motamarri, A. S. Gorsky and I. M. Khaymovich, Localization and fractality in disordered Russian Doll model, SciPost Phys. 13, 117 (2022).
http://arxiv.org/abs/2306.01413v1
20230602100449
Theory of magnetic field-stabilized compact skyrmions in thin film ferromagnets
[ "Anne Bernand-Mantel", "Sarah Barnova", "Anaïs Fondet", "Cyrill B. Muratov", "Theresa M. Simon" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Université de Toulouse, Laboratoire de Physique et Chimie des Nano-Objets, UMR 5215 INSA, CNRS, UPS, 135 Avenue de Rangueil, F-31077 Toulouse Cedex 4, France Centre d’Elaboration de Matériaux et d’Etudes Structurales, CEMES-CNRS, 29 rue Jeanne Marvig, 31055 Toulouse, France [email protected] Université de Toulouse, Laboratoire de Physique et Chimie des Nano-Objets, UMR 5215 INSA, CNRS, UPS, 135 Avenue de Rangueil, F-31077 Toulouse Cedex 4, France Université de Toulouse, Laboratoire de Physique et Chimie des Nano-Objets, UMR 5215 INSA, CNRS, UPS, 135 Avenue de Rangueil, F-31077 Toulouse Cedex 4, France Centre d’Elaboration de Matériaux et d’Etudes Structurales, CEMES-CNRS, 29 rue Jeanne Marvig, 31055 Toulouse, France Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, New Jersey 07102, USA Departimento di Matematica, Università di Pisa, Largo B. Pontecorvo, 5, 56127 Pisa, Italy Institut für Analysis und Numerik, Westfälische Wilhelms-Universität Münster, Einsteinstr. 62, 48149 Münster, Germany We present a micromagnetic theory of compact magnetic skyrmions under applied magnetic field that accounts for the full dipolar energy and the interfacial Dzyaloshinskii-Moryia interaction (DMI) in the thin film regime. Asymptotic analysis is used to derive analytical formulas for the parametric dependence of the skyrmion size and rotation angle, as well as the energy barriers for collapse and bursting, two processes that lead to a finite skyrmion lifetime. We demonstrate the existence of a new regime at low DMI, in which the skyrmion is stabilized by a combination of non-local dipolar interaction and a magnetic field applied parallel to its core, and discuss the conditions for an experimental realization of such field-stabilized skyrmions. Theory of magnetic field-stabilized compact skyrmions in thin film ferromagnets Theresa M. Simon July 31, 2023 ================================================================================ *Introduction. Since the first real-space observation of skyrmions in a chiral magnet <cit.>, there have been a great number of theoretical and experimental studies devoted to skyrmionics <cit.>. These developments are motivated by the remarkable compactness, stability and tunability of skyrmions, which makes them attractive candidates for future applications in information technology <cit.>. Starting with the first experiments in chiral magnets <cit.>, it was observed that skyrmions spontaneously appear in a certain range of non-zero magnetic field in various bulk materials and thin-film heterostructures <cit.>. Indeed, while the ground state of chiral magnets is the helical state at zero magnetic field, the ferromagnetic ground state is restored when a sufficiently strong magnetic field is applied. However, isolated skyrmions and skyrmion lattices may exist as metastable states under applied magnetic fields, as predicted in the late 80's <cit.>. The precise dependence of an isolated skyrmion size on the applied magnetic field was investigated numerically <cit.>. In the isolated skyrmion regime, the size of a skyrmion decreases (increases) with increasing (decreasing) magnetic field applied antiparallel to the skyrmion core <cit.>. While the antiparallel configuration has been widely investigated theoretically and experimentally <cit.>, the parallel configuration was considered in only a handful of studies <cit.>. The reason is that most studies on skyrmions are carried out in the strong DMI regime, where isolated skyrmions are very easily destabilized by a magnetic field applied parallel to the skyrmion core via the so-called “bursting” phenomenon, whereby the magnetic layer is reversed to the uniform state as the skyrmion radius goes to infinity. The position of the bursting line on the skyrmion stability diagram was estimated by numerical minimization in previous theoretical works <cit.>. A solution to avoid this instability is to confine the skyrmion in a dot to extend its stability to a wider range of applied fields <cit.>. Another possibility is to work in the low DMI regime. In this regime, it was recently demonstrated <cit.> that the non-local dipolar interaction that has been classically neglected <cit.>, may become comparable to the DMI and play a role in skyrmion stabilization. Several works based on approximate models suggest that the non-local dipolar interaction modifies the range of existence of isolated skyrmions in applied magnetic fields <cit.>, but this regime has remained largely unexplored. In this Letter, we carry out an investigation of isolated compact skyrmions under an applied magnetic field in the thin film and low DMI regime, taking into account the full stray field. We carry out an asymptotic analysis and obtain analytical formulas predicting the skyrmion radius, angle, as well as the collapse and bursting energy barriers as functions of the system parameters. We obtain a skyrmion stability diagram, which reveals a different response to a parallel field at low DMI strength compared to the existing phase diagrams <cit.>. This modification of the skyrmion phase diagram due to stray field effects is confirmed by direct micromagnetic simulations. We emphasize that in this regime the parallel field enables to increase the skyrmion size and stability and discuss possible routes to observe field-stabilized skyrmions experimentally in conventional skyrmion materials. *Model. We consider a ferromagnetic thin film with perpendicular magnetic anisotropy and interfacial DMI. Following our previous works <cit.> with the addition of the Zeeman energy term, the micromagnetic energy of a magnetization : ^2 →𝕊^2 reduces to (see <cit.> for details): E() = ∫_^2( |∇|^2 + (Q - 1) ||^2 ) d^2 r - ∫_^2( 2 h(m_ + 1) + 2 κ·∇) d^2 r - δ 8 π∫_^2∫_^2((𝐫) - (𝐫'))^2 |𝐫 - 𝐫'|^3 d^2 r d^2 r' + δ 4 π∫_^2∫_^2∇·(𝐫) ∇· (𝐫') | 𝐫 - 𝐫'| d^2 r d^2 r', where ∈^2 is the in-plane component of , i.e., the component perpendicular to the out-of-plane anisotropy easy axis, and ∈ is the out-of-plane component of , i.e., the component parallel to the easy axis. The energy is measured in the units of Ad, where A is the exchange stiffness and d the film thickness. Lengths are measured in the units of the exchange length ℓ_ex = √(2A /(μ_0 M_s^2)), where M_s is the saturation magnetization and μ_0 the vacuum magnetic permeability. The dimensionless film thickness is δ = d / ℓ_ex≲ 1. We assume that the external magnetic field is applied normally to the film plane. We introduced the dimensionless quality factor Q = K_u / K_d > 1, where K_u is the magnetocrystalline anisotropy constant and K_d= 1/2μ_0 M_s^2, the dimensionless DMI strength κ = D / √(AK_d), and the dimensionless applied magnetic field h defined as h= (𝐇·𝐳̂) / M_s. The first four energy terms are local and represent, respectively, the exchange energy, the effective anisotropy energy (magnetocrystalline energy renormalized to take into account the local stray field contribution), the Zeeman energy and the DMI energy. The last two terms correspond to the long-range part of the dipolar energy, which splits into surface and volume contributions (see <cit.> for details). We are considering an isolated skyrmion with a magnetization pointing up in its center, in a background where the magnetization is pointing down. As a consequence, we assume that the applied magnetic field in the positive field direction is lower than the anisotropy field, h < Q - 1, to ensure that the uniform state _0 = -𝐳̂ is a local minimizer of the energy. Under the condition that (𝐫) → - 𝐳̂ sufficiently fast as |𝐫| →∞, we consider the skyrmion number q() = 1 / 4 π∫_^2·( ∂/∂ x×∂/∂ y) dx dy <cit.>, so that with our convention we have q() = 1 for a skyrmion profile. *Skyrmion profiles. In the regime, in which the other energy terms remain perturbations to the dominating exchange energy, the skyrmion profile can be shown to be close to a Belavin-Polyakov profile <cit.>, i.e., a minimizer of the exchange energy among all with q() = 1 <cit.>. Therefore, we can proceed as in <cit.> with an asymptotic analysis based on a suitably truncated Belavin-Polyakov type profile _ρ, θ, L (for the precise form, see <cit.>), with the necessary modifications to account for the presence of the applied field. The equilibrium radius ρ, the rotation angle θ and the cutoff scale L are obtained <cit.> from the minimization of the leading order energy E(_ρ, θ, L) ≃ E_ρ,θ,L, where (see Fig. <ref> for an illustration): E_ρ,θ,L = 8π + 4π/L^2 + 4π (Q-1 - h)ρ^2 log( 4 L^2/e^2(1+γ)) - 4 π h ρ^2 - 8πκρcosθ + π^3 ρδ/8( 3cos^2 θ - 1), where γ≈ 0.5772 is the Euler-Mascheroni constant. Under the assumption ≪ 1, where = 1 /√(1 - h̅)×(8π |κ̅| - π^3/4δ̅) if |κ̅|≥3π^2/32δ̅, (128κ̅^2/3πδ̅ + π^3/8δ̅) else, with δ̅= δ√(Q - 1), κ̅= κ√(Q - 1), h̅ = h Q - 1, the minimizer of the reduced energy in (<ref>) gives the leading order skyrmion equilibrium angle θ_0 = 0 if κ̅≥3π^2/32δ̅, -π if κ̅≤ -3π^2/32δ̅, ±arccos(32κ̅/3π^2δ̅) else, and radius ρ_sk = ρ̅_sk / √(Q - 1), in units of ℓ_ex, where ρ̅_sk = 1/16π√(1 -h̅)×/(-W_-1(-β) ) , provided that β< e^-1 (see Sec. 2.3 in <cit.>). Here W_i refers to the i-th real-valued branch of the Lambert W function <cit.>, and β = e^1 + γ 32 πexp( h̅ 2 (1 - h̅) ). The energy of the optimal skyrmion solution is E_sk = 8π - ^2 /32π W_-1^2(-β )(-W_-1(-β ) - 1/2). For 0 < 1 - h̅≪ 1 with β< e^-1 fixed, there is also a minimum energy saddle point solution with equilibrium angle θ_0 from (<ref>) and radius ρ_sad = ρ̅_sad / √(Q - 1), where ρ_sad = 1/16π√(1 - h̅)×/(-W_0(-β) ) . The energy of the saddle is given by E_sad = 8π - ^2/32π W_0^2(-β )(-W_0(-β ) - 1/2). The above formulas become asymptotically exact when → 0 and h̅→ 1^- with β fixed, which corresponds to δ̅+ |κ̅| √(1 - h̅)exp( 1 2 (1 - h̅)) = O(1) as δ̅, κ̅→ 0. Asymptotically, the skyrmion and the saddle point solutions disappear via a saddle-node bifurcation at a critical value of |κ̅| = κ̅_c given by κ̅_c = π^2 32δ̅+ 4 √(1 - h̅) e^2 + γ e^-h̅ 2 ( 1 - h̅) ifδ̅≤δ̅_c, √(3 π^2 δ̅ 4( √(1 - h̅) e^2 + γ e^-h̅ 2 ( 1 - h̅) - π^2 δ̅ 256)) else, where δ̅_c = 64 √(1 - h̅)π^2 e^2 + γ e^-h̅ 2 ( 1 - h̅), as the value of |κ̅| is increased. *Skyrmion phase diagram. In Fig. <ref>, we present the asymptotic skyrmion phase diagram by plotting the scaled skyrmion radius ρ̅_sk from (<ref>) as a function of the dimensionless DMI strength κ̅ and applied field h̅, where the positive field direction is parallel to the skyrmion core [see the insets in Fig. <ref>(b)]. Skyrmion solutions are predicted to exist when 0 < |κ̅| + δ̅≲ 1 and h̅≤h̅_c, where h̅_c is obtained by solving for h in (<ref>), while no skyrmion solutions are expected for h̅ > h̅_c (see also <cit.>). According to Fig. <ref>, in the antiparallel configuration, h̅ < 0, the skyrmion radius is strongly increasing with |κ̅| and increasing slower with a decrease in |h̅|. In the parallel configuration, h̅ > 0, in the intermediate thickness (δ̅= 0.7) regime shown in Fig. <ref>(a), the skyrmion radius dependence is, to the contrary, dominated by its magnetic field dependence and the dependence on |κ̅| is strongly reduced for small |κ̅|. In particular, the lines of equal radius present a concave character and become parallel to the y axis as |κ̅| → 0. This diminished dependence of ρ̅_sk on |κ̅| comes from the fact that the dominant stabilization mechanism in the low |κ̅| regime is the long-range dipolar interaction. The skyrmion behavior in this regime presents remarkable differences from what is predicted by the classical skyrmion theory, which only takes into account the local dipolar interaction <cit.>. For comparison, we set δ̅=0 in our model in Fig. <ref>(b), which is equivalent to neglecting the long-range dipolar interactions. In this case, the lines of constant radius do not present this strong concave character and the main source of skyrmion stabilization in the whole field range is solely the increase of |κ̅|. *Skyrmion stability. A second important outcome of our analysis is the prediction of the parametric dependence of the bursting energy barrier. Previous studies on skyrmion stability have focused on the collapse phenomenon and the field applied antiparallel to the skyrmion core <cit.>. However, for a field applied parallel to the skyrmion core, bursting is the main source of instability. These two energy barriers are shown in Fig. <ref>, where we present the skyrmion energy as a function of the scaled radius ρ̅= ρ / √(Q-1) for a given set of parameters (see (2.27) in <cit.>). The local minimum of energy at ρ̅_sk corresponds to the equilibrium skyrmion solution. For ρ̅< ρ̅_sk the energy is increasing with decreasing ρ̅ and reaches the value 8π at ρ = 0, as all the energies tend to zero except the exchange energy, which tends to its minimum value of 8 π among configurations with q = 1, as first demonstrated in <cit.> and discussed previously <cit.>. The energy difference between the equilibrium skyrmion and the “zero radius skyrmion” is thus Δ E_coll = 8 π - E_sk, which serves as the energy barrier to skyrmion collapse <cit.>. As the radius becomes larger than ρ_sk, the energy is increasing and a saddle point of the micromagnetic energy is observed at ρ_sad. This saddle prevents the skyrmion solution from bursting as the skyrmion energy would otherwise go to -∞ as the skyrmion radius tends to infinity. The bursting barrier is defined as Δ E_burst = E_sad - E_sk. The collapse and bursting energy barriers are plotted as functions of |κ̅| and h̅ in Figs. <ref>(a) and (b) for the magnetic field applied parallel to the skyrmion core. The collapse and bursting energy barriers present opposite variations as the effective DMI or applied magnetic field are increased: an increase of |κ̅| or h̅ increases the collapse barrier, but decreases the bursting barrier. As a consequence, in the positive field regime, our theory predicts that the optimum stability region for skyrmions is where both the collapse and the bursting barriers are large. This optimum stability region is illustrated in Figs. <ref>(c) and (d), where we plot the effective energy barrier Δ E = min (Δ E_coll, Δ E_burst). In Fig. <ref>(c), for an intermediate thickness (δ̅=0.7), the optimal stability is obtained at large |κ̅|, for zero applied field. As |κ̅| is decreased, the region of optimum stability, which appears in reddish colors, is shifted to positive fields, showing that the applied positive field can partly compensate the decrease of DMI to stabilize skyrmions in the low |κ̅| regime. The stabilization of the skyrmion in this regime is due to a combined effect of the applied positive field and the long-range dipolar interaction. In Fig. <ref>(d), we present the ultrathin film limit (δ̅=0). In that case, the positive field cannot compensate the decrease of |κ̅| and hardly any increase of skyrmion stability with applied positive field is observed in the low |κ̅| regime. Comparison between these two cases in Figs. <ref>(c) and (d) reveals that the long-range dipolar interaction provides a stabilization mechanism in low DMI systems leading to a new regime of parallel field stabilization of skyrmions. *Micromagnetic simulations. To confirm the existence of field-stabilized compact skyrmions in the positive field regime, we carried out micromagnetic simulations, using MuMax3 <cit.> (see <cit.> for details on the procedure). The chosen dimensional parameters correspond to a ferrimagnetic materials with a low M_s and K_u (see Fig. <ref> caption). In Fig. <ref>, we present the skyrmion radius (a) and rotation angle (b) predicted by our model [see (<ref>) and (<ref>)] for this set of parameters. The corresponding simulation results are presented, respectively, in Figs. <ref>(c) and (d), where the dashed line is the line of zero bursting barrier defined in (<ref>). The simulations and the theory show a good agreement: they both predict, at fixed DMI, an increase of the skyrmion radius from around 20 nm up to bursting as the positive field increases [Figs. <ref>(a) and (c)], and at fixed magnetic field, a reorientation of the skyrmion angle from around 50^∘ to 90^∘ as κ̅ is decreased to zero [Figs. <ref>(b) and (d)]. This confirms the existence of skyrmions stabilized by a parallel field in the low |κ̅| regime. In the top right part of Figs. <ref>(c) and (d) the white zone corresponds to a region where the skyrmion has bursted and the system is homogeneously magnetized in the direction of the positive applied field. The numerical solution persists beyond the dashed line representing the region of existence of our solution. This is expected due to the asymptotic nature of the theory. The numerical solution persists in the form of a less compact profile before bursting occurs. This further extends the predicted range in which skyrmions can be stabilized with the help of a positive field. *Conclusions and perspective. We have derived the magnetic field dependence of the compact skyrmion size and rotation angle, along with its thermal stability, in the presence of DMI and full dipolar interaction. In particular, we obtained a condition for skyrmion bursting at positive magnetic field, which agrees quantitatively with micromagnetic simulations in the regime of compact skyrmions and which may be an invaluable tool for predicting this type of instability. We also established that a balance needs to be found between collapse and bursting energy barriers to optimize the skyrmion stability. Our analysis reveals that due to the presence of long-range dipolar interaction an increase of the magnetic field applied parallel to the skyrmion core increases the size and stability of a skyrmion similarly to the effect of increasing the DMI strength. A reason why a positive applied field has never been used to stabilize skyrmions so far is that the samples in which skyrmions are observed very often exhibit spontaneous demagnetization in the form of a helicoidal state or a stripe state at zero applied field. This phenomenon occurs in the presence of a large DMI or/and due to long-range demagnetizing effects. In the quest for room temperature nanometer size skyrmions, recent works have focused on ferrimagnetic systems, following some predictions of more stable skyrmions in such system with a low M_s and thicknesses of the order of 5-10 nm <cit.>. In these systems, room temperature stable skyrmions are observed experimentally at zero applied magnetic field <cit.> and they constitute the ideal candidates to observe field-stabilized skyrmions in the new regime predicted by our theory. The work of C. B. M. was supported, in part, by NSF via grant DMS-1908709. T. M. S. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044–390685587, Mathematics Münster: Dynamics–Geometry–Structure. 43 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURL [Mühlbauer et al.(2009)Mühlbauer, Binz, Jonietz, Pfleiderer, Rosch, Neubauer, Georgii, and Böni]muhlbauer09 authorS. Mühlbauer, authorB. Binz, authorF. Jonietz, authorC. Pfleiderer, authorA. Rosch, authorA. Neubauer, authorR. Georgii, and authorP. Böni, journalScience volume323, pages915 (year2009). [Everschor-Sitte et al.(2018)Everschor-Sitte, Masell, Reeve, and Kläui]everschor-sitte18 authorK. Everschor-Sitte, authorJ. Masell, authorR. M. Reeve, and authorM. Kläui, journalJ. Appl. Phys. volume124, pages240901 (year2018). [Fert et al.(2017)Fert, Reyren, and Cros]fert17 authorA. Fert, authorN. Reyren, and authorV. Cros, journalNat. Rev. Mater. volume2, pages17031 (year2017). [Zhang et al.(2020)Zhang, Zhou, Song, Park, Xia, Ezawa, Liu, Zhao, Zhao, and Woo]zhang20 authorX. Zhang, authorY. Zhou, authorK. M. Song, authorT.-E. Park, authorJ. Xia, authorM. Ezawa, authorX. Liu, authorW. Zhao, authorG. Zhao, and authorS. Woo, journalJ. Phys. – Condensed Matter volume32, pages143001 (year2020). [Tokura and Kanazawa(2021)]tokura21 authorY. Tokura and authorN. Kanazawa, journalChem. Rev. volume121, pages2857 (year2021). [Bogdanov and Yablonskii(1989)]bogdanov89 authorA. N. Bogdanov and authorD. A. Yablonskii, journalSov. Phys. – JETP volume68, pages101 (year1989). [Bogdanov et al.(1989)Bogdanov, Kudinov, and Yablonskii]bogdanov89a authorA. N. Bogdanov, authorM. V. Kudinov, and authorD. A. Yablonskii, journalSov. Phys. – Solid State volume31, pages1707 (year1989). [Bogdanov and Hubert(1994a)]bogdanov94 authorA. Bogdanov and authorA. Hubert, journalJ. Magn. Magn. Mater. volume138, pages255 (year1994a). [Bogdanov and Hubert(1994b)]bogdanov94a authorA. Bogdanov and authorA. Hubert, journalPhysica Status Solidi (B) volume186, pages527 (year1994b). [Bogdanov and Hubert(1999)]bogdanov99 authorA. Bogdanov and authorA. Hubert, journalJ. Magn. Magn. Mater. volume195, pages182 (year1999). [Melcher(2014)]melcher14 authorC. Melcher, journalProc. R. Soc. Lond. Ser. A volume470, pages20140394 (year2014). [Dupé et al.(2014)Dupé, Hoffmann, Paillard, and Heinze]dupe14 authorB. Dupé, authorM. Hoffmann, authorC. Paillard, and authorS. Heinze, journalNature Commun. volume5, pages4030 (year2014). [Siemens et al.(2016)Siemens, Zhang, Hagemeister, Vedmedenko, and Wiesendanger]siemens16 authorA. Siemens, authorY. Zhang, authorJ. Hagemeister, authorE. Y. Vedmedenko, and authorR. Wiesendanger, journalNew J. Phys. volume18, pages045021 (year2016). [Büttner et al.(2018)Büttner, Lemesh, and Beach]buttner18 authorF. Büttner, authorI. Lemesh, and authorG. S. D. Beach, journalSci. Rep. volume8, pages4464 (year2018). [Wang et al.(2018)Wang, Yuan, and Wang]wang18 authorX. S. Wang, authorH. Y. Yuan, and authorX. R. Wang, journalCommun. Phys. volume1, pages31 (year2018). [Desplat et al.(2020)Desplat, Vogler, Kim, Stamps, and Suess]desplat20 authorL. Desplat, authorC. Vogler, authorJ. V. Kim, authorR. L. Stamps, and authorD. Suess, journalPhys. Rev. B volume101, pages060403(R) (year2020). [Romming et al.(2013)Romming, Hanneken, Menzel, Bickel, Wolter, von Bergmann, Kubetzka, and Wiesendanger]romming13 authorN. Romming, authorC. Hanneken, authorM. Menzel, authorJ. E. Bickel, authorB. Wolter, authorK. von Bergmann, authorA. Kubetzka, and authorR. Wiesendanger, journalScience volume341, pages636 (year2013). [Nagaosa and Tokura(2013)]nagaosa13 authorN. Nagaosa and authorY. Tokura, journalNature Nanotechnol. volume8, pages899 (year2013). [Romming et al.(2015)Romming, Kubetzka, Hanneken, von Bergmann, and Wiesendanger]romming15 authorN. Romming, authorA. Kubetzka, authorC. Hanneken, authorK. von Bergmann, and authorR. Wiesendanger, journalPhys. Rev. Lett. volume114, pages177203 (year2015). [Mougel et al.(2020)Mougel, Buhl, Nemoto, Balashov, Hervé, Skolaut, Yamada, Dupé, and Wulfhekel]mougel20 authorL. Mougel, authorP. M. Buhl, authorR. Nemoto, authorT. Balashov, authorM. Hervé, authorJ. Skolaut, authorT. K. Yamada, authorB. Dupé, and authorW. Wulfhekel, journalAppl. Phys. Lett. volume116, pages262406 (year2020). [Hervé et al.(2018)Hervé, Dupé, Lopes, Böttcher, Martins, Balashov, Gerhard, Sinova, and Wulfhekel]herve18 authorM. Hervé, authorB. Dupé, authorR. Lopes, authorM. Böttcher, authorM. D. Martins, authorT. Balashov, authorL. Gerhard, authorJ. Sinova, and authorW. Wulfhekel, journalNature Commun. volume9, pages1015 (year2018). [Kiselev et al.(2011)Kiselev, Bogdanov, Schäfer, and Rößler]kiselev11 authorN. S. Kiselev, authorA. N. Bogdanov, authorR. Schäfer, and authorU. K. Rößler, journalJ. Phys. D: Appl. Phys. volume44, pages392001 (year2011). [Bernand-Mantel et al.(2018)Bernand-Mantel, Camosi, Wartelle, Rougemaille, Darques, and Ranno]bernand-mantel18 authorA. Bernand-Mantel, authorL. Camosi, authorA. Wartelle, authorN. Rougemaille, authorM. Darques, and authorL. Ranno, journalSciPost. Phys. volume4, pages027 (year2018). [Tejo et al.(2018)Tejo, Riveros, Escrig, Guslienko, and Chubykalo-Fesenko]tejo18 authorF. Tejo, authorA. Riveros, authorJ. Escrig, authorK. Y. Guslienko, and authorO. Chubykalo-Fesenko, journalSci. Rep. volume8, pages6280 (year2018). [Winkler et al.(2021)Winkler, Litzius, de Lucia, Weißenhofer, Fangohr, and Kläui]winkler21 authorT. B. Winkler, authorK. Litzius, authorA. de Lucia, authorM. Weißenhofer, authorH. Fangohr, and authorM. Kläui, journalPhys. Rev. Appl. volume16, pages044014 (year2021). [Cortés-Ortuño et al.(2019)Cortés-Ortuño, Romming, Beg, von Bergmann, Kubetzka, Hovorka, Fangohr, and Wiesendanger]cortes-ortuno19 authorD. Cortés-Ortuño, authorN. Romming, authorM. Beg, authorK. von Bergmann, authorA. Kubetzka, authorO. Hovorka, authorH. Fangohr, and authorR. Wiesendanger, journalPhys. Rev. B volume99, pages214408 (year2019). [Bernand-Mantel et al.(2020)Bernand-Mantel, Muratov, and Simon]bernand-mantel20 authorA. Bernand-Mantel, authorC. B. Muratov, and authorT. M. Simon, journalPhys. Rev. B. volume101, pages045416 (year2020). [Lobanov et al.(2016)Lobanov, Jonsson, and Uzdin]lobanov16 authorI. S. Lobanov, authorH. Jonsson, and authorV. M. Uzdin, journalPhys. Rev. B volume94, pages174418 (year2016). [Muratov and Slastikov(2016)]ms:prsla16 authorC. B. Muratov and authorV. V. Slastikov, journalProc. R. Soc. Lond. Ser. A volume473, pages20160666 (year2016). [Knüpfer et al.(2019)Knüpfer, Muratov, and Nolte]kmn:arma19 authorH. Knüpfer, authorC. B. Muratov, and authorF. Nolte, journalArch. Rat. Mech. Anal. volume232, pages727 (year2019). [Muratov(2019)]m:cvar19 authorC. B. Muratov, journalCalc. Var. Partial Differential Equations volume58, pages52 (year2019). [sup()]suppl noteSee Supplemental Material at [URL to be inserted by publisher]. [Braun(2012)]braun12 authorH.-B. Braun, journalAdv. Phys. volume61, pages1 (year2012). [Bernand-Mantel et al.(2021)Bernand-Mantel, Muratov, and Simon]bms:arma21 authorA. Bernand-Mantel, authorC. B. Muratov, and authorT. M. Simon, journalArch. Rat. Mech. Anal. volume239, pages219 (year2021). [Belavin and Polyakov(1975)]belavin75 authorA. A. Belavin and authorA. M. Polyakov, journalJETP Lett. volume22, pages245 (year1975). [Corless et al.(1996)Corless, Gonnet, Hare, Jeffrey, and Knuth]corless96 authorR. M. Corless, authorG. H. Gonnet, authorD. E. G. Hare, authorD. J. Jeffrey, and authorD. E. Knuth, journalAdv. Comput. Math. volume5, pages329 (year1996). [Bessarab et al.(2015)Bessarab, Uzdin, and Jonsson]bessarab15 authorP. F. Bessarab, authorV. M. Uzdin, and authorH. Jonsson, journalComput. Phys. Commun. volume196, pages335 (year2015). [Cortés-Ortuño et al.(2017)Cortés-Ortuño, Wang, Beg, Pepper, Bisotti, Carey, Vousden, Kluyver, Hovorka, and Fangohr]cortes-ortuno17 authorD. Cortés-Ortuño, authorW. Wang, authorM. Beg, authorR. A. Pepper, authorM.-A. Bisotti, authorR. Carey, authorM. Vousden, authorT. Kluyver, authorO. Hovorka, and authorH. Fangohr, journalSci. Rep. volume7, pages4060 (year2017). [Hoffmann et al.(2020)Hoffmann, Müller, and Blügel]hoffmann20 authorM. Hoffmann, authorG. P. Müller, and authorS. Blügel, journalPhys. Rev. Lett. volume124, pages247201 (year2020). [Bernand-Mantel et al.(2022)Bernand-Mantel, Muratov, and Slastikov]bernand-mantel22 authorA. Bernand-Mantel, authorC. B. Muratov, and authorV. Slastikov, Valery, journalProc. Natl. Acad. Sci. USA volume119, pagese2122237119 (year2022). [Vansteenkiste et al.(2014)Vansteenkiste, Leliaert, Dvornik, Helsen, Garcia-Sanchez, and Van Waeyenberge]vansteenkiste14 authorA. Vansteenkiste, authorJ. Leliaert, authorM. Dvornik, authorM. Helsen, authorF. Garcia-Sanchez, and authorB. Van Waeyenberge, journalAIP Adv. volume4, pages107133 (year2014). [Caretta et al.(2018)Caretta, Mann, Büttner, Ueda, Pfau, Günther, Hessing, Churikova, Klose, Schneider et al.]caretta18 authorL. Caretta, authorL. Mann, authorF. Büttner, authorK. Ueda, authorB. Pfau, authorC. M. Günther, authorP. Hessing, authorA. Churikova, authorC. Klose, authorM. Schneider, et al., journalNature Nanotechnol. volume13, pages1154 (year2018). [Quessab et al.(2022)Quessab, Xu, Cogulu, Finizio, Raabe, and Kent]quessab22 authorY. Quessab, authorJ.-W. Xu, authorE. Cogulu, authorS. Finizio, authorJ. Raabe, and authorA. D. Kent, journalNano Letters volume22, pages6091 (year2022).
http://arxiv.org/abs/2306.02705v1
20230605085655
Situational Adaptive Motion Prediction for Firefighting Squads in Indoor Search and Rescue
[ "Nils Mandischer", "Frederik Schicks", "Burkhard Corves" ]
cs.RO
[ "cs.RO" ]
Recent J/ψ results measured with PHENIX T. Novák (for the PHENIX Collaboration) ^1Wigner Research Center for Physics, Budapest July 31, 2023 ================================================================ empty empty Firefighting is a complex, yet low automated task. To mitigate ergonomic and safety related risks on the human operators, robots could be deployed in a collaborative approach. To allow human-robot teams in firefighting, important basics are missing. Amongst other aspects, the robot must predict the human motion as occlusion is ever-present. In this work, we propose a novel motion prediction pipeline for firefighters' squads in indoor search and rescue. The squad paths are generated with an optimal graph-based planning approach representing firefighters' tactics. Paths are generated per room which allows to dynamically adapt the path locally without global re-planning. The motion of singular agents is simulated using a modification of the headed social force model. We evaluate the pipeline for feasibility with a novel data set generated from real footage and show the computational efficiency. § INTRODUCTION Indoor search and rescue (SAR) is exceptionally stressful to the firefighting operators. We aim to deploy humans and mobile robots as a collaborative squad to reduce health risks of diverse nature. A collaborative squad consists of multiple firefighting operators (the squad) and a collaborative rescue robot. Similar to the human, the robot is challenged by obscured vision on the environment and that conditions may change drastically throughout each mission. Therefore, the robot must deploy multiple methodologies to perceive and predict the location of its assigned human squad. This is analogous to approaches established in service robotics, where the robot perceives the human locations through sensors but is also able to predict their future whereabouts through motion modeling. For indoor SAR missions, no motion and behavioral models are known. In this work, we propose an approach to model human behavior throughout such missions by combining a graph-based tactics-informed optimal planning approach and a modification of the headed social force model (HSFM) <cit.>. Optimal paths are planned according to common tactics considering restricted and unrestricted vision per room and per squad. The planning is based on a priori knowledge of the environment map and status, which are usually available at the start of a mission. Through the segmentation into room entities, paths are adapted to the current scene when predicted conditions change. Individual agents (i.e., the simulated firefighting operators), then, use the optimal paths as waypoints in their individual motion prediction. By this, inter- and intra-squad behavior is modeled. The models are parameterized according to real data and evaluated for usability in real-time systems. Our main contributions in modeling indoor SAR are: * New model and novel pipeline for motion prediction * Composed novel data set to parameterize motion models § RELATED WORK Modeling of human behavior is a common task outside of SAR. Of particular interest for this work are physics based models with dynamic environment and group cues. One of the earliest adoptions is the social force model (SFM) by Helbing et al. <cit.>, which was later adapted by many works: Moussaïd et al. <cit.> add group cues to the SFM. Pellegrini et al. <cit.> introduce collision prediction. Yamaguchi et al. <cit.> decide on the (hidden) group status by observing motion. Rudenko et al. <cit.> combine a velocity model based on a Markov decision process with SFM-based local interactions and a stochastic random walk policy. Farina et al. <cit.> extend the SFM to the HSFM by applying a locomotion model which prefers forward motion in gazing direction of the agent. The locomotion model is defined by 𝐑(θ_i) = [ 𝐞_x 𝐞_y ] = [ cosθ_i -sinθ_i; sinθ_i cosθ_i ], 𝐩̇_i = 𝐑(θ_i)𝐯_i, 𝐯̇_i = 1/m_i𝐮_i^B, ω̇_i = 1/I_iu_i^θ, equation1,equation1,equation1 where an agent i is characterized by their pose , velocity , mass m_i, and inertia I_i, accordingly. The gazing direction θ_i is spanned between the global frame and a frame co-moving with the agent, hence, the distinction of velocities into 𝐩̇_i (global) and 𝐯_i=[ v_i,x v_i,y ]^T (local). The lateral input forces 𝐮_i^B=[ u_i^f u_i^o ]^T and torque input u_i^θ are deconstructions of the social forces, defined by u_i^f = 𝐟_i𝐞_x(θ_i), u_i^o = C^o(𝐟_i-𝐟_i^acc)𝐞_y(θ_i)-C^desv_i^des,o, u_i^θ = -C^θ(θ_i-ϕ_i^acc)-C^ωω_i, with the social forces resulting from external factors, i.e., agent-agent 𝐟_i^j and agent-border 𝐟_i^B,k interaction, and the agent's motivation to reach the goal 𝐟_i^acc. The total social force 𝐟_i acting on the individual agent i is the sum of all partial forces, defined by 𝐟_i = 𝐟_i^acc + ∑_j i^agents𝐟_i^j + ∑_k^borders𝐟_i^B,k. In this work, ϕ_i^acc is the phase of 𝐟_i^acc={d_i^acc,ϕ_i^acc}. C^o=1, C^des=500 are scaling parameters and v_i^des,o is the desired velocity in orthogonal direction. The torque parameters C^θ and C^ω are configured according to <cit.>. § TACTICS AND RESTRICTIONS OF THE MODEL Fire brigades worldwide apply different tactics, however, the overall approach is similar. A mission is usually split into multiple stages, e.g., the German THW employs five <cit.> and the Australian NDO six stages <cit.>. This work relates to firefighting in Germany, particularly, to the attack stage, which involves exploring indoor disaster environments, commonly, with respiratory protection <cit.>. §.§ Motion Tactics in Indoor Attack During indoor attack, motion tactics are applied per room based on its size and the visual conditions. In Germany, four different tactics are applied. In non-obscured vision, the squad explores the room until the entire space has been visually inspected without further instructions on exact motion (referred to as free traversal). In case of vision restriction, wall search (from German: “Wandtechnik”, Figure <ref>) is applied. Within, one firefighter constantly keeps their hand to the right- or left-hand wall, hence, the distinction in right-hand rule (RHR) and left-hand rule (LHR), respectively. They stretch out their arms and utilize squad mates and equipment to extend their reach. This technique is used in small to medium sized rooms, which are common in civil buildings, e.g., dorms or office buildings. In case of larger rooms, diving search (“Tauchertechnik”, Figure <ref>) and tree search (“Baumtechnik”, Figure <ref>) are applied. To establish a first baseline for firefighters' motion prediction, we focus on smaller rooms, hence, only free traversal and wall search (RHR and LRH) are modeled in this work. §.§ Motion Data For motion modeling, not only pure knowledge about the tactics, but also data is required to parameterize the models. However, real-world data is hard to come by as firefighters usually do not carry recording devices. Therefore, we base our data set on a thermal recording of the Institut der Feuerwehr NRW (IdF) generated in the project KOORDINATOR <cit.>, which depicts approaches under vision restriction with wall search, and two TV shows “Feuer & Flamme” <cit.> and “112: Feuerwehr im Einsatz” <cit.>. The TV shows display real footage of indoor SAR with and without vision restriction. Of particular interest are the velocities and distances the squad applies to borders and between squad members. The parameters we tune our model to are listed in Table <ref>. All parameters have high variance due to the low sample size. Particularly, velocity data is not trustworthy. However, as there is virtually no data for indoor SAR, these statistics implement vital reference values. For motion modeling, not only pure knowledge about the tactics, but also data is required to parameterize the models. However, real-world data is hard to come by as firefighters usually do not carry recording devices. Therefore, we base our data set on a thermal recording of the Institut der Feuerwehr NRW (IdF) generated in the project KOORDINATOR <cit.>, which depicts approaches under vision restriction with wall search, and two TV shows <cit.>. The TV shows display real footage of indoor SAR with and without vision restriction. Of particular interest are the velocities and the distances the squad applies to borders and between squad members. The parameters we tune our model to are listed in Table <ref>. All parameters have high variance due to the low sample size. Particularly, velocity data is not trustworthy. However, as there is virtually no data for indoor SAR, these statistics implement vital reference values. § GRAPH-BASED TACTICS-INFORMED OPTIMAL PLANNING The prediction framework is split into two stages. First, waypoints are generated by optimal path planning per room and squad. These are, then, used in the HSFM to generate agent-individual trajectories. The framework can generate motion trajectories of single agents as well as of individual agents in a single or in multiple squads. As input, a building plan is required, which is usually available for newer buildings and may be interpreted by other means (compare <cit.> for an overview). The map is segmented into individual rooms and their connecting doorways. These are, then, stored in a room graph in which nodes are rooms and vertices are doorways between them. For each room, a graph is generated which expands the room's node as a sub-graph embedded in the room graph. By this, global room sequences and local paths in each room may be planned isolated or together in a global optimization approach. §.§ Graph Construction For graph construction, we use three different types of graphs: Medial axis, visibility road map and pseudo-random sampling. The medial axis (Figure <ref>) is constructed from a Voronoi graph using the method proposed by Masehian and Amin-Naseri <cit.>. The seeds of the Voronoi regions are located in the occupied regions of the grid map. These are determined using distance transforms <cit.>. The medial axis is a graph that is constructed only from Voronoi border regions, i.e., where multiple Voronoi regions meet. Hence, it generates a graph with maximal distance to boundaries. The visibility road map <cit.> (Figure <ref>) generates a graph which guarantees that all space is visible. During construction, nodes are sampled pseudo-randomly using the Halton sequence <cit.>. Each sampled node is checked for visibility and added if it (a) is not visible from any other node (called guard) or (b) connects at least two guard nodes that were not connected beforehand (called connector). By checking the visibility road map it is determined whether space has been observed, which is particularly important when modeling free traversal behavior in SAR. Firefighters explore rooms until the entire space has been inspected to locate casualties. The remaining space is filled with nodes pseudo-randomly sampled with the Hammersley sequence <cit.>. Nodes are connected based on proximity, i.e., all neighbors in a specified radius are connected by vertices (Figure <ref>). The vertices are generated bidirectional, whereas each directed vertex carries information on the Euclidean length and in which motion tactics it is permissible for path planning. Free traversal allows all vertices. However, wall search requires locations close to walls, hence, vertices to far regions are blocked. Further, depending on the direction of motion, only clockwise (LHR) or counter-clockwise (RHR) vertices are allowed. On the combined graph, the shortest path within the given vertex restrictions is chosen. We observe good path quality with the A* algorithm in all three types of motion. §.§ Special Case: Single Entry Rooms In rooms with a single entry, the agent would be allowed to directly move back out, given the prior explanation. This is prevented by following considerations: Free traversal additionally requires all nodes in the visibility road map to be visited. To guarantee this, we build a tree from the entry node and follow the closest vertices in a greedy manner until a leaf is reached (Figure <ref>). Then new branches are generated from previously visited nodes until a new leaf is reached. In wall search, the first node in the path is moved by one margin clockwise (LHR) or counter-clockwise (RHR). As backtracking vertices are blocked, the agent is forced to move along the wall through the room instead of moving out of the doorway. § MODIFIED HEADED SOCIAL FORCE MODEL After paths have been generated per squad, they are propagated to each agent in their respective squad. Motion trajectories are simulated using a modification of the HSFM. §.§ Waypoint Management An agent derives the full list of waypoints from its squad's path. Agents delete their waypoints individually if latter have been visited. A waypoint is marked visited if it is located (a) within a cone in the agent's gazing direction or (b) within a slim circle centered at the agent's position. The cone has a range of 50m in free and 2m in restricted vision, and an opening angle of 180°. The circle is defined with a range of 0.2m. Latter's main purpose is to delete waypoints if an agent is spawned exactly on top, in which case the cone alone is not sufficient. Waypoints are marked essential if they are located at key locations, e.g., doorways, or start/end points. An essential waypoint is only deleted using criterion (b), but with a relaxed threshold. Thus, agents maintain a goal over long distances which prevents them from sliding along walls. §.§ Modified Contact Model In the HSMF <cit.>, agents typically stay far from walls, e.g., in corridors they tend to walk in the middle. In firefighting, however, typical behavior (particularly in wall search) leads to agents staying much closer to walls. In addition, to safe computational resources our aim is to renounce the usage of high level extraction of semantic building entities besides rooms, e.g., the agglomeration of occupied pixels into walls. These two aspects lead to the challenge, that agents may get pushed through walls and once they are, there is no way back, as it is not possible to determine from which side an occupied area was penetrated. Note that no past data is stored. To mitigate this challenge, we define a new contact model. Occupied pixels are only considered for border forces, if they fall into a distance threshold and are closest in a quadrant relative to the agent's gazing direction (Figure <ref>). The quadrants are defined such that the first quadrant (I) is centered about the agent's gazing direction. In all four quadrants, the partial border forces 𝐟_i^B,k are computed and agglomerated into the total border force 𝐟_i^B (Figure <ref>). The partial border forces are computed as an exponential force superimposed by a linear force. We observe, that the pure exponential force used by Farina et al. <cit.> acts too slow to prevent agents from penetrating walls if configured according to Table <ref>. The linear force rises faster while the final force is typically lower. Hence, the agent is decelerated but may still approach the wall. In very close regions, the exponential force forces the agent away from the borders. As agents approach borders with lower velocity, they cannot skip the border within an update cycle, hence, penetration is prevented. We define the partial border force as 𝐟_i^B,k = ϕ^B(r_i-d_i^B,k)𝐧_i^B,k +0 , d_i^B,k>r_i C^sϕ^s(d_i^B,k)𝐧_i^B,k +(1-C^s)ϕ^s(d_i^B,k)v_i,y𝐭_i^B,k , otherwise. Hereby, r_i is the radius of the agent and d_i^B,k=||𝐩_i - 𝐩_k - √(2)/2w𝐧_i^B,k||_2 is the distance from the agent to the center of the closest occupied pixel 𝐩_k considering half the pixel width w, simplified as circle with conservative radius 0.5√(2)w. The contact direction is split into the normalized normal and normalized tangential direction vectors 𝐧_i^B,k = [ n_i,x^B,k; n_i,y^B,k ] = 𝐩_i-𝐩_k/d_i^B,k, 𝐭_i^B,k = [ - n_i,y^B,k; n_i,x^B,k ], equation1,equation1 𝐧_i^B,k = [ n_i,x^B,k n_i,y^B,k ]^T = 𝐩_i-𝐩_k/d_i^B,k, 𝐭_i^B,k = [ - n_i,y^B,k n_i,x^B,k ]^T, respectively. Note that the tangential part in Equation <ref> scales with the agent's orthogonal velocity v_i,y. The border potential is defined as ϕ^B(r_i-d_i^B,k) = ϕ_0^B·exp(r_i-d_i^B,k/C^B). The soft potential is defined as ϕ^s(d_i^B,k) = 0 , d_i^B,k> r_i r_i-d_i^B,k/r_i-d_min^Bϕ_0^s , d_i^B,k≤ d_min^B ϕ_0^s , otherwise, where d_min^B<r_i and ϕ_0^s≫0. The parameter configuration for the contact model is listed in Table <ref>. §.§ Further Extensions of Standard HSFM In addition to the already discussed alterations, we use certain features of state-of-the-art models. Firstly, we use the group cohesion proposed by Farina et al. <cit.> to better depict squad behavior. We use different configurations of the potential in agent-agent interaction, which are interchanged if the agents are part of the same or different squads. The repulsion of squad mates is lower than in inter-squad interaction. Secondly, we model agent-agent and agent-border forces with the Elliptical Specification (ES) II <cit.>. We experimentally compared ES I <cit.>, ES II, and Circular Specification <cit.>. We observed that the ES II works best with the new contact definition, resulting in less oscillation, i.e., better damping behavior, after first contact with borders. § VALIDATION Test hardware consists of Intel i7-7700HQ and 8Gb RAM. The modified HSFM is solved with a Runge-Kutta solver using the Dormand-Prince method of fifth order <cit.> (runge_kutta_dopri5 in boost) and a step size of 0.06sec. Computational times for single squad traversal are listed in Table <ref> (1000 samples per test case). A squad consists of three agents, which is common for German fire brigades. With the low computational times, the robot employing the proposed methodology can generate squad trajectories in real or near-real time, depending on the length of the predicted path. Figure <ref> depicts the paths of a single squad traversing the building with varying motion tactics. Given the parameter configuration in Table <ref>, the motion prediction generates feasible results. Hence, we conclude that the method is well suited for deployment in collaborative rescue applications. § CONCLUSIONS In this work, we proposed a novel motion prediction pipeline for SAR. The method combines optimal path planning based on three specialized types of graphs with a modification of the HSFM <cit.>. The graphs model the tactics applied by firefighters in SAR, while the HSFM uses the paths to generate agent-individual motion trajectories. We modified the agent-border interaction by introducing a soft contact to allow traversal closer to borders. The models are configured with a novel data set. Finally, we showed that the proposed pipeline generates feasible trajectories in SAR and is computed fast depending on map size and agent count. -12cm § ACKNOWLEDGMENT We like to thank the IdF NRW for providing data and Sebastian Döbler, Onur Akin, Lukas Clasen, and Till Waldermann for their support in data collection and implementation. IEEEtran
http://arxiv.org/abs/2306.12089v1
20230621080815
Towards Accurate Translation via Semantically Appropriate Application of Lexical Constraints
[ "Yujin Baek", "Koanho Lee", "Dayeon Ki", "Hyoung-Gyu Lee", "Cheonbok Park", "Jaegul Choo" ]
cs.CL
[ "cs.CL" ]
Stochastic fluctuations of diluted pedestrian dynamics along curved paths Alessandro Corbetta July 31, 2023 ========================================================================= Lexically-constrained NMT (LNMT) aims to incorporate user-provided terminology into translations. Despite its practical advantages, existing work has not evaluated LNMT models under challenging real-world conditions. In this paper, we focus on two important but understudied issues that lie in the current evaluation process of LNMT studies. The model needs to cope with challenging lexical constraints that are “homographs” or “unseen” during training. To this end, we first design a homograph disambiguation module to differentiate the meanings of homographs. Moreover, we propose , which integrates contextually rich information about unseen lexical constraints from pre-trained language models and strengthens a copy mechanism of the pointer network via direct supervision of a copying score. We also release , an evaluation benchmark for assessing the ability of a model to cope with “homographic” and “unseen” lexical constraints. Experiments on and the previous test setup show the effectiveness of our method. The effects of are shown to be remarkable in “unseen” constraints. Our dataset is available at <https://github.com/papago-lab/HOLLY-benchmark>. § INTRODUCTION Lexically-constrained neural machine translation (LNMT) is a task that aims to incorporate pre-specified words or phrases into translations (, inter alia). It plays a crucial role in a variety of real-world applications where it is required to translate pre-defined source terms into accurate target terms, such as domain adaptation leveraging domain-specific or user-provided terminology. For example, as shown in Case A of Table <ref>, an LNMT model successfully translates the source term (“UTF8mj코로나”) into its corresponding target term (“Covid-19”) by adhering to a given lexical constraint (“UTF8mj코로나” → “Covid-19”). Despite its practicality, previous studies on LNMT have not evaluated their performances under challenging real-world conditions. In this paper, we focus on two important but understudied issues that lie in the current evaluation process of the previous LNMT studies. Semantics of lexical constraints must be considered. In previous work, at test time, lexical constraints are automatically identified from the source sentences by going through an automatic string-matching process <cit.>. For example, in Case B of Table <ref>, a source term (“UTF8mj코로나”) in the bilingual terminology is present as a substring in the source sentence. Accordingly, its corresponding target term (“Covid-19”) is automatically bound together as a lexical constraint (“UTF8mj코로나” → “Covid-19”) without considering the semantics of the matched source term,[Here, UTF8mj코로나 in Case B indicates Corona, a brand of beer produced by a Mexican brewery.] which can lead to a serious mistranslation. This automatic string-matching cannot differentiate textually identical yet semantically different source terms. Thus, the more accurately the LNMT reflects the lexical constraint, the more pronounced the severity of the homographic issue is. To address this homograph issue, LNMT systems must be equipped to understand the semantics of identified lexical constraints and determine whether or not these constraints should be imposed. Unseen lexical constraints need to be examined. One desideratum of LNMT systems is their robustness to handle “unseen” lexical constraints, thereby responding to random, potentially neologistic, or technical terms that users might bring up. However, in previous studies, a significant portion of the lexical constraints is exposed during training. <cit.> demonstrated the overlapped ratio of lexical constraints between the training and evaluation data (35.6% on average). Meanwhile, <cit.> also raises the issue of the high frequency of lexical constraints for test sets appearing in the training data. When lexical constraints are included in the training examples, we find that a well-optimized vanilla Transformer <cit.> already satisfies lexical constraints by merely learning the alignment between the source and target terms co-occurring in the parallel training sentences.[We observe that the vanilla Transformer achieves a 66.67% copy success rate.] This presents difficulties in identifying whether the presence of target terms in the output is attributed to the learned alignment, or the proposed components in previous studies. Therefore, it is important to control lexical constraints not exposed during training to examine the model's ability to cope with “unseen” lexical constraints. As a response, we present a test benchmark for evaluating the LNMT models under these two critical issues. Our benchmark is specifically crafted not only to evaluate the performance of LNMT models but also to assess its ability to discern whether given lexical constraints are semantically appropriate or not. To the best of our knowledge, we are the first to release a hand-curated high-quality test benchmark for LNMT. Concurrently, we suggest a pipeline that allows researchers in LNMT communities to simulate realistic test conditions that consider the homograph issue and assign “unseen” lexical constraints. To this end, we propose a two-stage framework to deal with these issues. We first develop a homograph disambiguation module that determines whether LNMT models should apply a given lexical constraint by evaluating its semantic appropriateness. Further, we propose an LNMT model that integrates provided lexical constraints more effectively by learning when and how to apply these lexical constraints. Our contributions are summarized as follows: * We formulate the task of semantically appropriate application of lexical constraints and release a high-quality test benchmark to encourage LNMT researchers to consider real-world test conditions. * We propose a novel homograph disambiguation module to detect semantically inappropriate lexical constraints. * We present an LNMT model which shows the best translation quality and copy success rate in unseen lexical constraints. § BENCHMARK Here, we introduce (homograph disambiguation evaluation for lexically constrained NMT), a novel benchmark for evaluating LNMT systems in two circumstances; either the assigned lexical constraints are semantically appropriate or not, as illustrated in Table <ref>. The entire test data includes 600 test examples on 150 Korean → English lexical constraints. §.§ Test Examples Each test example consists of three main elements, as presented in Table <ref>: (1) a lexical constraint (UTF8mj양수 → amniotic fluid), (2) a source sentence containing the source term (UTF8mj양수) of the lexical constraint, and (3) its reference translation.[We outsourced the translation process to a professional translation company, and each translation was manually reviewed.] While the source term is a homograph with multiple meanings, one of them is chosen to serve as its lexical constraint.[Out of multiple different meanings of a homograph, we select the least frequent one as its lexical constraint. We describe the data construction details in Appendix <ref>.] Then, based on the meaning of the source term in the source sentence, each test example is classified into one of two groups: * Positive Example where the source term in its source sentence is semantically aligned to the lexical constraint (See test examples (a) and (b) in Table <ref>). For positive test examples, we expect lexical constraints should always be applied. * Negative Example where the given lexical constraint is semantically improper to impose (See test examples (c) and (d) in Table <ref>). Negative test examples allow us to evaluate how LNMT models respond to inappropriate lexical constraints. §.§ Positive References As seen in Table <ref>, we provide two auxiliary source-side example sentences demonstrating the specific use of the source term of its lexical constraint, assuming that the meaning can be differentiated by the context used in the sentences rather than the terminology itself. Hereafter, we name these example sentences as positive references. § METHODOLOGY Our methodology for semantically appropriate application of lexical constraints consists of two stages. Initially, we propose a homograph disambiguation module that can differentiate the semantics of lexical constraints. This module determines whether LNMT models should incorporate a lexical constraint or not. Subsequently, LNMT models, in our case, perform the translation, either with or without the given lexical constraints. §.§ Homograph Disambiguation Given a few example sentences demonstrating how to specifically use a word, humans can infer the proper meaning. Likewise, our conjecture is that we can fulfill the homograph disambiguation task by leveraging these inter-sentential relationships. §.§.§ Task Specification Given n example sentences illustrating one specific meaning of a homograph, our homograph disambiguation module aims to determine whether the same word in a newly given sentence, denoted as `New Sentence' in Fig. <ref>, carries the same meaning (label: 1) or not (label: 0). We conducted experiments with two example sentences (i.e., n =2),[We experiment with n=1, 2,and 3. The effect of varying the number of example sentences is analyzed in Appendix <ref>.] and the corresponding model architecture is described in Section <ref>. §.§.§ Model Architecture Input Representations As illustrated in Fig. <ref>, sentence embeddings of example sentences and the new sentence are individually obtained from the PLM and fed into the classifier. Embedding vectors for all the sentences are extracted from the averaged hidden representations of the last K layers of frozen PLM.[We utilize the last 16 layers of klue/roberta-large, a RoBERTa-based PLM trained on Korean corpus. See <https://huggingface.co/klue/roberta-large> for details.] Here, the embedding vector is obtained by the average of hidden representations for the tokens that make up a homograph within the sentence. We denote this averaging operation as Pooling in Fig. <ref>. Binary Classifier Similar to Sentence-BERT <cit.>, we use the concatenation (z ∈ℝ^6m+3) of the following as an input to the classifier: * Contextualized representation of a homograph (u, v, w ∈ℝ^m), * element-wise difference for each pair (u-v, v-w, u-w∈ℝ^m,) * pair-wise cosine similarity scores (sim(u,v), sim(v,w), sim(u,w) ∈ℝ), where m is the dimension of the embeddings and sim(·, ·) denotes the cosine similarity function. Our prediction o ∈ [0,1] for a `New Sentence' is calculated as o = σ(max(0, zW_r+b_r)W + b), where W_r∈ℝ^(6m+3)× m and b_r are the weight matrix and bias vector of an intermediate layer, respectively. W ∈ℝ^m× 1 and b are the weight matrix and bias vector for the final prediction layer followed by σ(·), which represents the sigmoid function. §.§ In this subsection, we introduce our LNMT model, , which stands for leveraging pre-trained language model with direct supervision on a copying score for LNMT, and its detailed implementation. To better incorporate target terms into the translations, combines LeCA <cit.> with PLM and strengthens a pointer network with supervised learning of the copying score. §.§.§ Problem Statement Lexically-constrained NMT Suppose X = (x_1, x_2, ⋯, x_|X|) as a source sentence and Y = ( y_1, y_2, ⋯, y_|Y| ) as a target sentence. Given the constraints C = (C_1, C_2, ⋯, C_n) where each constraint C_i = (C_i,S, C_i,T) consists of the source term C_i,S and corresponding target term C_i,T, LNMT aims to incorporate C_1:n,T into its generation. The conditional probability of LNMT can be defined as p(Y| X, C;θ) = ∏_t=1^|Y| p(y_t|y_<t, X, C ; θ). Input Data As in <cit.>, we modify X as X̂ = ( X, <sep>, C_1, T, ⋯, <sep>, C_n, T) by appending <sep> tokens followed by target terms, as illustrated in Table <ref>.[In our training, we randomly sample target terms from the target sentence. Please refer to Appendix <ref> for details.] If there are no lexical constraints, a source sentence remains the same, i.e., X̂ = X.[At test time, we append target terms only when lexical constraints are determined to be used by the homograph disambiguation module (as indicated in Table <ref>).] Combining a source sentence with target terms leads to the modification of Eq. (<ref>) as the following: p(Y| X, C;θ) = ∏_t=1^|Y| p(y_t|y_<t, X̂ ; θ). §.§.§ Integration of PLM As PLM such as BERT <cit.> is trained on large amounts of unlabeled data, leveraging PLM for LNMT can provide rich contextualized information of X, even in controlled unseen lexical constraint scenarios. We first feed the source sentence X to a frozen PLM to obtain a representation B of a source sentence, where B is the output of the last layer of the PLM. Conversely, our NMT model based on <cit.> receives a modified source sentence X̂ as input. Let L denote the number of encoder and decoder layers of NMT, H^l be the output of the encoder of NMT at the l-th layer, and h_t^l denote the t-th element of H^l. For each layer l ∈ [1, L], we employ multi-head attention with the output of PLM as in <cit.>, denoted as MHA_B. This maps the output of the NMT encoder at l-1th layer into queries and output of PLM, B, into keys and values.[Please refer to Appendix <ref> for more details.] The output of the t-th element of the NMT encoder at the l-th layer is given by h̃_t^l = 1/2( MHA (h_t^l-1, H^l-1, H^l-1) + MHA_B (h_t^l-1, B, B ) ) + h_t^l-1, h_t^l = LN(FFN(LN(h̃_t^l)) + h̃_t^l), where LN(·) denotes Layer normalization in  <cit.> and MHA and FFN(·) are the multi-head attention and feed-forward network, respectively. Similar to the encoder, multi-head attention with PLM is introduced for each decoder layer.[Please refer to Appendix <ref> for more details.] Combined with Section <ref>, a highly contextualized representation is given to the pointer network. §.§.§ Supervision on a Copying Score Pointer Network To copy target terms from X̂, we introduce a pointer network <cit.> as in <cit.>. For each time step, a pointer network takes in the output of the encoder and outputs a copying score g_t^copy∈ [0, 1], which controls how much to copy. The output probability of the target word y_t can be calculated as p(y_t|y_<t, X̂ ; θ) = (1 - g_t^copy) × p_t^word + g_t^copy× p_t^copy, where p_t^copy is a probability of copying, and p_t^word is a probability of the target word y_t in the vocabulary.[Please refer to Appendix <ref> for more details.] Copying Score As implied by Eq. (<ref>), inaccurately predicted g_t^copy results in the failure of copying target terms. However, in previous research on LNMT, the importance of a copying score was relatively understated. Despite the high probability of copying p_t^copy, an incorrect copying score can even lower the output probability of the target terms. Therefore, we propose a novel supervised learning of the copying score g_t^copy to obtain a more accurate value. Our supervision of the copying score strengthens the copy mechanism of the pointer network by allowing the model to learn exactly when to copy. Since target terms are in the source sentence, we can determine which words should be copied from the source sentence. For example, when translating a source sentence in Table <ref>, the appended target term, Covid-19, must be copied. Thus, the copying score g_t^copy of the target term Covid-19 should be higher, and g_t^copy should be lower for the remaining words in the target sentence. Our training objective can be defined as L(θ) = - ∑_t=1^|Y| log p(y_t|y_<t, X̂ ; θ) - λ J(θ), J(θ) = α∑_t∉ C_1:n, T (1 - g_t) ×log ( 1- g_t^copy) + β∑_t ∈ C_1:n, T g_t×log g_t^copy, where a gold copying score g_t is set to zero for t ∈{t| y_t ∉ C_1:n, T}; otherwise, g_t is set to one for t ∈{t| y_t ∈ C_1:n, T}. To mitigate the length imbalance between the target terms and remaining words in the target sentence, we set α and β to the value obtained by dividing their respective lengths from the total length. § EXPERIMENTS ON THE BENCHMARK In this section, we report the performance of our methodology when tested on the benchmark. In Section <ref>, we evaluate the performance of our homograph disambiguation module in determining the semantic appropriateness of a lexical constraint. In Section <ref>, we assess the performance of LNMT models using positive examples from the benchmark under conventional settings. Subsequently, we investigate the potential advantages that the homograph disambiguation module might bring when applied to the negative examples from the benchmark. §.§ Homograph Disambiguation §.§.§ Data Here, we present our dataset for training the homograph disambiguation module. Our training data was collected from the Korean dictionary[To release data, we collected the examples that are controlled by the appropriate license. CC BY-SA 2.0 KR.] and we manually inspected the quality of each sentence. In line with Fig. <ref>, each example consists of a triplet of example sentences containing a common homograph. Depending on the inter-sentential relationships between each input sentence, the homograph disambiguation module outputs a binary label: “1” is assigned if the homograph carries the same meaning in all sentences, and “0” if used differently in one example sentence. The brief data statistics of the training data are reported in Table <ref>. Note that any homographs are not allowed to be overlapped across train, validation, and test datasets. At test time, we evaluated our model on the benchmark. Specifically, for each lexical constraint, two positive references (refer to Table <ref>) and one of the four test example sentences ((a), (b), (c), or (d) in Table <ref>) is given as a triplet. §.§.§ Results We conducted experiments with two well-known variants of PLM trained on Korean corpora: klue/roberta-base, and klue/roberta-large. Our homograph disambiguation module achieved a test accuracy of 88.7%, and 92.3% when using klue/roberta-base, and klue/roberta-large, respectively. In spite of the imbalanced data distribution shown in Table <ref>, the values of precision and recall are balanced in both classes, as shown in Table <ref>. §.§ Lexically-constrained NMT §.§.§ Training Data We used 1.83M sentence pairs from two publicly available Korean-English datasets as training corpora: IWSLT 17 training data and AI Hub parallel data.[The AI HUB data can be found here: <https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115 topMenu=100 aihubDataSe=realm dataSetSn=126>.] We pre-tokenized the Korean corpora with Mecab and built a joint vocabulary for both languages by learning a Byte Pair Encoding <cit.> model in sentencepiece <cit.> with 32K merge operations. To simulate the unseen lexical constraints, we filtered out about 160K training sentence pairs with test lexical constraints on both sides when experimenting with the benchmark. This filtering process is crucial for examining how the models cope with any lexical constraints that users might introduce. §.§.§ Evaluation Metrics We evaluated the performance of our model in terms of BLEU[We measure the BLEU scores using sacreBLEU <cit.> with the signature nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.0.] and copy success rate (CSR). CSR is a metric for investigating the ratio of imposed lexical constraints met in translations. For a statistical significance test, we use  <cit.> with p=0.05 and 1,000 bootstraps. Test Scenarios There were two important test cases, as shown in Table <ref>. Given a source sentence, we can consider the Soft Matching test case, which allows some morphological variations, as introduced in <cit.>. As illustrated in Table <ref>, since the Korean word UTF8mj소화 can be used in multiple different forms via inflection, any one of the expected candidates (digest, digestion, and digestive) presented in the translation is considered to be correct in terms of CSR. We also have the Hard Matching[This test case is suggested by previous work <cit.>.] test case where the exact target term (e.g., digestion) presented in its reference has to be incorporated in the translation. Note that, this cannot be tested on negative examples since the target terms in lexical constraints do not appear in their references. Baselines * Code-Switching (CS) <cit.> replaces source terms with aligned target terms and learns to copy them via pointer network. * LeCA <cit.> modifies the source sentence as described in Table <ref>, and utilizes pointer network during training. * Cdalign <cit.> proposes constrained decoding based on alignment.[We compare our model to the ATT-INPUT approach, which suffers from high time complexity but guarantees a high CSR.] method TEST1 TEST2 PLUMCOT 96.75 9.42 CS 88.96 7.47 Cdalign 89.94 7.47 LeCA 89.94 7.14 §.§.§ Main Results Simulating Unseen Lexical Constraints Table <ref> shows the importance of simulating unseen lexical constraints. When lexical constraints are exposed during training, the vanilla Transformer already achieves 66.67% of CSR by mere memorization. We observe that eliminating around 160K overlapping training examples results in a significant reduction of CSR (66.67% → 11.97%), indicating that we manage to simulate the conditions where lexical constraints are nothing short of unseen. Results on Positive Examples The performances of LNMT models on positive examples[Recall that positive examples are bound together with semantically appropriate lexical constraints. Refer to (a) and (b) in Table <ref> for details.] are compared in Table <ref>. It is shown that outperforms all the baselines in both metrics by a large margin. Since we simulate the unseen lexical constraints, the external information from the PLM contributes to the increase in BLEU. Combined with the supervision on a copying score, achieves the highest CSR.[Instead of using the benchmark, we also tested the performance of in a test benchmark used in previous studies <cit.> to compare its effectiveness, as shown in Appendix <ref>.] The overall BLEU scores of Hard Matching are shown to be greater than Soft Matching as the expected target terms drawn from reference translations are given to the models. Benefits of Homograph Disambiguation Here, we analyze beneficial effects of homograph disambiguation on LNMT. Since lexical constraints in negative examples[Refer to examples (c) and (d) in Table <ref>.] are semantically improper, the homograph disambiguation module determines whether they should be imposed or not. Corrections are made with its decisions; the corresponding lexical constraints are removed. The effects of the correction are shown in Fig. <ref>. We observed significant drops in CSR across all of the models, which is desirable since the lexical constraints are irrelevant to the context. By removing inappropriate constraints, all the models achieve a consistent and statistically significant improvement in translation quality by a large margin. §.§.§ Ablation Study We study the effect of each component of , and the results are provided in Table <ref>. Compared to the without supervision, the supervision on a copying score significantly improves CSR (93.85% vs. 98.06%). We find that leveraging rich contextual representation of PLM can further improve the translation quality (18.51 → 20.91). The BLEU score of a model that combines only PLM without supervision on a copying score is lower than that of the model. This may simply be due to a higher BLEU score from better reflecting the target terms in the positive examples (an increase in CSR from 93.85% to 98.06%). Combining the two components yields the best performance in both metrics. More ablations can be found in Appendix <ref>. §.§ Qualitative Analysis Table <ref> provides translated examples. Given a lexical constraint, incorporates the target term correctly. In a negative example, the meaning of UTF8mj 세제 is properly translated into detergent by with correction.[Note that the correction is made by homograph disambiguation.] We provide more examples in Table <ref>. § RELATED WORK §.§ Lexically-constrained NMT Recent work on LNMT broadly falls into two categories: decoding algorithms and inline annotation. During beam search, decoding algorithms enforce target terms to appear in the output <cit.>. This approach ensures a high CSR, but the decoding speed is significantly degraded. To alleviate this issue, <cit.> suggests a decoding algorithm with a complexity of O(1) in the number of constraints. Another variation on decoding algorithms utilizes word alignments between source and target terms <cit.>. In inline annotation studies, the model is trained to copy target terms via modification of training data. Either a source term is replaced with the corresponding target term, or the target term is appended to the source sentence <cit.>. Concurrently, <cit.> consider the morphological inflection of lexical constraints during the integration of target terms. While these methods incur a slight computational cost and provide better translation quality, target terms are not guaranteed to appear <cit.>. To better copy target terms in a source sentence, a pointer network <cit.> that uses attention weights to copy elements from a source sentence is introduced <cit.>. In this work, we further enhance the copying mechanism of a pointer network via supervised learning of a copying score that achieves better performance in terms of BLEU and CSR. §.§ Homograph Issue in LNMT <cit.> points out the homograph issue in LNMT in an in-depth error analysis of their model. To the best of our knowledge, the homograph issue was explicitly addressed first in  <cit.>. In their work, given a source homographic term, the most frequent alignment is selected as its correct lexical constraint, while the other alignments are treated as negative terms that should be avoided in the translation. However, low-frequency meanings are important for LNMT since it is not guaranteed that users always bring up generic terminology. Different from their method, our homograph disambiguation module infers the meaning of lexical constraints and makes decisions to impose them or not. Furthermore, we confirm that our method works equally well on “unseen” homographs. §.§ Integration of PLM with NMT Followed by the success of PLM, researchers attempted to distill the knowledge of PLM into NMT <cit.>. BERT-fused <cit.> is one such method; it plugs the output of BERT into the encoder and decoder via multi-head attention. We borrowed the idea from BERT-fused, and for the first time, combined LNMT and PLM, which works well even in “unseen” lexical constraints by leveraging the rich contextual information of PLM. § CONCLUSIONS In this paper, we investigate two unexplored issues in LNMT and propose a new benchmark named . To address the homograph issue of the source terms, we built a homograph disambiguation module to infer the exact meaning of the source terms. We confirm that our homograph disambiguation module alleviates mistranslation led by semantically inappropriate lexical constraints. is also proposed to improve LNMT by using the rich information of PLM and ameliorating its copy mechanism via direct supervision of a copying score. Experiments on our benchmark show that significantly outperforms existing baselines in terms of BLEU and CSR. § LIMITATIONS Our study includes some limitations that must be addressed. Some test examples might have wrong predictions made by the homograph disambiguation module. Specifically, in positive examples where lexical constraints should be imposed, its errors result in wrong corrections (i.e., the elimination of necessary lexical constraints). Table <ref> shows how these erroneous corrections affect the results. We can observe an overall decline in CSR; however, it does not hurt the translation quality. We verify that the differences in BLEU resulting from wrong corrections are not statistically significant for all the methods. Considering the gain achieved in negative examples, as seen in Fig. <ref>, our proposed homograph disambiguation might serve as a useful starting point to address homographs in LNMT; however, there is still room for improvement. Our current homograph disambiguation module is designed as a stand-alone system outside the LNMT. However, building an end-to-end system can be beneficial, which can be addressed in future work. § ACKNOWLEDGMENTS Authors would like to thank all Papago team members for the insightful discussions. Also, we sincerely appreciate the fruitful feedback from Won Ik Cho and CheolSu Kim. We thank the anonymous reviewers for their valuable suggestions for enhancing this work. This work was supported by Papago, NAVER Corp, the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1A2B5B02001913). acl_natbib § BENCHMARK We collected monolingual example sentences that contain one of the pre-specified homographs from the Korean dictionary. For each homograph, retrieved example sentences are classified into multiple groups according to their meanings. We chose one group with the least frequent meaning and used its examples as positive references to determine a lexical constraint for the homograph. Examples from the other groups are considered as negative references. Eventually, six reference sentences were collected for each homograph; more specifically, four positive references and two negative references. Setting aside two positive references for homograph disambiguation, as stated in Table <ref>, we outsourced the translation of two positive and negative examples, as introduced in Table <ref>. For positive examples, professional translators were requested to translate source terms of lexical constraints into pre-defined target terms. We guide the translators to carefully translate negative examples by focusing on the exact meaning of lexical constraints. § IMPLEMENTATION DETAILS §.§ Configuration of We implemented and all the models based on fairseq <cit.>. We matched the embedding dimensions, the number of layers, and the number of attention heads of all models for a fair comparisons. was trained from scratch and klue/roberta-large <cit.> was used for our PLM.[Please refer to <ref> for more details.] §.§ Computational Cost All the experiments were conducted on a single A100 GPU. It takes about 84 hours to train and 5 hours to train the homograph disambiguation module. The number of training / total parameters for is 156M and 493M. The number of training / total parameters for the homograph disambiguation module is 6M and 343M. § ABLATION STUDIES §.§ Weights of the supervised learning of a copying score The results in Table <ref> were reported according to the different weights λ of the supervised learning of the copying score in Eq. (<ref>). Based on experimental results, we were able to find a compromise where λ is 0.2. §.§ Number of example sentences We experimented with a varying number of example sentences. As we use more example sentences, the information from the inter-sentential relationships becomes richer, eventually improving homograph disambiguation performance. Experiments with n=1, 2, and 3 show an accuracy of 91.33%, 92.33%, and 92.67%, respectively. Although an experiment with n=3 provides the best accuracy, collecting positive references can sometimes be burdensome to users. Therefore, we conclude that n should be decided by considering its trade-off. §.§ Randomly Sampled Test Constraints Different from our benchmark, at test time, lexical constraints were randomly sampled from the alignments in each sentence pair in previous studies  <cit.>. Ten different test sets were built based on ten randomly sampled sets of lexical constraints, as described in <cit.>. Test statistics are reported in Table <ref>. It is shown that achieves the highest BLEU. The CSR is slightly lower than Cdalign, indicating that the gain for “seen” constraints is insignificant.[Note that the models are trained with full data as we cannot remove training examples that overlap with random test lexical constraints in advance.] § EQUATION DETAILS Let Q, K, and V be the query, key, and value in <cit.>, respectively. Then MHA in Eq. (<ref>) and Eq. (<ref>) can be calculated as MHA(Q,K,V) = Cat_h=1^H[head_1, ⋯, head_n]W_o, head_i = Attn( QW_i^Q, KW_i^K, VW_i^V), Attn(q,k,v) = softmax(qk^T√(d_k))v, where the projection matrices are parameters W^O∈ℝ^Hd_v× d_model, W_i^Q, W_i^K, W_i^V∈ℝ^d_model× d_k for MHA. In this paper, we employ d_model=768, H=12, d_k=d_v=d_model / H = 64. Note that all baselines and our model use the same number of heads and the same projection matrices size. We use additional multi-head attention, MHA_B, which only differs in projection matrices size where W_i^Q, W_i^K, W_i^V∈ℝ^d_PLM× d_k for MHA_B.[Here, we utilize klue/roberta-large, a RoBERTa-based PLM trained on Korean corpus. The size of d_PLM is 1024.] § INPUT DATA AUGMENTATION As illustrated in Table <ref>, we modify a source sentence X as X̂ by appending <sep> tokens followed by target terms. Since lexical constraints are domain-specific or user-provided terminology, we exclude the top 1,500 frequent words from a 32K joint dictionary. In our training, we randomly sample at most 3 target terms from the target sentence. For each sentence, from 0 to 3 target terms are sampled following the distribution [0.3, 0.2, 0.25, 0.25]. § INTEGRATION OF PLM IN DECODER Here, we follow the same notations in Section <ref>. Let S^l denote the decoder output at l^th layer. s_t^l is the t-th element of S^l, and S_1:t^l denotes t number of elements from s_1^l to s_t^l, masking elements from t+1 to the end. The output of each layer of the decoder can be calculated as ŝ_t^l = LN(MHA(s_t^l-1, S_1:t^l-1, S_1:t^l-1) ) + s_t^l-1 , s̃_t^l = 1/2(MHA(ŝ_t^l, H^L, H^L ) + MHA_B (ŝ_t^l, B, B ) ) + ŝ_t^l , s_t^l = LN(FFN(LN(s̃_t^l)) + s̃_t^l). § POINTER NETWORK We use the same notations in Section <ref> and Appendix <ref> and <ref>. Let |X̂| denotes the length of the modified source sentence X̂, and (α_t,1, α_t,2, ⋯, α_t,|X̂|) denotes the averaged attention weight of MHA(ŝ_t^L, H^L, H^L ) over the multiheads in Eq. (<ref>). Then our copying score g_t^copy can be calculated as g_t^copy =σ(W_g[c_t; s_t^L] + b_g), c_t = ∑_i=1^|X̂|α_t,i× h_i^L, where c_t and s_t^L are concatenated, W_g and b_g are the weight matrix and bias vector, and σ(·) is the sigmoid function.
http://arxiv.org/abs/2306.10978v1
20230619143655
Formation of spin-orbital entangled 2D electron gas in layer delta-doped bilayer iridate La$_δ$Sr$_3$Ir$_2$O$_7$
[ "Amit Chauhan", "Arijit Mandal", "B. R. K. Nanda" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.str-el" ]
APS/123-QED [email protected] [email protected] ^1Condensed Matter Theory and Computational Lab, Department of Physics, IIT Madras, Chennai-36, India ^2Center for Atomistic Modelling and Materials Design, IIT Madras, Chennai-36, India 5d transition metal oxides host a variety of exotic phases due to the comparable strength of Coulomb repulsion and spin-orbit coupling. Herein, by pursuing density-functional studies on a delta-doped quasi-two-dimensional iridate Sr_3Ir_2O_7, where a single SrO layer is replaced by LaO layer, we predict the formation of a spin-orbital entangled two-dimensional electron gas (2DEG) which is sharply confined on two IrO_2 layers close to the LaO layer. In this bilayer crystal structure, an existing potential well is further augmented with the inclusion of positively charged LaO layer which results in confining the extra valence electron made available by the La^3+ ion. The confined electron is bound along crystal a direction and is highly mobile in the bc plane. From the band structure point of view, now the existing half-filled J_eff = 1/2 states are further electron doped to destroy the antiferromagnetic Mott insulating state of IrO_2 layers near to the delta-doped layer. This leads to partially occupied Ir upper-Hubbard subbands which host the spin-orbital entangled 2DEG. The IrO_2 layers far away from the interface remain insulating and preserve the collinear G-type magnetic ordering of pristine Sr_3Ir_2O_7. The conductivity tensors calculated using semi-classical Boltzmann theory at room temperature reveal that the 2DEG exhibits large electrical conductivity of the order of 10^19. Formation of spin-orbital entangled 2D electron gas in layer delta-doped bilayer iridate La_δSr_3Ir_2O_7 B. R. K. Nanda July 31, 2023 ======================================================================================================== § INTRODUCTION The state-of-the-art growth techniques have paved the way to introduce dopants that can be confined in a single atomically thin layer, the so-called delta-doping (δ-doping) <cit.>. δ-doping is widely used to manipulate structures for fundamental importance as well as for novel applications <cit.>. Experimental realization of novel structures based on delta-doping <cit.> has been achieved by employing techniques such as molecular beam epitaxy <cit.>, metalorganic chemical vapor deposition <cit.>, flash lamp annealing <cit.>, etc. Though the δ-doping technique is widely applied in conventional semiconductors such as GaAs, it also paves the way to realize novel quantum phases in transition metal oxides and their heterostructures which involve chemically active d-electrons. For example, δ-doped SrTiO_3 exhibits quantum Hall effect <cit.> and Shubnikov-de Haas oscillations <cit.> arising from a two-dimensional electron gas (2DEG) which possess enhanced electron mobility <cit.>. Recently, iridate heterostructures and interfaces have been epitaxially grown which facilitate the growth of δ-doped iridate structures where an element is substituted with another element having excess holes or electrons. The 5d quantum materials, specifically iridates, are extensively studied as they host exotic states such as spin liquid <cit.>, Dirac and Weyl semimetals <cit.>, topological insulators <cit.>, etc., driven by competing interactions such as onsite Coulomb repulsion (U), spin-orbit coupling (SOC), Hund's coupling, etc. In Ruddlesden-Popper phases of strontium iridate, the Ir ions exhibit 4+ charge state and hence d^5 electronic configuration. Due to strong octahedral crystal field, the d orbital degeneracy gets lifted, giving rise to t_2g and e_g manifold. With the inclusion of strong SOC (≈ 0.43 eV), the t_2g manifold splits into spin-orbital entangled pseudo-spin J_eff = 3/2 and J_eff = 1/2 states (see Fig. <ref>). Due to Ir-5d^5 configuration, the J_eff = 3/2 (m_J = ± 1/2, ± 3/2) states are completely occupied, leaving a single hole in the J_eff = 1/2 state <cit.>. Further, with the inclusion of onsite correlation effect, these doubly degenerate J_eff = 1/2 states further split into lower and upper Hubbard subbands (LHB and UHB) with a gap in between. While the former is occupied, the latter spin-orbital entangled electron state can host 2DEG with intriguing transport properties upon electron doping. The Sr_3Ir_2O_7 (SIO-327, Ir-d^5), the quasi-two-dimensional member in Ruddlesden Popper phases of strontium iridate, has attracted a great deal of attention among theoreticians and experimenters alike, as it builds a perfect platform to realize novel properties via carrier doping of the half-filled Mott state. It has been established experimentally and theoretically that a dilute electron doping via La substitution causes collapse of both Mott and long-range G-type Neel state in (Sr_1-xLa_x)_3Ir_2O_7 (x ≈ 0.04) <cit.>. Moreover, a recent angle-resolved photoemission spectroscopy (ARPES) and optical reflectivity measurements report negative electronic compressibility <cit.> and charge density wave (CDW) instability <cit.> in electron doped SIO-327. Apart from electron doping in SIO-327, a very recent resonant inelastic X-ray scattering (RIXS) study reports the formation of an excitonic insulating state in pristine SIO-327 which may have the potential to exhibit new functionalities upon electron doping <cit.>. Motivated by the aforementioned observations, in this work, we pursue density functional theory (DFT) calculations on δ-doped SIO-327, where a single SrO layer is replaced by LaO layer. A single SIO-327 has six SrO layers along [100]. Therefore, to construct a δ-doped structure, it is sufficient to consider one unit cell and replace one of the SrO layers by LaO layer as shown in Fig. <ref>. The periodic superlattice formed out of it introduces a separation of 21 Åbetween two consecutive LaO layers which is reasonably large to ignore any interactions between these two layers. We predict the formation of a two-dimensional spin-orbital entangled electron gas in δ-doped SIO-327. The 2DEG is found to be non-spin polarized and sharply confined on two IrO_2 layers close to the LaO layer. While the extra La valence electron is bound along crystal a direction due to strong confinement potential formed due to the bilayer crystal structure, it is highly mobile in the bc plane. The spin density analysis suggests that the Neel state of pristine SIO-327 gets destroyed for those IrO_2 layers which are adjacent to the LaO layer. The IrO_2 layers far away from the interface remain insulating and preserve the G-type magnetic ordering of pristine SIO-327. The conductivity tensors estimated using semiclassical Boltzmann theory at room temperature infers that the 2DEG possess ultra-high planar conductivity tensors σ_yy,zz (≈ 10^19) which are three-orders higher as compared to normal tensor σ_xx, manifesting the formation of 2DEG. § STRUCTURAL AND COMPUTATIONAL DETAILS The bilayer crystal structure of SIO-327 is shown in Fig. <ref>. It crystallizes in C2/c space group with monoclinic crystal structure. The IrO_6 octahedra are distorted, the in-plane rotation angle is ≈ 11.81 ^∘ and out-of-plane tilt angle is ≈ 0.32 ^∘, respectively. For pristine SIO-327, the DFT+U+SOC calculations were carried out on the experimental structure <cit.> whereas for δ-doped structure, the calculations were performed on the optimized structure obtained by relaxing the atomic positions and cell volume while keeping the space group symmetry intact. All calculations were performed on a 1×2×1 supercell which is sufficient enough to accomodate the G-type magnetic structure of SIO-327. The strong correlation effect was incorporated via an effective onsite correlation parameter U_eff = U-J = 2 eV through the rotationally invariant approach introduced by Dudarev <cit.>. This choice of U is reasonable for SIO-327 as the experimentally reported value of band gap (≈ 0.27 eV) <cit.> is in close agreement with the theoretically estimated one at U_eff = 2 eV. The plane-wave based projector augmented wave method (PAW) <cit.> was utilized to perform DFT calculations in Vienna ab-initio simulation package (VASP) <cit.> within the Perdew-Burke-Ernzerhof generalized gradient approximation (PBE-GGA) for exchange-correlation functional. The Brillouin zone integrations were carried out using 1 × 4 × 8 Γ-centered k-mesh. The kinetic energy cutoff for plane-wave basis set was chosen to be 400 eV. The planar and macroscopic average potentials were calculated using QUANTUM ESPRESSO <cit.>. The principal components of conductivity tensors σ_αβ were computed at room temperature by employing semiclassical Boltzmann transport theory as implemented in VASPKIT <cit.>. A 5×18×18 k-mesh was used to obtain the smooth interpolation of bands and to compute the necessary derivatives which were required for the calculation of σ_αβ. § BULK ELECTRONIC STRUCTURE Before examining the effect of δ-doping on the electronic and magnetic structure of SIO-327, it is prudent to first analyze the ground state electronic and magnetic properties of undoped SIO-327. The Fig. <ref> depicts the band structures and corresponding atom resolved density of states (DOS) within DFT+U (first column) and DFT+U+SOC (second column) with the onsite Coulomb repulsion U_eff = 2 eV as appropriate for SIO-327. As shown in Figs. <ref>(a,b), the Fermi level (E_F) is well populated by t_2g states whereas the e_g states lie far above the E_F and are unoccupied. The latter occurs due to crystal field of IrO_6 octahedral complex which split five-fold degenerate d states into unoccupied e_g doublet and a low-energy lying and partially occupied t_2g triplet (see Fig. <ref>). The latter leads to a metallic state even with the inclusion of onsite Coulomb repulsion. With the inclusion of SOC along with U, see Figs. <ref>(c,d), the t_2g states further split to form spin-orbital entangled states which are expressed as | 1/2, ±1/2⟩ = 1/√(3)( |yz,σ̅⟩±|xy,σ⟩±i|xz,σ̅⟩), |3/2, ±1/2⟩ = 1/√(6)(|yz,σ̅⟩∓ 2|xy,σ⟩±i|xz,σ̅⟩), |3/2, ±3/2⟩ = 1/√(2)(|yz,σ⟩±i|xz,σ⟩), where ± corresponds to spin σ = ↑/↓, respectively. The J_eff = 3/2 states get completely occupied due to the d^5 valence state of Ir, whereas J_eff = 1/2 state is half-occupied and splits into LHB and UHB with a narrow gap in between (see Fig. <ref>). The opening of a gap with the inclusion of SOC infers that SIO-327 is a weakly correlated spin-orbit-assisted Mott insulator. This insulating state possesses G-type magnetic ordering (nearest-neighbor spins are antiferromagnetically coupled) which is depicted through spin density shown in Fig. <ref>(b). Moreover, the dominant spin-up or spin-dn mixture in local spin density reflects the spin mixture of the |1/2,± 1/2⟩ wave functions, respectively. § FORMATION OF SPIN-ORBITAL ENTANGLED ELECTRON GAS The unoccupied spin-orbital entangled state of the electron, i.e., the UHB of Ir, can give rise to an interesting quantum phenomenon upon electron doping. Recent experimental and theoretical studies have reported many intriguing features such as CDW instability <cit.>, negative electron compressibility <cit.>, and collapse of the band gap in La substituted SIO-327 <cit.>. To the best of our knowledge, a detailed theoretical analysis of the mechanisms governing the band gap collapse and emergent properties in electron-doped SIO-327 is still lacking in the literature. This motivates us to pursue a detailed electron structure analysis of electron doping in this quasi-bilayer compound. For this purpose, we have performed ab-initio calculations on delta-doped SIO-327 (La_δSIO-327). As shown in Fig. <ref> (c), the La_δSIO-327 configuration is achieved by replacing a single SrO layer by LaO layer in one of the bilayer units of the pristine crystal structure. Since La has one extra valence electron as compared to Sr, the δ-doping in this work implies the case of electron doping in SIO-327. As La is an electron donor, it will donate the extra electron to the system. In δ-doped 3d perovskite oxide SrTiO_3 <cit.>, it was found that the extra La electron spreads upto several TiO_2 layers around LaO which makes the Fermi surface complex as it is populated by many Ti-d states. To analyze the spread of electrons in La_δSIO-327, we have calculated the variation of the planar and macroscopic cell-average of the electrostatic potential V^PA and V^MA as a function of layers without (red dashed line) and with (black solid line) considering δ-doping. The V^MA is calculated in two steps. In the first step, the raw three-dimensional potential V^raw is averaged in the yz plane to obtain planar-average potential V^PA: V^PA(x)=1/S∫ V^raw(x,y,z)dydz, where S is the area of [100] plane of the unit cell. In the second step, the V^PA is averaged further over a period c normal to the bc plane to obtain V^MA: V^MA(x)=1/c∫_x-c/2^x+c/2 V^PA(x)dx, where c is the length at which averaging is performed. The c is chosen to be ≈ 3.6 Åwhich is the SrO-IrO inter-layer distance. The calculated V^PA, V^MA, and the electron wave functions for ground and first two excited states are shown in Fig. <ref>. Even without δ-doping, a potential well forms along crystal a direction with depth ≈ 6 eV. This confinement potential exists even in the bulk system due to the quasi-two-dimensional nature of SIO-327. As a consequence, different layers exhibit uneven potential which averages out to form periodic quantum wells (see Figs. <ref> (b,c)). On the contrary, in a regular perovskite structure, such confinement potential will vanish due to the three-dimensional nature of the crystal structure. With δ-doping, the well depth gets further modulated asymmetrically as the LaO layer repeats after lattice period a. Due to the large potential barrier, presumably, the extra La electron will spread up to two IrO_2 layers on either side of the LaO, i.e., layers L_2 and L_3. To validate it, we plot the charge density contours which are shown in Fig. <ref> (a). These contours are obtained by subtracting the charge densities of un-doped and doped SIO-327. As expected, while the Ir atoms corresponding to layers L_2 and L_3 hold most of the charge, the Ir atoms far away from the δ-doped layer do not gain any charge. The inset of Fig. <ref> (c) shows the wave functions (blue solid lines) corresponding to the ground and first two excited states (ψ_n(r) with n = 1, 2, 3,...) in δ-doped potential. These are obtained from the numerical solution to one particle Schrödinger equation. The strong localization of the wave functions inside the well and rapid decay outside the well further validate the obtained charge contours. Moreover, these wave functions resemble to the eigen solutions of the one-dimensional harmonic oscillator. It is important to note that, in δ-doped strongly correlated 3d transition metal oxide SrTiO_3, <cit.> the 2DEG was found to spread up to several TiO_2 layers. In our study, the 2DEG is found to be confined in only two IrO_2 layers and is spin-orbital entangled. Such confinement of 2DEG in few layers makes the Fermi surface devoid of large DOS. Therefore, such systems, once experimentally synthesized, will possess easier tunability and can be engineered to produce emergent quantum phases. To estimate the charged gain by Ir atom of layers L_2 and L_3 and to examine the electronic and magnetic structure of La_δSIO-327, in Figs. <ref> (b,c,d), we have plotted the layer resolved DOS, the schematic representation of the electronic states, and the band structure, respectively. The primary analysis of the ideal charge gained by Ir atoms can be made by analyzing the nominal charge states of single formulae unit of doped and undoped systems. In the undoped system Sr^6+_3Ir^8+_2O_7^14-, Sr and O ions possess 2+ and 2- charge states which give rise to Ir^4+ charge state and hence d^5 electronic configuration. As La possesses 3+ charge state, the adjacent IrO_2 layers on both the sides of LaO layer gain one additional electron to make the nominal charge state of Ir as 3.5. This is ideal when the electron gained is confined to the adjacent IrO_2 layers. The DFT calculations which are discussed next will let us know how much deviation occurs from this ideal distribution. To quantify the distribution of δ-doped electron (one per La atom), we integrate the DOS for UHB of Ir atoms in layers L_2 and L_3 from E_F to conduction band top. The obtained charges are listed in Table <ref>. As expected, while the dominant charge is held by Ir atoms (≈ 70%), the rest is distributed among other atoms. As inferred from band structure, the δ-doping leads to the formation of a metallic state in SIO-327. Due to charge spread, the Ir atom in layers L_2 and L_3 gets electron-doped (see charge contours in Fig. <ref>). Since, in bulk SIO-327, Ir exhibits half-filled J_eff = 1/2 state due to completely occupied J_eff = 3/2 states, which are now electron-doped, the antiferromagnetic Mott insulating state gets destroyed (see the spin and orbital polarization in Table <ref>). As a result, the UHB of the Ir atom from layers L_2 and L_3 gets partially occupied to form non-spin polarized 2DEG. However, the UHB of Ir atoms corresponding to layers L_1 and L_2 remains unoccupied and continues to show bulk insulating behavior with G-type magnetic ordering (see spin density in Fig. <ref> (c)). We did not notice any charge ordering which could have made the system insulator with the stabilization of 3+ and 4+ charge states of Ir, respectively. The layer resolved Ir DOS shown in Fig. <ref>(b) further confirms the picture made through band structure analysis. As can be seen clearly, layers L_1 and L_4 are insulating, whereas layers L_2 and L_3 possess finite DOS at the E_F leading to metallicity and collapse of magnetic ordering. § TRANSPORT PROPERTIES The formation of spin-orbital entangled 2DEG through the confinement effect can be quantified by calculating the conductivity. For this purpose, we have adopted semi-classical Boltzmann transport theory and calculated the conductivity tensors σ from the first-order derivative of the bands ϵ(k): σ_αβ(ϵ) = e^2τ/N∑_i,kv_α(i,k) v_β(i,k) δ(ϵ-ϵ_i,k)/dϵ, where τ is the relaxation time, i is the band index, v is the first-order derivative of ϵ_i,k and N is the number of k points sampled. The notations α and β denotes the crystal axes. The temperature-dependent conductivity evaluated using Eq. 4 is given by σ_αβ(T,μ) = 1/Ω∫σ_αβ(ϵ)[-∂f_μ(T,ϵ)/∂ϵ] dϵ, where Ω is the volume of the unit cell, μ (= E_F) is the chemical potential, and f is the Fermi-Dirac distribution function. The estimated conductivity tensors are shown in Fig. <ref> where we have plotted σ/τ vs Energy at room temperature. Due to confinement potential, while the electron motion is bound along the x direction, in the yz plane, it is free. This is very well reflected in the conductivity tensors. The conductivity tensor σ_xx is negligible as compared to σ_yy,zz. At E_F, the σ_yy,zz is four-order higher as compared to σ_xx. Interestingly, the 2DEG formed out of partially occupied Ir UHB possess ultra-high conductivity of the order of ≈ 10^19 Sm^-1s^-1. § SUMMARY To summarize, by pursuing density-functional studies on δ-doped quasi-two-dimensional iridate Sr_3Ir_2O_7 (SIO-327), where a single SrO layer is replaced by LaO layer, we predict the formation of a two-dimensional spin-orbital entangled electron gas (2DEG). A strong confinement potential forms along the x direction due to the quasi-two-dimensional nature of SIO-327, as well as due to the presence of positively charged LaO layer, the extra La electron gets bound and is highly mobile in the bc plane. The charge analysis suggests that nearly 70% of the doped electrons are confined to the IrO_2 planes adjacent to the LaO layer. As a consequence, the half-filled J_eff = 1/2 state gets electron-doped, leading to the destruction of the antiferromagnetic Mott insulating state and partially occupied Ir upper-Hubbard subbands that host the 2DEG. The IrO_2 layers away from the interface preserve the G-type magnetic ordering of pristine SIO-327. The conductivity tensors calculated using semi-classical Boltzmann theory reveal that the 2DEG possess ultra-high planar conductivity tensors σ_yy,zz (≈ 10^19), which are four-orders higher as compared to normal tensor σ_xx. Our study will encourage the experimenters to grow δ-doped structures for a wide class of spin-orbit correlated materials to explore the formation and application of spin-orbital entangled 2DEG. § ACKNOWLEDGEMENT The authors would like to thank HPCE, IIT Madras for providing the computational facility. This work is funded by the Department of Science and Technology, India, through grant No. CRG/2020/004330. 28 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Wood et al.(1980)Wood, Metze, Berry, and Eastman]Wood1980 author author C. E. C. Wood, author G. Metze, author J. Berry, and author L. F. Eastman, title title Complex free‐carrier profile synthesis by “atomic‐plane” doping of MBE GaAs, https://doi.org/10.1063/1.327383 journal journal J. App. Phys. volume 51, pages 383 (year 1980)NoStop [Chang et al.(2022)Chang, He, Prucnal, Zhang, Zhang, Zhou, Helm, and Dan]Chang2022 author author S. Chang, author J. He, author S. Prucnal, author J. Zhang, author J. Zhang, author S. Zhou, author M. Helm, and author Y. Dan, title title Atomically Thin Delta-Doping of Self-Assembled Molecular Monolayers by Flash Lamp Annealing for Si-Based Deep UV Photodiodes, https://doi.org/10.1021/acsami.2c04002 journal journal ACS Appl. Mater. Interfaces volume 14, pages 30000 (year 2022)NoStop [Liu et al.(1990)Liu, Lee, Chang, Wu, and Liou]Liu1990 author author D. G. Liu, author C. P. Lee, author K. H. Chang, author J. S. Wu, and author D. C. Liou, title title Delta‐doped quantum well structures grown by molecular beam epitaxy, https://doi.org/10.1063/1.104001 journal journal App. Phys. Lett. volume 57, pages 1887 (year 1990)NoStop [Schubert(1990)]Schubert1990 author author E. F. Schubert, title title Delta doping of III–V compound semiconductors: Fundamentals and device applications, https://doi.org/10.1116/1.576617 journal journal J. Vac. Sci. Technol. A volume 8, pages 2980 (year 1990)NoStop [Kim et al.(1993)Kim, Kim, and Min]Kim1993 author author Y. Kim, author M. Kim, and author S. Min, title title Properties of center and edge δ‐doped GaAs‐AlGaAs quantum wells grown by metalorganic chemical vapor deposition, https://doi.org/10.1063/1.108856 journal journal App. Phys. Lett. volume 62, pages 741 (year 1993)NoStop [Kim et al.(1995)Kim, Kim, and Min]KIM1995 author author T. Kim, author Y. Kim, and author S.-K. Min, title title Magnetotransport and electric subband studies of Si-delta-doped Al_0.27Ga_0.73As/GaAs single quantum wells grown by metalorganic chemical vapour deposition, https://www.sciencedirect.com/science/article/pii/004060909406261I journal journal Thin Solid Films volume 254, pages 61 (year 1995)NoStop [Oubram et al.(2009)Oubram, Mora-Ramos, and Gaggero-Sager]Oubram2009 author author O. Oubram, author M. E. Mora-Ramos, and author L. M. Gaggero-Sager, title title Effect of the hydrostatic pressure on two-dimensional transport in delta-doped systems, https://doi.org/10.1140/epjb/e2009-00294-0 journal journal The Eur. Phys. J. B volume 71, pages 233 (year 2009)NoStop [Matsubara et al.(2016)Matsubara, Takahashi, Bahramy, Kozuka, Maryenko, Falson, Tsukazaki, Tokura, and Kawasaki]Matsubara2016 author author Y. Matsubara, author K. S. Takahashi, author M. S. Bahramy, author Y. Kozuka, author D. Maryenko, author J. Falson, author A. Tsukazaki, author Y. Tokura, and author M. Kawasaki, title title Observation of the quantum Hall effect in δ-doped SrTiO_3, https://doi.org/10.1038/ncomms11631 journal journal Nat. Commun. volume 7, pages 11631 (year 2016)NoStop [Jalan et al.(2010)Jalan, Stemmer, Mack, and Allen]Jalan2010 author author B. Jalan, author S. Stemmer, author S. Mack, and author S. J. Allen, title title Two-dimensional electron gas in -doped SrTiO_3, https://doi.org/10.1103/PhysRevB.82.081103 journal journal Phys. Rev. B volume 82, pages 081103 (year 2010)NoStop [Kozuka et al.(2010)Kozuka, Kim, Ohta, Hikita, Bell, and Hwang]Kozuka2010 author author Y. Kozuka, author M. Kim, author H. Ohta, author Y. Hikita, author C. Bell, and author H. Y. Hwang, title title Enhancing the electron mobility via delta-doping in SrTiO_3, https://doi.org/10.1063/1.3524198 journal journal App. Phys. Lett. volume 97, pages 222115 (year 2010)NoStop [Okamoto et al.(2007)Okamoto, Nohara, Aruga-Katori, and Takagi]Okamoto2007 author author Y. Okamoto, author M. Nohara, author H. Aruga-Katori, and author H. Takagi, title title Spin-Liquid State in the S=1/2 Hyperkagome Antiferromagnet Na_4Ir_3O_8, https://doi.org/10.1103/PhysRevLett.99.137207 journal journal Phys. Rev. Lett. volume 99, pages 137207 (year 2007)NoStop [Kenney et al.(2019)Kenney, Segre, Lafargue-Dit-Hauret, Lebedev, Abramchuk, Berlie, Cottrell, Simutis, Bahrami, Mordvinova, Fabbris, McChesney, Haskel, Rocquefelte, Graf, and Tafti]kenney2019 author author E. M. Kenney, author C. U. Segre, author W. Lafargue-Dit-Hauret, author O. I. Lebedev, author M. Abramchuk, author A. Berlie, author S. P. Cottrell, author G. Simutis, author F. Bahrami, author N. E. Mordvinova, author G. Fabbris, author J. L. McChesney, author D. Haskel, author X. Rocquefelte, author M. J. Graf, and author F. Tafti, title title Coexistence of static and dynamic magnetism in the Kitaev spin liquid material Cu_2IrO_3, https://doi.org/10.1103/PhysRevB.100.094418 journal journal Phys. Rev. B volume 100, pages 094418 (year 2019)NoStop [Takahashi et al.(2019)Takahashi, Wang, Arsenault, Imai, Abramchuk, Tafti, and Singer]Takahashi2019 author author S. K. Takahashi, author J. Wang, author A. Arsenault, author T. Imai, author M. Abramchuk, author F. Tafti, and author P. M. Singer, title title Spin Excitations of a Proximate Kitaev Quantum Spin Liquid Realized in Cu_2IrO_3, https://doi.org/10.1103/PhysRevX.9.031047 journal journal Phys. Rev. X volume 9, pages 031047 (year 2019)NoStop [Chauhan and Nanda(2022)]Chauhan2022 author author A. Chauhan and author B. R. K. Nanda, title title Exploration of trivial and nontrivial electronic phases and of collinear and noncollinear magnetic phases in low-spin d^5 perovskites, https://doi.org/10.1103/PhysRevB.105.045127 journal journal Phys. Rev. B volume 105, pages 045127 (year 2022)NoStop [Ueda et al.(2018)Ueda, Kaneko, Ishizuka, Fujioka, Nagaosa, and Tokura]Ueda2018 author author K. Ueda, author R. Kaneko, author H. Ishizuka, author J. Fujioka, author N. Nagaosa, and author Y. Tokura, title title Spontaneous Hall effect in the Weyl semimetal candidate of all-in all-out pyrochlore iridate, https://doi.org/10.1038/s41467-018-05530-9 journal journal Nat. Commun. volume 9, pages 3032 (year 2018)NoStop [Pesin and Balents(2010)]Pesin2010 author author D. Pesin and author L. Balents, title title Mott physics and band topology in materials with strong spin–orbit interaction, https://doi.org/10.1038/nphys1606 journal journal Nat. Phys. volume 6, pages 376 (year 2010)NoStop [Swift et al.(2018)Swift, Porter, Wilson, and Van de Walle]Swift2018 author author M. W. Swift, author Z. Porter, author S. D. Wilson, and author C. G. Van de Walle, title title Electron doping in Sr_3Ir_2O_7: Collapse of band gap and magnetic order, https://doi.org/10.1103/PhysRevB.98.081106 journal journal Phys. Rev. B volume 98, pages 081106 (year 2018)NoStop [Hogan et al.(2015)Hogan, Yamani, Walkup, Chen, Dally, Ward, Dean, Hill, Islam, Madhavan, and Wilson]Hogan2015 author author T. Hogan, author Z. Yamani, author D. Walkup, author X. Chen, author R. Dally, author T. Z. Ward, author M. P. M. Dean, author J. Hill, author Z. Islam, author V. Madhavan, and author S. D. Wilson, title title First-Order Melting of a Weak Spin-Orbit Mott Insulator into a Correlated Metal, https://doi.org/10.1103/PhysRevLett.114.257203 journal journal Phys. Rev. Lett. volume 114, pages 257203 (year 2015)NoStop [He et al.(2015)He, Hogan, Mion, Hafiz, He, Denlinger, Mo, Dhital, Chen, Lin, Zhang, Hashimoto, Pan, Lu, Arita, Shimada, Markiewicz, Wang, Kempa, Naughton, Bansil, Wilson, and He]He2015 author author J. He, author T. Hogan, author T. R. Mion, author H. Hafiz, author Y. He, author J. D. Denlinger, author S.-K. Mo, author C. Dhital, author X. Chen, author Q. Lin, author Y. Zhang, author M. Hashimoto, author H. Pan, author D. H. Lu, author M. Arita, author K. Shimada, author R. S. Markiewicz, author Z. Wang, author K. Kempa, author M. J. Naughton, author A. Bansil, author S. D. Wilson, and author R.-H. He, title title Spectroscopic evidence for negative electronic compressibility in a quasi-three-dimensional spin–orbit correlated metal, https://doi.org/10.1038/nmat4273 journal journal Nat. Mat. volume 14, pages 577 (year 2015)NoStop [Chu et al.(2017)Chu, Zhao, de la Torre, Hogan, Wilson, and Hsieh]Chu2017 author author H. Chu, author L. Zhao, author A. de la Torre, author T. Hogan, author S. D. Wilson, and author D. Hsieh, title title A charge density wave-like instability in a doped spin–orbit-assisted weak Mott insulator, https://doi.org/10.1038/nmat4836 journal journal Nat. Mat. volume 16, pages 200 (year 2017)NoStop [Mazzone et al.(2022)Mazzone, Shen, Suwa, Fabbris, Yang, Zhang, Miao, Sears, Jia, Shi, Upton, Casa, Liu, Liu, Batista, and Dean]Mazzone2022 author author D. G. Mazzone, author Y. Shen, author H. Suwa, author G. Fabbris, author J. Yang, author S.-S. Zhang, author H. Miao, author J. Sears, author K. Jia, author Y. G. Shi, author M. H. Upton, author D. M. Casa, author X. Liu, author J. Liu, author C. D. Batista, and author M. P. M. Dean, title title Antiferromagnetic excitonic insulator state in Sr_3Ir_2O_7, https://doi.org/10.1038/s41467-022-28207-w journal journal Nat. Commun. volume 13, pages 913 (year 2022)NoStop [Dudarev et al.(1998)Dudarev, Botton, Savrasov, Humphreys, and Sutton]Dudarev1998 author author S. L. Dudarev, author G. A. Botton, author S. Y. Savrasov, author C. J. Humphreys, and author A. P. Sutton, title title Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study, https://doi.org/10.1103/PhysRevB.57.1505 journal journal Phys. Rev. B volume 57, pages 1505 (year 1998)NoStop [King et al.(2013)King, Takayama, Tamai, Rozbicki, Walker, Shi, Patthey, Moore, Lu, Shen, Takagi, and Baumberger]king2013 author author P. D. C. King, author T. Takayama, author A. Tamai, author E. Rozbicki, author S. M. Walker, author M. Shi, author L. Patthey, author R. G. Moore, author D. Lu, author K. M. Shen, author H. Takagi, and author F. Baumberger, title title Spectroscopic indications of polaronic behavior of the strong spin-orbit insulator Sr_3Ir_2O_7, https://doi.org/10.1103/PhysRevB.87.241106 journal journal Phys. Rev. B volume 87, pages 241106 (year 2013)NoStop [Blöchl(1994)]Bloch1994 author author P. E. Blöchl, title title Projector augmented-wave method, https://doi.org/10.1103/PhysRevB.50.17953 journal journal Phys. Rev. B volume 50, pages 17953 (year 1994)NoStop [Kresse and Joubert(1999)]Kresse1999 author author G. Kresse and author D. Joubert, title title From ultrasoft pseudopotentials to the projector augmented-wave method, https://doi.org/10.1103/PhysRevB.59.1758 journal journal Phys. Rev. B volume 59, pages 1758 (year 1999)NoStop [Kresse and Furthmüller(1996)]Kresse1996 author author G. Kresse and author J. Furthmüller, title title Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, https://doi.org/10.1103/PhysRevB.54.11169 journal journal Phys. Rev. B volume 54, pages 11169 (year 1996)NoStop [Giannozzi et al.(2009)Giannozzi, Baroni, Bonini, Calandra, Car, Cavazzoni, Ceresoli, Chiarotti, Cococcioni, Dabo, Corso, de Gironcoli, Fabris, Fratesi, Gebauer, Gerstmann, Gougoussis, Kokalj, Lazzeri, Martin-Samos, Marzari, Mauri, Mazzarello, Paolini, Pasquarello, Paulatto, Sbraccia, Scandolo, Sclauzero, Seitsonen, Smogunov, Umari, and Wentzcovitch]Giannozzi2009 author author P. Giannozzi, author S. Baroni, author N. Bonini, author M. Calandra, author R. Car, author C. Cavazzoni, author D. Ceresoli, author G. L. Chiarotti, author M. Cococcioni, author I. Dabo, author A. D. Corso, author S. de Gironcoli, author S. Fabris, author G. Fratesi, author R. Gebauer, author U. Gerstmann, author C. Gougoussis, author A. Kokalj, author M. Lazzeri, author L. Martin-Samos, author N. Marzari, author F. Mauri, author R. Mazzarello, author S. Paolini, author A. Pasquarello, author L. Paulatto, author C. Sbraccia, author S. Scandolo, author G. Sclauzero, author A. P. Seitsonen, author A. Smogunov, author P. Umari, and author R. M. Wentzcovitch, title title QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, https://doi.org/10.1088/0953-8984/21/39/395502 journal journal J. Phys.: Cond. Matt. volume 21, pages 395502 (year 2009)NoStop [Wang et al.(2021)Wang, Xu, Liu, Tang, and Geng]VASPKIT author author V. Wang, author N. Xu, author J.-C. Liu, author G. Tang, and author W.-T. Geng, title title VASPKIT: A user-friendly interface facilitating high-throughput computing and analysis using VASP code, https://www.sciencedirect.com/science/article/pii/S0010465521001454 journal journal Computer Physics Communications volume 267, pages 108033 (year 2021)NoStop
http://arxiv.org/abs/2307.05380v1
20230607153003
Optimized Crystallographic Graph Generation for Material Science
[ "Astrid Klipfel", "Yaël Frégier", "Adlane Sayede", "Zied Bouraoui" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cs.LG" ]
Comparison of SeDuMi and SDPT3 Solvers for Stability of Continuous-time Linear System Guangda Xulabel1 June 7, 2023 ===================================================================================== Graph neural networks are widely used in machine learning applied to chemistry, and in particular for material science discovery. For crystalline materials, however, generating graph-based representation from geometrical information for neural networks is not a trivial task. The periodicity of crystalline needs efficient implementations to be processed in real-time under a massively parallel environment. With the aim of training graph-based generative models of new material discovery, we propose an efficient tool to generate cutoff graphs and k-nearest-neighbours graphs of periodic structures within GPU optimization. We provide pyMatGraph a Pytorch-compatible framework to generate graphs in real-time during the training of neural network architecture. Our tool can update a graph of a structure, making generative models able to update the geometry and process the updated graph during the forward propagation on the GPU side. Our code is publicly available at https://github.com/aklipf/mat-graph. § INTRODUTION New materials discovery is a fundamental challenge in material sciences where high-throughput screening based on machine learning models is largely employed to obtain materials with desired properties. Crystalline (crystal) material generation has recently received considerable attention, e.g. <cit.>. In our setting, we are interested in generating new crystal materials for developing new solar panels with a band gap enabling hydrolyse. This helps to solve problems related to clean energy production and storage, which is one of the major challenges facing our society. It can also be used to produce hydrocarbons from CO2, helping to reduce the carbon footprint of human activities. From organic chemistry to material science, Graph Neural Networks (GNN) have received increasing attention in a variety of tasks such as classification <cit.> and generation <cit.>. Notice that organic molecules are composed of wide carbon chains with a limited variety of atoms, while crystal materials are three-dimensional periodic structures composed of a wide variety of chemical bonds and atoms. The periodic structure of crystals is often represented as a parallelepiped tiling, a.k.a crystal lattice or unit cell. While generating graph-based representations of organic molecules is straightforward, the periodic structure of crystals makes difficult graph processing when training a generative model, and in particular when a massively parallel environment is required. More precisely, generative models may update the geometry of a chemical structure during forward propagation. However, since the graph associated with a given structure is built from the local environment of atoms, a modification of the geometry leads to the modifications of the graph associated with the structure. Consequently, building a generative model with a dynamic graph is hard to achieve on a periodic structure compared to organic molecules. When training graph-based generative models for material discovery, cutoff distance is a commonly used technique <cit.>. It designates a relative distance threshold value above which no interaction between nodes is considered. In the same vein, <cit.> suggests that k-nearest-neighbours (KNN) graphs can also be a good choice for GNN models. KNN-graph is a type of graph where all the nodes are connected to the k-nearest nodes. When processing small molecules, any naive strategy of computing the interatomic distances is feasible, allowing to compute KNN or cutoff graph in a short amount of time and reasonable memory. However, for periodic structures which are infinite, the search area should be carefully selected to avoid unnecessary calculation and memory saturation. In fact, the volume of the search space expands with the cube of the search radius. As such, possible graphs should be generated in milliseconds to be usable in practice during the training process. Moreover, a periodic structure is represented with a multi-graph where a given node can share multiple edges with another and with itself which brings more complexity to the graph generation process. Finally, for big structures, a processing strategy suitable for massively parallel environments should be used in order to deal with a batch of multiple structures at the same time. To address the aforementioned issues, we propose an efficient tool that solves KNN and cutoff graph generation for crystalline materials. We provide a compatible implementation with PyTorch that performs on GPUs[Code available at <https://github.com/aklipf/mat-graph>]. We used an approach inspired by the KD-tree search algorithm adapted for periodic structures and propose a data structure adapted to massively parallel environments (GPUs) that effectively keeps track of the KNN of each atom. We empirically show the benefits of using our tool. § CRYSTALLOGRAPHIC GRAPH GENERATION A crystalline structure can be defined with a cloud of atoms and a repetition pattern that represent periodicity. The repetition pattern is often described as a parallelepiped called a lattice or a cell. The periodic structure is obtained with the tiling of the space by the crystal cell. Consequently, a given atom inside of the cell has multiple positions because of the tiling in space and the local environment of an atom which can overlap with adjacent repetition. §.§ Crystallographic Graph We follow <cit.> to define the graph associated with a crystal material as an oriented graph where each edge is represented by triplets containing the index of the source node, the index of the destination node and the relative cell coordinate of the destination node. Figure <ref> illustrates this representation. Notice that the definition of graph provided in <cit.> generalizes to most of the graph definitions proposed in previous works <cit.>. We now recall the formal definition of crystalline structure <cit.>. The representation space of featured materials M^F is the disjoint union ∐_n ∈ℕ^F_n where: M^F_n = {(ρ, x, z) | ρ∈ GL_d(ℝ), x ∈ [0, 1[^n × d, z ∈ F^n } Chemical materials are represented in = ^ℕ, with atomic numbers as feature sequence z. ^F_n is an infinite set of triplet ρ, x, z that represents all possible materials with n atoms. The atomic number has a chemistry reference, e.g. 1 for hydrogen or 6 for carbon. We call directed 2-graph Γ = (Γ_0, Γ_1, Γ_2) a triplet of sets together with applications: * π_1 : Γ_1 →Γ_0 ×Γ_0, written π_1(γ) = (γ, γ) * π_2 : Γ_2 →Γ_0 ×Γ_0 ×Γ_0 We call Γ a directed 1-graph when Γ_2 = ∅. The aforementioned graphs are often called multi-graphs or hyper-graphs because they generalise 1-graphs to dimensions ≥ 1. They are directed because we do not assume any symmetry on Γ w.r.t vertice permutations. Recall that π_1 and π_2 may not be injective. Let M = (ρ, x, z) in _n^F be a material and c_i > 0 for 1 ≤ i ≤ n denotes cutoff distances. We define a directed 2-graph Γ = Γ_M,c by the graded components: * Γ_0 = {1, …, n } * Γ_1 = { (i, j, τ) ∈Γ_0 ×Γ_0 ×^d | || ρ (x_j - x_i + τ) || < c_i } * Γ_2 = { (γ, γ') ∈Γ_1 ×Γ_1 | γ = γ'} This graph construction includes many definitions of material graphs, making it versatile and usable in most contexts. This definition includes a graph built from a constant cutoff distance (i.e. c_i is constant), a graph built from k nearest neighbour or built from chemical properties such as the covalent radii. For more detail, we refer to <cit.>. §.§ Generation Process To handle the periodic nature of crystalline, we adapt our graph generation process to work in a torus space. To this end, graph generation is performed by exploring the direct repetition of a cell where we start by evaluating the adjacent cell and extend the search area until we find all the edges. Our graph generation method is built upon two main parts: a searching algorithm and an ordered stack. Combined, the generation process follows an iterative process limiting the RAM usage by splitting the search area. Our generation process remains fast since only a few iterations are required, avoiding useless search areas. Searching procedure Our search procedure is based on a classic KD-tree search strategy. As shown in Figure <ref>, a search radius is used to represent the area where connected nodes can exist. On the other side, we expand the explored area up to a search radius. As the search radius is defined with the KNN in the case of a KNN-graph, the search radius decreases over time when a new area is explored. The search procedure pseudo-code is given by Algorithm <ref>. Ordered stack To keep track of the k closest points already discovered by our search procedure, we proposed an efficient data structure to store points. Our ordered stack first concatenates new data and then sorts them by distance. After that, the edges are filtered to keep only the KNN in the case of a KNN-graph or the edges under a given cutoff distance in the case of a cutoff-graph. In addition to graph generation, our tool provides additional functionalities such as: * Symmetric graph: as some GNN require symmetric directed graphs to perform specific message-passing schema, our tool includes a procedure that makes a given graph symmetric by adding missing edges while guaranteeing the uniqueness of the edges. * Triplets generation: We provide an implementation to generate triplets composed of two edges sharing the same source nodes during the run-time. This task is important because recent works use triplets information during inference <cit.>. § PERFORMANCE EVALUATION To evaluate the performance of our tool, we conducted experiments on Materials project <cit.> which is a dataset composed of 133420 crystalline materials studied with ab inito calculation. We considered the same setting as <cit.> where structures composed of more than 64 atoms are removed since they are in general considered outliers. The experiments are performed on an Nvidia quadro RTX 8000 GPU. CPU vs GPU We compared the time required to process all the structures for our tool with and without GPU optimization. We used a fixed batch size of 256 structures and generated the structures for the 16, the 32 and the 64 nearest neighbours of atoms. As shown in table <ref>, the KNN-graph generated on GPU is up to 40 times faster than an equivalent CPU library. Complexity, inference time and RAM usage To check the time complexity of our method, we compare the generation time of one batch with various KNN settings in Table <ref> and batch size in Table <ref>). Experiments on batch size have been performed for a KNN-graph with 32 neighbours. § CONCLUSION We propose an efficient tool to convert crystalline materials into graphs. Our library allows for reducing the time spent during prepossessing. More importantly, the graph conversion is quick enough to be used during the training process without the prepossessing step and updates the graph while updating the geometry of a given structure. Our tool opens new possibilities in generative networks for material science. § ACKNOWLEDGMENTS This work has been supported by ANR-22-CE23-0002 ERIANA, ANR-20-THIA-0004 and by HPC resources from GENCI-IDRIS (Grant 2022-[AD011013338]). named
http://arxiv.org/abs/2306.03336v1
20230606011339
Exploiting Scratchpad Memory for Deep Temporal Blocking: A case study for 2D Jacobian 5-point iterative stencil kernel (j2d5pt)
[ "Lingqi Zhang", "Mohamed Wahib", "Peng Chen", "Jintao Meng", "Xiao Wang", "Toshio Endo", "Satoshi Matsuoka" ]
cs.DC
[ "cs.DC" ]
A case study for 2D Jacobian 5-point iterative stencil kernel (j2d5pt) authorsperrow=4 Tokyo Tech AIST Japan RIKEN R-CCS Japan AIST RIKEN R-CCS Japan SIAT China ORNL USA Tokyo Tech Japan RIKEN R-CCS Tokyo Tech Japan Tokyo Institute of Technology, Japan AIST, Japan [email protected] RIKEN CCS, Hyogo, Japan [email protected] AIST, Japan RIKEN CCS, Hyogo, Japan [email protected] Oak Ridge National Laboratory, US Wangx2@orn Shenzhen Institutes of Advanced Technology, China Wangx2@orn General Purpose Graphics Processing Units (GPGPU) are used in most of the top systems in HPC. The total capacity of scratchpad memory has increased by more than 40 times in the last decade. However, existing optimizations for stencil computations using temporal blocking have not aggressively exploited the large capacity of scratchpad memory. This work uses the 2D Jacobian 5-point iterative stencil as a case study to investigate the use of large scratchpad memory. Unlike existing research that tiles the domain in a thread block fashion, we tile the domain so that each tile is large enough to utilize all available scratchpad memory on the GPU. Consequently, we process several time steps inside a single tile before offloading the result back to global memory. Our evaluation shows that our performance is comparable to state-of-the-art implementations, yet our implementation is much simpler and does not require auto-generation of code. <ccs2012> <concept> <concept_id>10010147.10010169.10010170.10010173</concept_id> <concept_desc>Computing methodologies Vector / streaming algorithms</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010169.10010170.10010174</concept_id> <concept_desc>Computing methodologies Massively parallel algorithms</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Computing methodologies [500]Computing methodologies Vector / streaming algorithms [500]Computing methodologies Massively parallel algorithms Exploiting Scratchpad Memory for Deep Temporal Blocking Jintao Meng July 31, 2023 ======================================================= § INTRODUCTION When observing the previous generations of GPUs, Nivida GPUs for instance, there is a clear trend of increase in the cache capacity. Especially the volume of scratchpad memory (or shared memory in CUDA <cit.>) increased from 720 KB in K20 (2013) to 17.30 MB in A100 (2020). The latest H100 (2023) GPU even pushes max usable shared memory to be 29.83 MB to more than 200 KB per stream multiprocessor(SM). GPU optimizations that are commonly used in HPC applications were designed mostly assuming that scratchpad memory is not larger than 100 KB per stream multiprocessor <cit.>. There is a potential in leveraging the untapped scratchpad memory to aggressively optimize for data locality. In this work, we use a case study kernel commonly used in HPC applications, namely 2D Jacobian 5-point iterative stencil, to fully take advantage of the scratchpad memory for tiling data in an unusual way. More specifically, we run each of the tiles in a serial fashion one after the other while aggressively using the shared memory to run each tile entirely from shared memory. We use device-wide synchronization to resolve the spatial dependency between thread blocks. We demonstrate a new approach to leverage the large capacity of shared memory by proposing a temporal blocking stencil scheme that optimizes for peak data locality, i.e. running the entire problem from shared memory. Our method is much simpler than complex temporal blocking schemes; iterative kernels that use our methods can be manually written, unlike complex temporal schemes that require auto-generation of code. § RELATED WORK Temporal blocking <cit.> tiles the domain and processes the domain with in combined time steps. Due to space limitations, we mainly review StencilGen <cit.> and AN5D <cit.>. Both works used 2.5D or 3.5D tiling and relied on code auto generation for performance optimization. In addition, they relied on overlapped tiling within thread blocks. They did not exploit the inter thread block data exchange pattern. Regarding the usage of scratchpad memory, StencilGen stores all combined time steps in scratchpad memory; AN5D uses scratchpad memory conservatively for double buffer. As a result, in the j2d5pt double-precision kernel. StencilGen and AN5D consumed about 4.32 MB and 0.864 MB scratchpad memory, respectively. So, both AN5D and StencilGen left most of the scratchpad memory untapped, and are overly complex to implement. § DEEP TEMPORAL BLOCKING (DTB) language = C++, breaklines = true, breakindent = 10pt, lineskip=-1pt, basicstyle = , commentstyle = , classoffset = 0, keywordstyle = , stringstyle = , frame = trbl, framesep=0pt, numbers = left, stepnumber = 1, xrightmargin=12pt, xleftmargin=0pt, numberstyle = , tabsize = 1, captionpos = t, directivestyle=, emph=int,char,double,float,unsigned, int3, float4, float2, emphstyle=, escapeinside=<@@> §.§ Basic function Listing <ref> shows the base kernel function we used in this case study. We only modified the input and output pointer location to use scratchpad memory. In this kernel, we move the time loop from the host side to he be inside the kernel. Next, we tile the domain of the problem spatially and run the tiles in a serial fashion. For each tile, we run it entirely to completion, over all its time steps, before we start on the next tile. §.§ Dependency Between Thread Blocks We use the CUDA grid-level barrier to ensure that each thread block can exchange the halo region correctly. We use the bulk synchronous parallel (BSP) model. §.§ Processing the Tiles in Order After we load a tile into the scratchpad memory, we process the tile for several time steps (temporal blocking) before moving to the next tiling. Figure <ref> shows the process. § EVALUATION We compare DTB with StencilGen <cit.> and AN5D <cit.>, the state-of-the-art implementations for temporal blocking for stencils (a domain size of 8192^2). We used 8592×8328 to run the DTB. We also report a pruned version that considers 8192^2 as a valid domain size. Figure <ref> shows the result: the performance of DTB is comparable to that of state-of-the-art temporal blocking implementations (SOTAs). § CONCLUSION In this work, we discuss a case study on the use of scratchpad memory for DTB on the j2d5pt stencil. Instead of applying a complex temporal blocking implementation, we just tile the domain so that each tile fully occupies the scratchpad memory. Evaluation shows that DTB is compatible with other SOTAs. We anticipate that DTB could perform even better on a larger scratchpad memory architecture, which would be explored in future work. This work was supported by JSPS KAKENHI under Grant Number JP21K17750. This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This research used resources at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. The authors wish to express their sincere appreciation to Jens Domke, Aleksandr Drozd, Emil Vatai and other RIKEN R-CCS colleagues for their invaluable advice and guidance throughout the course of this research. Finally, the first author would also like to express his gratitude to RIKEN R-CCS for offering the opportunity to undertake this research in an intern program. ACM-Reference-Format
http://arxiv.org/abs/2306.01440v1
20230602110046
A Modular Test Bed for Reinforcement Learning Incorporation into Industrial Applications
[ "Reuf Kozlica", "Georg Schäfer", "Simon Hirländer", "Stefan Wegenkittl" ]
cs.AI
[ "cs.AI", "cs.SY", "eess.SY" ]
[ D. Thévenin July 31, 2023 ================= § INTRODUCTION
http://arxiv.org/abs/2306.02524v1
20230605011838
Kinodynamic FMT* with Dimensionality Reduction Heuristics and Neural Network Controllers
[ "Dongliang Zheng", "Panagiotis Tsiotras" ]
cs.RO
[ "cs.RO" ]
Experimentally Realizable Continuous-variable Quantum Neural Networks George Siopsis July 31, 2023 ===================================================================== This paper proposes a new sampling-based kinodynamic motion planning algorithm, called FMT*PFF, for nonlinear systems. It exploits the novel idea of dimensionality reduction using partial-final-state-free (PFF) optimal controllers. With the proposed dimensionality reduction heuristic, the search space is restricted within a subspace, thus faster convergence is achieved compared to a regular kinodynamic FMT*. The dimensionality reduction heuristic can be viewed as a sampling strategy and asymptotic optimality is preserved when combined with uniform full-state sampling. Another feature of FMT*PFF is the ability to deal with a steering function with inexact steering, which is vital when using learning-based steering functions. Learning-based methods allow us to solve the steering problem for nonlinear systems efficiently. However, learning-based methods often fail to reach the exact goal state. For nonlinear systems, we train a neural network controller using supervised learning to generate the steering commands. We show that FMT*PFF with a learning-based steering function is efficient and generates dynamically feasible motion plans. We compare our algorithm with previous algorithms and show superior performance in various simulations. § INTRODUCTION Motion planning, as a fundamental component of robot autonomy, has been studied extensively in the last three decades to increase its efficiency and capability. Efficiency means faster convergence to better solutions. Efficient planning algorithms are crucial for robots with limited computation power and for replanning in changing environments. Capability means dealing with more complicated planning problems. High-dimensional state space, nonlinear system dynamics, and cluttered environments with nonconvex obstacles still pose great challenges for efficient kinodynamic planning despite recent advances. Sampling-based motion planning algorithms, such as PRM <cit.> and RRT <cit.>, have been developed to solve planning problems in high-dimensional continuous state spaces by incrementally building a graph/tree through the search space. The optimal sampling-based planning algorithm RRT* <cit.> almost surely converges asymptotically to the optimal solution. RRT* is well-suited for planning in high-dimensional spaces and obstacle-rich environments. Many applications of RRT* have been studied in recent years <cit.>. For motion planning for dynamical systems, sampling-based optimal kinodynamic planning algorithms (SBKMP) such as Kinodynamic RRT* <cit.> and Kinodynamic FMT* <cit.> have been developed to consider differential constraints. SBKMP requires any two points sampled in the planning space to be connected with an optimal trajectory. For robots with differential constraints, the optimal trajectory between two states is obtained by solving a two-point boundary value problem (TPBVP), which is a non-trivial undertaking for complex nonlinear systems. The solution to this local TPBVP is also referred to as the steering function. Simulation-based methods such as SST <cit.> avoid solving the TPBVP by using random control sampling and simulation. However, without the local optimal edges (trajectories) provided by the steering function, the convergence of SST to a good solution is slow. Solving TPBVPs efficiently for nonlinear systems is one of the bottlenecks of kinodynamic RRT*. Thus, researchers have looked into more efficient ways to solve these TPBVPs. A steering function based on LQR is used in <cit.>. A fixed-final-state free-final-time controller for linear systems that optimally connects any pair of states is introduced in <cit.>. While an analytical solution of the TBPVP for linear systems is available, considering general nonlinear system dynamics is difficult. Learning-based methods have the potential for solving TPBVP efficiently. To deal with nonlinear dynamics, steering functions based on supervised learning and reinforcement learning are developed in <cit.> and <cit.> respectively, and integrated with a kinodynamic RRT* algorithm. In <cit.>, a goal-conditioned state-feedback neural network controller for nonlinear systems is trained and used for solving the TPBVPs in RRT*. Another limitation of RRT* is the slow convergence rate of the solution to the optimal one, which is especially evident for the kinodynamic planning case where the sampling space is not just the configuration space but the full state space. Heuristic and informed sampling methods have been developed to improve the convergence rate. Informed RRT* <cit.> focuses sampling to an informed subset that could potentially provide a better solution. Exploiting the benefit of ordered search, FMT* <cit.> and BIT* <cit.> are shown to find better solutions faster than RRT*. However, most of these methods only consider the geometric planning problem. Existing work on heuristics for improving the convergence of kinodynamic motion planning is rather limited <cit.>. In our previous work <cit.>, the Kino-RRT* with a dimensionality reduction heuristic is developed, in which the heuristic is obtained by solving a partial final-state free (PFF) optimal control problem. Instead of sampling the full state space, Kino-RRT* only samples part of the state space while the rest of the states are selected by the PFF optimal controller. By sampling in the reduced state space and utilizing the PFF optimal controller, Kino-RRT* shows faster convergence. An analytical solution for the PFF optimal control problem for linear systems is also derived in <cit.>. In this paper, we propose the FMT*PFF, which is built on our previous works <cit.> and <cit.>. We extend the dimensionality reduction heuristic to learning-based planners and train neural network controllers to solve the PFF optimal control problem. Compared to <cit.>, the dimensionality reduction heuristic is used in FMT*PFF to improve the convergence rate of the algorithm. Also, training a PFF neural network controller is simpler than the set-to-set controller in <cit.>. Compared to <cit.>, solving PFF using supervised learning allows us to deal with nonlinear system dynamics efficiently. Furthermore, while <cit.> and <cit.> are based on the RRT* algorithm, FMT*PFF is based on the FMT* algorithm to benefit from ordered search. The contributions of the paper are: * A dimensionality reduction heuristic for accelerating sampling-based kinodynamic planning for nonlinear systems. * A neural network controller for solving the partial final-state free (PFF) optimal control problem. The proposed PFF neural network controller is used as the steering function in sampling-based kinodynamic motion planning algorithms. * The FMT*PFF algorithm is developed for planning with learning-based steering functions that cannot achieve exact steering. * Extensive simulations and comparison with previous methods show better performance with our method. § PRELIMINARIES §.§ Problem Statement The optimal kinodynamic motion planning problem is given by the following optimal control problem (OCP), min_u, t_f J = ∫_0^t_f c(x,u) dτ, s.t. ẋ = f(x,u), x(0) =x_s, x(t_f)=x_g, u ∈𝒰, x ∈𝒳_free, ∀ t∈ [0, t_f]. where x ∈𝒳⊂^n_x is the state, u ∈𝒰⊂^n_u is the control input, x_s and x_g are the initial state and goal state, respectively. The free space is denoted by 𝒳_free⊂𝒳, where at each x ∈𝒳_free, the system does not collide with any obstacles in the environment. Finally, c(x, u) is a cost function that we aim to minimize. The goal of the optimal kinodynamic motion planning problem is to find a control trajectory u(t), t ∈ [0,t_f], such that the solution state trajectory x(t) is obstacle-free, reaches the goal state, and minimizes a cost. §.§ Partial-Final-State-Free Optimal Controller Sampling-based motion planning algorithms such as RRT* and FMT* solve the problem (<ref>)-(<ref>) by growing a tree. The state space is approximated by random samples. The transition between samples is achieved using optimal steering functions. The sampled nodes and the connections between nodes define a graph. A trajectory tree is obtained by searching over this graph. In kinodynamic RRT* and FMT*, the edge between two sampled states, x_a and x_b, is constructed using a steering function which is the solution of a TPBVP given by min_u, t_f J = ∫_0^t_f c(x,u) dτ, s.t. ẋ = f(x,u), x(0) =x_a, x(t_f)=x_b, u ∈𝒰, ∀ t∈ [0, t_f]. Note that the obstacle constraint in (<ref>) is removed in TPBVP. In our proposed FMT*PFF, instead of sampling the full state x from the full state space 𝒳, we sample a partial state x̅ from a state space of reduced dimensionality 𝒳̅. Let x = [x_1^⊤ x_2^⊤]^⊤, where x_1 ∈ℝ^n_1, x_2 ∈ℝ^n_2, and n_1 + n_2 = n_x. We introduce the partial-final-state-free (PFF) optimal control problem as follows min_u, t_f J = ∫_0^t_f c(x,u) dτ, s.t. ẋ = f(x,u), x(0) =x_a, x_1(t_f)=x̅_c, u ∈𝒰, ∀ t∈ [0, t_f]. Compared to (<ref>), instead of fixing the state x(), only x_1() is fixed and x_2() is free in (<ref>). Note that, after solving the problem (<ref>)-(<ref>), we obtain the full state trajectory which includes the full final state x(t_f). Thus, the PFF optimal controller chooses the remaining free final state to minimize the cost. If we set this full final state x(t_f) as the terminal state in the terminal constraint in problem (<ref>)-(<ref>) and solve the problem (<ref>)-(<ref>), we will get the same trajectory as in problem (<ref>)-(<ref>). Therefore, the partial state sampling and the PFF optimal controller work as an intelligent heuristic for state-space sampling. For linear systems and quadratic cost functions, the analytical solution of the problem (<ref>)-(<ref>) is derived in <cit.>. For general nonlinear systems and cost functions, solving (<ref>)-(<ref>) efficiently is the bottleneck for sampling-based kinodynamic planning algorithms. Methods that integrate a numerical solver within a sampling-based planner to solve (<ref>)-(<ref>) have been studied <cit.>. However, the long computation time makes them unacceptable for practical use. In this paper, we use the PFF optimal controller as the steering function. We train a neural network controller to solve the PFF optimal control problem. We show that the proposed FMT*PFF algorithm with a neural network controller is efficient and generates dynamically feasible motion plans. § THE FMT*PFF ALGORITHM The main differences between FMT*PFF and the original kinodynamic FMT* are: 1) state sampling in a reduced state space; 2) use of the PFF optimal controller as the steering function. In our case, the PFF optimal controller is approximated using a neural network. .5em .5em The FMT*PFF algorithm is given by Algorithm <ref> and a graphical illustration is given in Figure <ref>. We use x̅ to represent a partial state in the reduced state space and X̅ to represent the partial state set. We use x to represent a (full)) state in the (full) state space, and X to represent the set of (full) states. Some primitive procedures are given as follows. Sampling: The sampling procedure 𝖲𝖺𝗆𝗉𝗅𝖾𝖯𝖥𝖥(m) randomly samples m partial states in the reduced state space. The sampled partial states are collision-free in the corresponding reduced state space. For example, for a robot whose state space includes the position space and the velocity space, 𝖲𝖺𝗆𝗉𝗅𝖾𝖯𝖥𝖥 samples positions of the robot that are collision-free. Near Nodes: The function 𝖭𝖾𝖺𝗋(x,X̅) returns all the partial states in X̅ that are contained in a ball of radius r centered at x. The function 𝖭𝖾𝖺𝗋(X,x̅) returns all the states in X that are contained in a ball of radius r centered at x̅. One simple implementation of the distance function is the Euclidean distance in the partial state space. Collision Checking: The function 𝖢𝗈𝗅𝗅𝗂𝗌𝗂𝗈𝗇𝖥𝗋𝖾𝖾(τ) takes a trajectory τ (an edge segment) as an input and returns true if and only if τ lies entirely in the collision-free space. Cost: The procedure 𝖢𝗈𝗌𝗍(x) returns the cost-to-come from the root node to x. Segment Cost: The procedure 𝖲𝖾𝗀𝖢𝗈𝗌𝗍(x_i,x̅_j) returns the cost to go from x_i to x̅_j. This cost is obtained by solving the PFF optimal control problem with boundary conditions x_i and x̅_j. We can train a neural network to predict this edge cost. Steering: The procedure 𝖲𝗍𝖾𝖾𝗋𝖯𝖥𝖥(x_i,x̅_j) solves the TPBVP using the PFF optimal controller, and it returns a trajectory τ that starts from x_i and ends at x̅_j. In Algorithm <ref>, every vertex v is associated with a state v.x and the corresponding partial state v.x̅. We initialize the tree in Line 1-2. In Line 3, the m partial states and the goal are added to the unvisited set X̅_unvisited, which is a set of partial states that have not been added to the tree. X_open is the set of states that have already been added to the tree. They are the frontier nodes of the tree that will be extended next. x_c is the state in X_open that has the minimum cost-to-come (Line 22) and x̅_c is the partial state associated with x_c. These are initialized in Line 5. If x̅_c = x̅_g, the solution has been found. Otherwise, we try to extend the tree. In Line 8, the neighboring partial states of x_c in X̅_unvisited, X̅_near, are found. For every x̅_near in X̅_near, we try to find its best parent in X_open (Line 9-18). In Line 10, the neighboring state of x̅_near in X_open, X_near, are found. The best parent for x̅_near that results in the minimum cost-to-come for x̅_near is obtained in Line 11. The trajectory τ and the full state x_new are obtained in Line 12 using the PFF steering function. If τ is collision-free, the new vertex v and new edge are added to the tree, and the sets X̅_unvisited and X_open,new are updated (Line 13-18). v(x_parent) denotes the vertex associated with x_parent and v(x_c).x̅ denotes the partial state associated with x_c. §.§ A Neural Network Controller for PFF Optimal Control The proposed FMT*PFF can directly work with the analytical solution of the PFF optimal control problem, a numerical solver for the PFF optimal control problem, and a learning-based steering function for the PFF optimal control problem. In this section, we describe training the neural network controller for PFF optimal control. We first generate the training data. For a dynamical system, we use numerical optimization solvers <cit.> to solve the PFF optimal control problem given by (<ref>)-(<ref>) offline. Since we are interested in steering the system from different initial states to the partial final state, we generate optimal trajectories with different initial states. The initial states are uniformly sampled from an initial state set. We choose the partial final state to be the position, and let the remaining state (heading angle, velocity, etc.) be free. We utilize the translational invariant of the trajectories. If the goal position is not the origin, we can translate the goal position to the origin and translate the position of the starting state accordingly. After solving the optimal control problem with the translated boundary condition, we can get the trajectory of the original problem by translating the solution trajectory back. Therefore, we choose the goal position to be at the origin (zero). Translational invariance regarding the position is common for systems such as double integrator, car, and UAVs. Note that we only need to steer the system to nearby states given by the 𝖭𝖾𝖺𝗋 function in FMT*PFF. Thus, a 'local' PFF optimal controller is sufficient. This makes the learning-based controller using a neural network well-suited for this task since we can only generate finite training data. When using the neural network for prediction, it is important to make sure we are using it for interpolation instead of extrapolation. Extrapolation will diminish the prediction accuracy. For this purpose, the offline training data should cover the neighborhood used in the 𝖭𝖾𝖺𝗋 function. Each trajectory returned by the numerical solver contains a sequence of control inputs and a sequence of states indexed by time. We combine all the data points from all offline trajectories to form the final training dataset. Each data point is a tuple (x_i,u_i). The goal of the neural network is to mimic the structure of the optimal controller. The neural network controller, 𝖭𝖭𝖢𝗈𝗇𝗍𝗋𝗈𝗅𝗅𝖾𝗋: x_i → u_i, is a state feedback controller mapping from the current state x_i to the current control u_i to be applied. After training the neural network controller, it is used for online control and online trajectory generation. The 𝖲𝗍𝖾𝖾𝗋𝖯𝖥𝖥 in Algorithm <ref> is obtained by applying the neural network controller to get a trajectory. Given a novel initial state, which should be inside the initial state set used in training data generation, we repetitively apply the neural network control commands and simulate the system dynamics to obtain the trajectory that steers the system to the goal state, where the goal position is the origin. We also train a cost-to-go neural network that predicts the segment cost, 𝖲𝖾𝗀𝖢𝗈𝗌𝗍: x_1 → J, where x_1 is the initial state, J is the cost of the trajectory from x_1 to the goal returned by the numerical solver. § EMPIRICAL EVALUATION §.§ 2D Double Integrator We first compare FMT*PFF with the kinodynamic FMT* <cit.> and the Kino-RRT* <cit.>. For this purpose, we will consider linear systems and use the analytical solution for (<ref>)-(<ref>) and (<ref>)-(<ref>). The difference between FMT*PFF and the kinodynamic FMT* is that kinodynamic FMT* samples the full state-space and solves (<ref>)-(<ref>) for steering functions while FMF*PFF samples in the reduced state-space and solves (<ref>)-(<ref>) for steering functions. For comparison, we set the tuning parameters of the algorithms, such as the neighborhood radius r, to be the same. Both Kino-RRT* and FMT*PFF use the PFF optimal control, but they are based on RRT* and FMT*, respectively. The state of the 2D double integrator is given by x = [p^⊤ v^⊤]^⊤, where p = [x_1 x_2]^⊤ is the position and v = [x_3 x_4]^⊤ is the velocity. The control input is acceleration. The system dynamics is given by ẋ = A x + B u, where A = [ 0 I_2; 0 0 ], B = [ 0; I_2 ]. The cost function c(x,u) = 1 + u^⊤ R u, where R = I_2. The position is uniformly sampled within the boundary of the environment. The free final state of the PFF controller is the velocity. Thus, FMT*PFF only samples the position space. For the kinodynamic FMT* algorithm, the velocity is uniformly sampled in v ∈ [-2, 2]^2 m/s^2. Note that a larger interval for the velocity essentially requires searching in a larger state space, which will result in slower convergence. However, if the sampling velocity interval is too small, the search is confined to a small state space that may not contain the optimal solution. The planning results of the FMT*PFF algorithm and the Kinodynamic FMT* algorithm are given in Figure <ref>. By sampling in the reduced state space and using a PFF optimal controller, FMT*PFF reduced the dimensionality of the planning problem. Thus, FMT*PPF finds a better trajectory from the beginning and continues to find better solutions given the same amount of planning time. For Kinodynamic FMT*, the probability of sampling good velocities to decrease the cost is low. The solution cost vs planning time comparison is given in Figure <ref>. We can see from Figure <ref>, FMT*PFF also has better convergence performance compared to Kino-RRT*. This is because FMT*PFF uses ordered search while Kino-RRT* uses unordered random search. One drawback of FMT*PFF is that it is not an anytime algorithm. Still, the benefit of FMT*PFF becomes more clear when planning with learning-based steering functions that cannot achieve exact steering, since FMT*PFF does not use a rewiring procedure. §.§ A Simple Car Model The kinematic car model is given by ẋ = vcos(θ), ẏ = vsin(θ), θ̇ = u_1, v̇ = u_2, where (x, y) is the position, θ is the heading angle, v is the speed, u_1 and u_2 are the control inputs. We first generate offline training data by solving PFF optimal control problems using numerical optimization solvers. The offline trajectory examples used for training are shown in Figure <ref>. We sample initial states from an initial state set. The sampling intervals of x, y, θ, and v are x ∈ [-4, 4] m, y ∈ [-4, 4] m, θ∈ [-π, π] rad, and v ∈ [-2, 2] m/s, respectively. The reduced state space is the position space and θ and v are free states. Thus, the goal state is (x, y, θ, v) = [0 0 free free]. For this example, 10,400 trajectories are generated. State-action pairs from the trajectories were used for neural network training. After training the neural network, we tested the neural network controller for randomly sampled initial states. Some example trajectories obtained using the neural network controller are shown in Figure <ref>. Since the goal of the neural network controller is to steer the system to reach the origin, one index is the error between the end position of the trajectories and the origin. 1000 trajectories corresponding to sampled novel initial states are generated using the neural network controller. 98% of the resulting trajectory reached a 0.3 neighborhood of the origin. If this end position error is too large (greater than 0.3), the connection is not successful, and this edge will not be added to the tree. Note that FMT*PFF does not require exact steering, which makes it suitable for learning-based steering functions. The final trajectory obtained from the FMT*PFF algorithm is smooth and satisfies the differential constraint, while the trajectory from <cit.> may have small gaps between edges. The planning results of FMT*PFF using the neural network controller are shown in Figure <ref> and Figure <ref>. Figure <ref> shows the trees with a different number of samples and Figure <ref> gives the cost vs time performance. Finally, we use FMT*PFF to plan trajectories for the car model in various environment settings. We randomly sample the number, size, and location of the obstacles. We also vary the starting point and goal point of the car. The planning results are given in Figure <ref>. The FMT*PFF algorithm finds dynamically feasible trajectories using the neural network controller. § CONCLUSION We propose the FMT*PFF algorithm for optimal kinodynamic motion planning for nonlinear systems. The key idea is the use of a partial-final-state-free (PFF) optimal controller to reduce dimensionality and accelerate kinodynamic motion planning is introduced. FMT*PFF planning in the reduced state space and has faster convergence. By training a neural network model of the PFF optimal controller, FMT*PFF can plan trajectories for nonlinear systems in cluttered environments with nonconvex obstacles. FMT*PFF can deal with learning-based steering functions that can not achieve exact steering because no rewire is needed. We show that FMT*PFF is efficient and generates dynamically feasible motion plans. Through numerical simulations and comparison with previous works, FMT*PFF are shown to have better cost-time performance.
http://arxiv.org/abs/2306.05804v2
20230609104207
Simulating Quantum Mean Values in Noisy Variational Quantum Algorithms: A Polynomial-Scale Approach
[ "Yuguo Shao", "Fuchuan Wei", "Song Cheng", "Zhengwei Liu" ]
quant-ph
[ "quant-ph" ]
Yau Mathematical Sciences Center and Department of Mathematics, Tsinghua University, Beijing 100084, China Yau Mathematical Sciences Center and Department of Mathematics, Tsinghua University, Beijing 100084, China [email protected] Yanqi Lake Beijing Institute of Mathematical Sciences and Applications, Beijing 100407, China [email protected] Yau Mathematical Sciences Center and Department of Mathematics, Tsinghua University, Beijing 100084, China Yanqi Lake Beijing Institute of Mathematical Sciences and Applications, Beijing 100407, China Large-scale variational quantum algorithms are widely recognized as a potential pathway to achieve practical quantum advantages. However, the presence of quantum noise might suppress and undermine these advantages, which blurs the boundaries of classical simulability. To gain further clarity on this matter, we present a novel polynomial-scale method based on the path integral of observable's back-propagation on Pauli paths (OBPPP). This method efficiently approximates quantum mean values in variational quantum algorithms with bounded truncation error in the presence of independent single-qubit depolarizing noise. Theoretically, we rigorously prove: 1) For a fixed noise rate λ, OBPPP's time and space complexity exhibit a polynomial relationship with the number of qubits n, the circuit depth L, the inverse truncation error 1ε, and the root square inverse success probability 1√(δ). 2) For variable λ, the computational complexity becomes Poly(n,L) when λ exceeds 1logL and it becomes exponential with L when λ falls below 1L. Numerically, we conduct classical simulations of IBM's zero-noise extrapolated experimental results on the 127-qubit Eagle processor [Nature 618, 500 (2023)]. Our method attains higher accuracy and faster runtime compared to the quantum device. Moreover, this approach enables us to deduce noisy outcomes from noiseless results, allowing us to accurately reproduce IBM's unmitigated results that directly correspond to raw experimental observations. Simulating Quantum Mean Values in Noisy Variational Quantum Algorithms: A Polynomial-Scale Approach Zhengwei Liu =================================================================================================== § INTRODUCTION In the current Noisy Intermediate-Scale Quantum (NISQ) era <cit.>, Variational Quantum Algorithms (VQAs) <cit.> have become a significant component due to their applications in various fields such as combinatorial optimization <cit.>, quantum chemistry <cit.>, quantum machine learning <cit.>, quantum circuit compilation <cit.>, and quantum error correction <cit.>, etc. Quantum Mean Values (QMVs) <cit.> refer to the expectation of observables on the encoded states of quantum circuits. The cost functions in the majority VQAs can be formulated by QMVs. Compared to classical simulations of quantum states, simulating QMVs offers a more natural correspondence with experimentally observable information. Nonetheless, the classical estimation of QMVs remains a general challenge <cit.>. Efficient simulation for QMVs has been developed under certain limitations, such as shallow circuits or locally connected circuits <cit.>. In practice, NISQ devices are inevitably affected by noises. These noises would decoherent quantum systems and cause quantum states to collapse onto fixed points of noise channels, thereby limiting the quantum advantages <cit.>. Additionally, noise can also induce barren plateau phenomena, which greatly affect the trainability of VQAs <cit.>. On the other hand, noise potentially enables the simulability of complex quantum algorithms by classical methods <cit.>. For instance, general Instantaneous Quantum Polynomial-time (IQP) quantum circuits are hard for classical simulation. However, there are specific cases where noise can undermine the infeasibility of simulation <cit.>. In noiseless circuits, Random Circuit Sampling (RCS) tasks have been proven to be difficult to simulate classically <cit.>. However, a polynomial-time algorithm for simulating noisy RCS has been established in the presence of depolarizing noises <cit.>. For general cases, noisy simulation algorithms based on tensor networks also exhibit decreasing computational complexity as the noise rate increases <cit.>. In this work, we provide OBPPP, a novel polynomial-scale method for approximating QMVs in noisy VQAs, where the parameterized quantum circuit is composed by Pauli rotation gates and {H,S,CNOT}. We leverage the Feynman path integral on the Pauli basis, which could also be viewed as the Fourier transformation on quantum circuits <cit.>. There are two main advantage of adopting the Pauli basis for the path integral. Firstly, if the system exhibits sparsity in the Pauli basis, OBPPP could leverage it to significantly accelerate computations. Secondly, the contributions of high-weight Pauli paths could be heavily suppressed in the presence of depolarizing noise, thereby limiting truncation errors. As illustrated in Fig. <ref>, we randomly selected 40 Pauli paths with Hamming weights (the number of non-identity elements) ranging from 5 to 44. Translucent bars represent noiseless contributions, while opaque bars show the contributions of each Pauli path after applying single-qubit depolarizing noise with a noise rate of 0.1. The shaded area represents the truncation of contributions from Pauli paths with Hamming weights exceeding 35. The figure demonstrates exponential suppression of contributions under depolarizing noise as the Hamming weight increases, which ensures algorithmic efficiency and limits truncated error. In contrast to other numerical approaches like tensor networks <cit.>, OBPPP does not impose geometric structure requirements. To compare with related works for random circuits <cit.>, our method is relatively insensitive to the circuit depth. Moreover, we do not require the locality of observables. Our approach has approximately dequantized many commonly used noisy VQAs and could be served as a benchmark for assessing the capabilities of NISQ computation. The rest part of this paper is organized as follows. Sec. <ref> presents the notations used and outlines all prerequisites for our algorithm. In Sec. <ref>, we provide a comprehensive explanation of OBPPP along with essential proofs elucidating the influence of noise on classical simulability. In Sec. <ref>, we apply this method to simulate the QMV of the 127-qubit Eagle processor and reproduce experimental results quickly and accurately. Sec. <ref> offers a concise conclusion and a discussion on future research directions. § NOTATIONS AND PREREQUISITES In typical VQAs, the cost function is determined by the following quantum mean value: ℒ(θ)=H𝒰(θ)ρ𝒰^†(θ), where ρ is the density matrix of the n-qubit input state, and H is the Hamiltonian which is represented as a linear combination of Pauli operators, 𝒰(θ) is a parameterized quantum circuit, which is composed with L layers unitary transformation 𝒰_i(θ_i), 𝒰(θ)=𝒰_L(θ_L)𝒰_L-1(θ_L-1)⋯𝒰_1(θ_1) and θ=(θ_1,⋯,θ_L). The 𝒰_i(θ_i) in each layer consists of R_i rotation gates and C_i Clifford gates that act on mutually disjoint qubits, each θ_i=(θ_i,1,⋯,θ_i,R_i) is the parameter vector of the i-th layer. Specifically, if we denote the j-th rotation gate in the i-th layer as U_i, j(θ_i,j), where j ∈{1, ⋯ , R_i}. Then U_i, j(θ_i,j) takes the form: U_i,j(θ_i,j)=exp-i θ_i,j/2σ_i,j, where θ_i,j is the variational parameter, σ_i, j∈{𝕀, X,Y,Z}^⊗ n refers to an n-qubit Pauli word, X, Y, Z are Pauli matrices [ 0 1; 1 0 ], [ 0 -i; i 0 ] and [ 1 0; 0 -1 ], respectively. Similarly, the k-th Clifford gate in the i-th layer is denoted as V_i,k, where k ∈{1, ⋯ , C_i}. V_i,k∈{H(a),S(a),CNOT(a,b)}, where a,b refers to the index of qubit where the gate acts on. H(a),S(a), and CNOT(a,b) are defined as H(a) =𝕀⊗⋯⊗𝕀⊗H_a⊗𝕀⊗⋯⊗𝕀; S(a) =𝕀⊗⋯⊗𝕀⊗S_a⊗𝕀⊗⋯⊗𝕀 ; CNOT(a,b) =𝕀⊗⋯⊗𝕀⊗CNOT_a,b⊗𝕀⊗⋯⊗𝕀, where H_a and S_a represent the Hadamard and phase gate acting on the a-th qubit. CNOT_a,b is given by |x⟩_a|y⟩_b→|x⟩_a|x⊕ y⟩_b, where |·⟩_a represent computational basis in the a-th qubit. The set of Pauli words for all rotation gates U_i,j(θ_i,j) is {σ_i,j}. We denote the set {σ_i,j} as the Pauli word set after Clifford gates transformation, whose elements are expressed as: σ_i,j= 𝒱_L⋯𝒱_iσ_i,j𝒱_i^†⋯𝒱_L^†, where 𝒱_i = ∏_k=1^C_i V_i,k is the unitary transformation corresponds to the tensor product of all Clifford gates in the i-th layer. To ensure the validity of Lemma <ref>, we require an easily achievable prerequisite in this algorithm: the Pauli word set {σ_i,j} could generate {𝕀,X,Y,Z}^⊗ n up to phase of {e^iψ|ψ=0,π/2,π,3π/2}, formulated as ⟨{σ_i,j}⟩/(⟨{σ_i,j}⟩∩⟨ i𝕀^⊗ n⟩)={𝕀,X,Y,Z}^⊗ n, here ⟨{σ_i,j}⟩ refers to the Pauli subgroup that is generated by set {σ_i,j}, which means elements in ⟨{σ_i,j}⟩ can be expressed as the finite product of elements in {σ_i,j} (more formal definition is shown in the Appendix <ref>). We give a simple example to demonstrate the condition is indeed easily met. Consider there are a layer of R_X gates and a layer of R_Z gates acting on each qubit in the circuit at the last two layers, then {X_i,Z_i}_i=1,⋯,n is contained in {σ_i,j}. These { X_i,Z_i }_i=1,⋯,n are enough to generate {𝕀,X,Y,Z}^⊗ n. In fact, this sufficient condition can be further weakened, as shown in Appendix <ref>. We also demand ρ and H to be sparse. More precisely, we require the number of the non-zero elements of ρ=∑_a,bρ_a,b|a⟩⟨b| is polynomially related to the number of qubits n, denoted as Poly(n), where |a⟩ and |b⟩ are computational basis states. This constraint is sufficient for widespread VQA tasks, since there is a wide range of VQA applications <cit.> whose initial state is merely a product pure state under the computational basis, corresponding to a single non-zero element in ρ. The number of all Pauli words {σ} which linearly compose H is also restricted to be Poly(n). This restriction is also widely achieved in many VQA frameworks such as the second quantized Hamiltonians in the Variational Quantum Eigensolver (VQE) <cit.> and problems encoded Hamiltonians in the Quantum Approximate Optimization Algorithm (QAOA) <cit.>. The structure of the parameterized quantum circuit 𝒰(θ) is shown in Fig. <ref>. For comparison, a representation of the noisy quantum circuit's architecture under the noise assumption of this work is depicted in Fig. <ref>. In this work, the single-qubit depolarizing noise is assumed to occur independently before each layer and the final observation operator H. This noise channel 𝒩 can be modeled as 𝒩(ϕ)=(1-λ)ϕ+λϕ/2𝕀, where ϕ is a single-qubit density matrix and constant λ∈ [0, 1] reflects the noise rate. We define ℒ as the cost function under this noise channel, which is ℒ(θ)= H 𝒩^⊗ n(𝒰_L 𝒩^⊗ n(⋯𝒰_1𝒩^⊗ n(ρ) 𝒰_1^†⋯) 𝒰_L^†). § SIMULATION METHOD The main idea of our approach is to express ℒ as the path integral of the matrix algebra, in which we select Pauli operators as the basis. An essential benefit of this approach is the ability to quantify the impact of depolarizing noise, which reveals a noteworthy exponential suppression in the noisy contribution of the Pauli path as its Hamming weight increases. This phenomenon enables us to achieve a high level of precision in handling the truncation error and effectively estimate an approximate ℒ by discarding Pauli paths with higher Hamming weight. A Pauli path is a sequence s=(s_0,⋯,s_L)∈P^L+1_n, where P_n={𝕀√(2),X√(2),Y√(2),Z√(2)}^⊗ n represents the set of all normalized n-qubit Pauli words. As detailed shown in Appendix <ref>, the noiseless cost function can be expressed as the sum of the contributions of all Pauli paths, given by ℒ(θ)=∑_s∈P^L+1_n f(θ,s,H,ρ), where f(θ,s,H,ρ) denotes the contribution of one particular Pauli path s=(s_0,⋯,s_L)∈P^L+1_n: f(θ,s,H,ρ)= Hs_L(∏_i=1^Ls_i𝒰_i s_i-1𝒰_i^†)s_0ρ. We define 𝒮_i as the superoperator <cit.> 𝒰_i⊗𝒰_i and | ·⟩⟩ represents the vectorization of a matrix. Thus, Eq. (<ref>) can be expressed in an alternative form as ⟨⟨ H| s_L ⟩⟩(∏_i=1^L⟨⟨ s_i| 𝒮_i| s_i-1⟩⟩) ⟨⟨ s_0 | ρ⟩⟩. In the following discussion, we aim to establish that the time complexity for computing each f(θ,s,H,ρ) is nL+Poly(n). For the Hamiltonian term Hs_L, the presence of non-zero f(θ,s,H,ρ) requires the inclusion of the Pauli word s_L in H, thereby resulting in an associated computational cost of n for the evaluation of Hs_L. Similarly, the computation of the input term s_0ρ can be achieved with time complexity of Poly(n), facilitated by the polynomial-size non-zero elements in ρ. A more detailed explanation is provided in Appendix <ref>. Furthermore, for the calculation of the i-th layer term s_i𝒰_i s_i-1𝒰_i^†, we propose the following proposition. The i-th layer term in f can be calculated by the following equality with time and space complexity of n: s_i𝒰_i s_i-1𝒰_i^† =(s_i s_i-1)|_I_i∏_k=1^C_i(s_iV_i,k s_i-1V_i,k^†)|_V_i,k∏_σ_i,j∈ C(i,s_i-1)(s_i s_i-1)|_σ_i,j ∏_σ_i,j'∈ AC(i,s_i-1){(s_i s_i-1)|_σ_i,j'cosθ_i,j'- (is_i σ_i,j's_i-1)|_σ_i,j'sinθ_i,j'}. We define g:{𝕀,X,Y,Z}^⊗ n∪{CNOT_a,b,H_a,S_a}→2^{1,⋯, n} as a map from a unitary operator to the indices of qubits where the unitary operator's action is non-identity. Here 2^{1,⋯, n} represents all subsets of {1,⋯, n}. For example, g(σ_i,j) represents the qubit's indices of non-identity Pauli operators of σ_i,j, g(CNOT_a,b)={a,b}, g(H_a)={a} and g(S_a)={a}. For simplification, we divide the indices of n qubits in the i-th layer into three sets, The symbol |_I_i denotes the set where only identity gate I_i is trivially applied. The symbol |_V_i,k represents the set where the Clifford gate V_i,k is non-trivially applied. The symbol |_σ_i,j denotes the set where σ_i,j is non-trivially applied. Additionally, the sets C(i,s_i-1) and AC(i,s_i-1) denote the sets of Pauli words in {σ_i,j}_j=1^R_i that commute and anti-commute with s_i-1, respectively. By utilizing the orthonormality property of Pauli words, we can establish the following results: * For s_i𝒰_i s_i-1𝒰_i^†≠ 0, we must have s_i-1|_I_i = s_i|_I_i. * Similarly, if σ_i,j commutes with s_i-1, s_i|_σ_i,j=s_i-1|_σ_i,j. * If σ_i,j anti-commutes with s_i-1, we encounter two cases that s_i𝒰_i s_i-1𝒰_i^†≠ 0, s_i|_σ_i,j=s_i-1|_σ_i,j with a factor of cosθ_i,j or s_i|_σ_i,j=± i σ_i,j s_i-1|_σ_i,j with a factor of ∓sinθ_i,j, where the sign ± depends on the sign of Pauli word i σ_i,j s_i-1|_σ_i,j. * If s_i𝒰_i s_i-1𝒰_i^†≠ 0 then C(i,s_i-1)=C(i,s_i) and AC(i,s_i-1)=AC(i,s_i) hold. * For Clifford gate, the only case that s_i𝒰_i s_i-1𝒰_i^†≠ 0 is s_i-1|_V_i,k= ± V_i,k^† s_iV_i,k|_V_i,k. The following lemma is proposed in <cit.> for the presence of single-qubit depolarizing noise. Let f̂(θ,s,H,ρ) be the contribution of a Pauli path s=(s_0,⋯,s_L)∈P^L+1_n in the noisy cost function ℒ(θ). In the presence of the single-qubit depolarizing channel, the relationship between the noiseless contribution f and f̂ can be characterized as follows: f̂(θ,s,H,ρ)=(1-λ)^sf(θ,s,H,ρ), where s=∑_is_i and s_i denotes the Hamming weight of s_i. Lemma <ref> states that the path integral in the Pauli basis provides a convenient approach for quantifying the impact of noise. In essence, by estimating all noiseless contributions f(θ,s,H,ρ), it is sufficient to evaluate the noisy cost function ℒ. OBPPP calculates the contributions of the Pauli path with s≤ M to provide an approximation of ℒ. Let ℒ denote the approximate noisy cost function. By utilizing Lemma <ref>, ℒ can be represented as: ℒ(θ)=∑_s≤ Mf̂(θ,s,H,ρ)=∑_m=0^M (1-λ)^m ∑_s=m f(θ,s,H,ρ). Intuitively, it's a significant challenge to evaluate all the Pauli paths with s≤ M. However, based on the Remark of Proposition <ref>, the majority of these paths yield a zero contribution to the path integral. We observed that for any Pauli path s with f(θ,s,H,ρ)≠ 0, if one of s_i-1|_σ_i,j or s_i|_σ_i,j is fixed, then the other one would remain at most two cases for all i,j (similar results hold for |_V_i,j and |_I_i). As a consequence, given s_i, the number of possible cases for s_i-1 with f(θ, s, H, ρ) ≠ 0 is at most 2^s_i. Thereby, an efficient method could be emerged for enumerating all these Pauli paths s, which make non-zero contributions to ℒ while satisfying s≤ M. OBPPP method can be summarized as follows: Initially, we select all s_L which is included in H. For each case of s_L, we enumerate all potential s_L-1, resulting in a maximum of 2^s_L cases of s_L-1. Subsequently, for each case of s_L-1, all potential 2^s_L-1 cases of s_L-2 are selected. This process continues iteratively until s_0 is reached. The outcome of this method yields a maximum of Poly(n) 2^M Pauli paths, with a time complexity of Poly(n) L 2^M and a space complexity of Poly(n)+nL. More details are shown in Appendix <ref>. For each Pauli path s, utilizing Eq. (<ref>) and Proposition <ref>, it is possible to determine f(θ,s,H,ρ) with a time complexity of nL+Poly(n). Consequently, the overall time required for determining the approximate noisy cost function ℒ is within Poly(n) L 2^M. To further estimate the truncation error of the ℒ, we introduce the following lemma. For a detailed proof, see Appendix <ref> and <cit.>. Suppose Eq. (<ref>) is satisfied, for any distinct Pauli paths s,s^'∈P^L+1_n, we have 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)=0. By Lemma <ref> and Lemma <ref>, an estimation of the Mean-Square Error (MSE) between ℒ and ℒ can be derived as follow: 𝔼_θℒ-ℒ^2 =𝔼_θ[∑_s> Mf̂(θ,s,H,ρ)]^2 (<ref>)=𝔼_θ∑_s> Mf̂(θ,s,H,ρ)^2 (<ref>)=𝔼_θ∑_s> M(1-λ)^2sf(θ,s,H,ρ)^2. In finite-size systems, the QMV could be bounded by a finite H_∞, note that in the noiseless setting ∑_s f(θ,s,H,ρ)=HU(θ)ρ U^†(θ)≤H_∞. Thus, we have 𝔼_θ∑_sf(θ,s,H,ρ)^2(<ref>)=𝔼_θ[∑_sf(θ,s,H,ρ)]^2 ≤H_∞^2. Combining this inequality with Eq. (<ref>), we obtain 𝔼_θℒ-ℒ^2 ≤ (1-λ)^2MH_∞^2≤exp-2λ MH_∞^2. Here the first inequality holds because f(θ,s,H,ρ)∈ℝ. To elucidate that f(θ,s,H,ρ)∈ℝ, several observations can be made from Eq. (<ref>) and Eq. (<ref>). Firstly, the Hermiticity of operators H and ρ ensures Hs_L and s_0ρ are real. Additionally, the terms C(i,s_i-1) and I_i in the s_i𝒰_i s_i-1𝒰_i^† correspond to the inner product of Pauli words, which are real. The realness of the V_i,k Clifford term can be verified by exhaustively applying {H,S, CNOT} to Pauli matrices {𝕀,X, Y, Z}. Furthermore, The term AC(i,s_i-1) is also real due to the product property of Pauli matrices [The Pauli matrices, σ_1=X,σ_2=Y and σ_3=Z, satisfy the relation σ_i σ_j=iϵ_ijkσ_k, where ϵ_ijk is the Levi-Civita symbol and Einstein summation notation is used. Moreover, s_i-1 and σ_i,j' anti-commute. Thus iσ_i,j's_i-1 is a Pauli Word with a sign ±.]. Eq. (<ref>) results directly to the following lemma: Suppose Eq. (<ref>) is satisfied, for ∀ν > 0, if M≥1/2λlnH_∞^2/ν, the mean-square error 𝔼_θℒ-ℒ^2 is upper bounded by ν. By Eq. (<ref>), the subsequent corollary provides the time complexity of obtaining the approximated noisy cost function ℒ in our method: Suppose Eq. (<ref>) is satisfied, for ∀ν > 0 and a fixed error rate λ, the time complexity of obtaining ℒ with mean-square error 𝔼_θℒ-ℒ^2≤ν is Poly(n) L( H_∞/√(ν))^1/λ=Poly(n,L,1/√(ν),H_∞). By utilizing Markov's inequality, we can derive the following theorem. The aforementioned discussion can be succinctly summarized by this theorem. For a more formalized proof, please refer to Appendix <ref>. Suppose Eq. (<ref>) is satisfied, given a fixed error rate λ, sparse H and sparse ρ, for arbitrary truncation error ε, there exists a polynomial-scale classical algorithm to determine the approximated noisy cost function ℒ, which satisfies ℒ-ℒ≤ε with a probability of at least 1-δ over all possible parameters θ. The time complexity is Poly(n) L(H_∞/ε√(δ))^1/λ=Poly(n,L,1/ε,1/√(δ),H_∞). To investigate the influence of noise rate λ on computational complexity, we establish MSE 𝔼_θℒ-ℒ^2 as a sufficiently small constant. We then examine the changes in computational complexity as the depth L increases for different noise rates. Based on our analysis, we propose the following proposition. Suppose Eq. (<ref>) is satisfied, and H_∞ is fixed. To estimate the approximate noisy cost function ℒ(θ) with the mean-square error 𝔼_θℒ-ℒ^2 less than a sufficiently small constant, we have * If λ=Ω(1/logL), there exists a classical algorithm that can complete the computation in time Poly(n,L). * If λ=1/L, there exists a situation where our method exhibits exponential time complexity with respect to L. The proposition implies a strong correlation between the classical simulatability of VQAs and the noise rate λ. To make quantum devices difficult to be classically simulated, it is necessary for the noise rate not to exceed 1/logL. § NUMERICAL EXPERIMENTS In the theoretical section, we estimated the computational complexity in the worst-case scenario, which is significantly higher than the actual complexity encountered during computation. In Appendix <ref>, we provide more detailed numerical examples to investigate this issue. In this section, we focus on validating the efficiency of OBPPP method in practical applications by performing a classical simulation of IBM's 127-qubit Eagle processor <cit.>. In essence, the execution process of OBPPP involves backward propagating the measured operator in the Pauli basis, through quantum gates to the initial state's density matrix. During this propagation, the Pauli path continuously expands. We then count all valid paths with non-zero contribution and a Hamming weight less than M. In practice, we utilize a depth-first search strategy by Python to generate a list of valid paths. Notably, our contributions of paths are actually represented as analytical expressions of θ_h, enabling us to obtain all corresponding results for θ_h across the entire continuous interval in a single computation. Furthermore, using Lemma <ref>, we can directly compute the operator expectation values of the noisy circuits based on the analytical expressions of paths, when the environmental noise is single-qubit depolarizing noise. This allows us to directly fit the raw experimental data before error mitigation. To the best of our knowledge, this is currently the only classical algorithm capable of achieving this. As shown in Fig. <ref>, we conducted six simulations, denoted as (a)-(e) from Ref.<cit.> and (f) from Ref.<cit.>. First, in cases (a)-(c) with known exact solutions, we compared the OBPPP (M=210) results against the exact solutions. Our findings demonstrate that OBPPP achieves higher precision compared to the quantum device in significant less runtime <cit.>. For cases lacking exact results (d)-(e), OBPPP aligns well with IBM's mitigated results. In the absence of both exact and experimental results (f), OBPPP can perform simulations faster than quantum chips, which holds a hypothesis runtime of 5 minutes per point. Additionally, for cases (a)-(e), we employed the least squares method to determine an optimal noise rate λ for fitting the expectation values of noisy circuits. A strong agreement was observed between OBPPP and IBM's unmitigated results in cases (a)-(e), with deviations below 0.002 for (a)-(d) and below 0.008 for (e). Additionally, the optimal λ ranges from 0.007 to 0.009, which is also in agreement with the error rates reported by IBM. For more simulation details, please refer to Appendix <ref>. In comparison with other recent classical simulation algorithms <cit.>, our method possesses two main advantages: the ability to obtain an analytical expression for θ_h and the capability to infer the expected values of noisy circuit outcomes. Compared to the tensor network method <cit.>, OBPPP demonstrates higher accuracy in (b) and (c). Throughout cases (a)-(e), OBPPP also exhibits faster execution times, especially in deeper circuits. (For (a)-(c), tensor network method product each point within 7 minutes) <cit.>. Furthermore, OBPPP does not impose any requirements on the geometric structure of the circuit or the area law of entanglement entropy. On the other hand, sparse Pauli dynamics(SPD) and OBPPP methods share similarities <cit.>. However, a notable difference lies in SPD truncating smaller sine functions after Clifford transformations, while OBPPP directly truncates the Hamming weight of Pauli paths. Consequently, the latter is less affected when computing θ_J=π4 in (f). Moreover, OBPPP can deliver more accurate results than SPD. § CONCLUSIONS AND DISCUSSIONS In this work, we have introduced OBPPP, a novel polynomial-scale method for approximating the noisy cost function ℒ defined by QMVs in VQAs under the independent single-qubit depolarizing noise. This method is based on the truncated path integral on the Pauli basis. In theory, we have proven that the time and space complexity of this method exhibits a polynomial relationship with the number of qubits n, the circuit depth L, the inverse of truncation error 1ε, and the root square inverse of success probability 1√(δ). Additionally, we analyzed how variations in the noise rate λ affect the classical simulatability of VQAs. We have proven that when the noise rate exceeds 1logL, the computational complexity is of Poly(n,L). These results highlight the crucial role that noise plays in shaping the computational efficiency and feasibility of classical simulations. Increasing the number of qubits for fixed error rates is unlikely to be sufficient for achieving quantum advantage. In practice, we have validated the efficiency of OBPPP by successfully performing classical simulations on IBM's Eagle processor in a shorter runtime than quantum hardware, while achieving more accurate QMVs. Furthermore, leveraging Lemma <ref>, we obtained different values of ℒ for various λ, resulting in a good fit to the unmitigated results of raw experimental data. It is important to note that the establishment of our method is based on the following prerequisites: firstly, we restrict the gates in the quantum circuit to be composed of Clifford gates {H, S, CNOT} and single-parameter Pauli rotation gates. Secondly, we require that the set {σ_i,j} defined in Eq. (<ref>) could generate all Pauli words {𝕀,X,Y,Z}^⊗ n. Furthermore, we specify that non-zero elements of the input state's density matrix ρ in computational basis and the Hamiltonian H in Pauli basis scale polynomially with n. Lastly, we focus exclusively on the scenario of the independent single-qubit depolarizing noise. It is worth noting that our approach eliminates the necessity for geometric constraints on quantum devices. As a result, it enables interactions with qubits positioned arbitrarily and facilitates the implementation of multi-qubit rotation gates. In comparison to previous methodologies that relied on 1 and Ω(logn) depth, our method does not impose any assumptions regarding circuit depth. Moreover, we do not require any presuppositions about the randomness of circuit structure, such as 2-design, or the prior distribution of the circuit output, such as anti-concentration. Our research study is primarily focused on examining the effects of the independent single-qubit depolarizing noise, despite it being the most common noise channel. However, exploring other forms of noise remains significance and necessity to fully comprehend their impact. Furthermore, we have prioritized investigating the correlation between noise and computational complexity. However, it is lack of conclusive results and sufficient rigorous proofs regarding noise's impact on the training performance of models. In numerical experiments, there is significant room for optimization and improvement in the current algorithms. For example, In Fig. <ref> (f), M=90 is far from the limit of classical computational capability. We chose this value merely because it reaches the memory limit of our current algorithm on our present computing device. This can be easily improved by optimizing the algorithm and utilizing better devices. Thus we believe stronger results would emerge in the near future. In conclusion, there is a substantial amount of related research that requires further exploration and investigation in the future. We thank Xun Gao, Fan Lu, Ningfeng Wang, Zhaohui Wei, Yusen Wu, Zishuo Zhao and Qin-Cheng Zheng for valuable discussions. S.C was supported by the National Science Foundation of China (Grant No. 12004205). Z.L was supported by NKPs (Grant No. 2020YFA0713000). Y.S, F.W and Z.L were supported by BMSTC and ACZSP (Grant No. Z221100002722017). S.C and Z.L were supported by Beijing Natural Science Foundation (Grant No. Z220002) * § PREPARATION OF DATA §.§ The input state ρ By assumption, the input state ρ in the algorithm has sparsity: ρ=∑_a,bρ_a,b|a⟩⟨b|, where |a⟩ and |b⟩ are computational basis states and there are Poly(n) size of non-zero ρ_a,b. For each element ρ_a,b|a⟩⟨b| in ρ, s_0 (ρ_a,b|a⟩⟨b|) can be calculated by s_0 (ρ_a,b|a⟩⟨b|)=ρ_a,b⟨b|s_0 |a⟩=ρ_a,b∏_j=1^n ⟨b|_j (s_0)|_j |a⟩_j, where |_j is the limitation that limit the operator on j-th qubit and |·⟩_j denotes j-th component of |·⟩. By Eq. (<ref>), s_0 (ρ_a,b|a⟩⟨b|) can be calculated with time (space) complexity n. Then, by the sparsity assumption, s_0 ρ can be calculated with time (space) complexity Poly(n). §.§ The Hamiltonian H By assumption, the Hamiltonian H is a linear combination of Pauli words, and there are Poly(n) size of Pauli words in H with non-zero coefficients. We store H in a tree data structure. Each node in the tree is assigned a Pauli operator. The leaf nodes of the tree correspond to a unique Pauli word and store the corresponding coefficient value of H. As an illustration, consider the Hamiltonian H=1 X_0+1 Z_1+0.5 X_0X_1 in QAOA, which can be represented by the tree depicted as Fig. <ref>. As a result, we can determine H s_L utilizing the tree data structure with time complexity of n and store the tree with space complexity of Poly(n). § PAULI PATH INTEGRAL In the method, we used the Feynman path integral in the Pauli basis to express the cost function ℒ(θ)=H𝒰(θ)ρ𝒰^†(θ) as: ℒ(θ)=∑_s_0,⋯,s_L ∈P_n f(θ,s,H,ρ), where f(θ,s,H,ρ)=Hs_L(∏_i=1^Ls_i𝒰_i s_i-1𝒰_i^†)s_0ρ. To verify the validity of Eq. (<ref>), we first express f(θ,s,H,ρ) as tensor network diagrams: f(θ,s,H,ρ)= Hs_L(∏_i=1^Ls_i𝒰_i s_i-1𝒰_i^†)s_0ρ = [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1,0.5); [rectangle,draw] (H) at (0,0) H; [draw,shape=circle,inner sep=1pt] (sL) at (0.75,0) s_L; [thick] (H) – (sL) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1,0) arc(-90:90:0.25); ∏_i=1^L[baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(2.55,0.5); [draw,shape=circle,inner sep=1pt] (sj) at (0.,0) s_i; [rectangle,draw] (Uj) at (0.75,0) 𝒰_i; [draw,shape=circle,inner sep=-1pt] (sj1) at (1.5,0) s_i-1; [rectangle,draw] (Ujt) at (2.25,0) 𝒰_i^†; [thick] (sj) – (Uj) –(sj1)–(Ujt) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (2.55,0) arc(-90:90:0.25); [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1,0.5); [rectangle,draw] (rho) at (0.75,0) ρ; [draw,shape=circle,inner sep=1pt] (s0) at (0.,0) s_0; [thick] (rho) – (s0) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1,0) arc(-90:90:0.25); = [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1,0.5); [rectangle,draw] (H) at (0,0) H; [draw,shape=circle,inner sep=1pt] (sL) at (0.75,0) s_L; [thick] (H) – (sL) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1,0) arc(-90:90:0.25); ∏_i=1^L[baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1.8,0.5); [draw,shape=circle,inner sep=1pt] (sj) at (0.,0) s_i; [rectangle,draw] (Uj) at (0.75,0) 𝒰_i; [draw,shape=circle,inner sep=-1pt] (sj1) at (1.5,0) s_i-1; [rectangle,draw] (Ujt) at (0.75,0.75) 𝒰_i; [thick] (sj) – (Uj) –(sj1) (l) –(Ujt)– (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1.8,0) arc(-90:90:0.25); [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1,0.5); [rectangle,draw] (rho) at (0.75,0) ρ; [draw,shape=circle,inner sep=1pt] (s0) at (0.,0) s_0; [thick] (rho) – (s0) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1,0) arc(-90:90:0.25); . Using this form, the right side of Eq. (<ref>) can be expressed as ∑_s_0,⋯,s_L ∈P_n f(θ,s,H,ρ)= ∑_s_0,⋯,s_L ∈P_n[baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1,0.5); [rectangle,draw] (H) at (0,0) H; [draw,shape=circle,inner sep=1pt] (sL) at (0.75,0) s_L; [thick] (H) – (sL) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1,0) arc(-90:90:0.25); [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1.8,0.5); [draw,shape=circle,inner sep=1pt] (sj) at (0.,0) s_L; [rectangle,draw] (Uj) at (0.75,0) 𝒰_L; [draw,shape=circle,inner sep=-1pt] (sj1) at (1.5,0) s_L-1; [rectangle,draw] (Ujt) at (0.75,0.75) 𝒰_L; [thick] (sj) – (Uj) –(sj1) (l) –(Ujt)– (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1.8,0) arc(-90:90:0.25); ⋯[baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1.8,0.5); [draw,shape=circle,inner sep=1pt] (sj) at (0.,0) s_1; [rectangle,draw] (Uj) at (0.75,0) 𝒰_1; [draw,shape=circle,inner sep=1.5pt] (sj1) at (1.5,0) s_0; [rectangle,draw] (Ujt) at (0.75,0.75) 𝒰_1; [thick] (sj) – (Uj) –(sj1) (l) –(Ujt)– (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1.8,0) arc(-90:90:0.25); [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1,0.5); [rectangle,draw] (rho) at (0.75,0) ρ; [draw,shape=circle,inner sep=1pt] (s0) at (0.,0) s_0; [thick] (rho) – (s0) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1,0) arc(-90:90:0.25); = [baseline=(current bounding box.center)] (l)at(-0.25,0.75);(r)at(3.9,0.75); [rectangle,draw] (H) at (0,0) H; [rectangle,draw] (Ul) at (0.75,0) 𝒰_L; [rectangle,draw] (Ujt) at (0.75,0.75) 𝒰_L; [rectangle,draw] (U1) at (3,0) 𝒰_1; [rectangle,draw] (U1t) at (3,0.75) 𝒰_1; [] (cdots_down) at (2,0) ⋯; [] (cdots_up) at (2,0.75) ⋯; [rectangle,draw] (rho) at (3.65,0) ρ; [thick] (H) – (Ul)– (cdots_down)–(U1)–(rho) (l) – (Ujt) – (cdots_up)–(U1t)–(r); [thick] (-0.25,0.75) arc(90:270:0.75/2); [thick] (3.9,0) arc(-90:90:0.75/2); = [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(2.55,0.5); [rectangle,draw] (H) at (0,0) H; [rectangle,draw] (U) at (0.75,0) 𝒰; [rectangle,draw] (rho) at (1.5,0) ρ; [rectangle,draw] (Ud) at (2.25,0) 𝒰^†; [thick] (H)–(U)–(rho)–(Ud) (l) – (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (2.55,0) arc(-90:90:0.25); = H𝒰(θ)ρ𝒰^†(θ) = ℒ(θ), where second equality is obtained by the property of orthonormal basis {P_n} ∑_s∈P_n[baseline=(current bounding box.center)] [draw,shape=circle,inner sep=1pt] (s1) at (1.75,0) s; [draw,shape=circle,inner sep=1pt] (s0) at (0.5,0) s; [thick] (0,0)–(s0) (0,0.5)–(0.75,0.5); [thick] (2.25,0)–(s1) (1.5,0.5)–(2.25,0.5); [thick] (1.5,0.5) arc(90:270:0.25); [thick] (0.75,0) arc(-90:90:0.25); = [baseline=(current bounding box.center)] [thick] (0,0)–(1,0) (0,0.5)–(1,0.5); [] at (1.2,0).; In the presence of single-qubit depolarizing noise, the noisy cost function ℒ can be expressed as: ℒ(θ) = H 𝒩^⊗ n(𝒰_L 𝒩^⊗ n(⋯𝒰_1𝒩^⊗ n(ρ) 𝒰_1^†⋯) 𝒰_L^†) = [baseline=(current bounding box.center)] (l)at(-0.25,0.75);(r)at(8.25,0.75); [rectangle,draw] (H) at (0,0) H; [rectangle,draw,minimum height=40] (Nl) at (1,0.4) 𝒩^⊗ n; [rectangle,draw] (Ul) at (2,0) 𝒰_L; [rectangle,draw] (Ujt) at (2,0.75) 𝒰_L; [rectangle,draw,minimum height=40] (Nl1) at (3,0.4) 𝒩^⊗ n; [rectangle,draw,minimum height=40] (N1) at (5,0.4) 𝒩^⊗ n; [rectangle,draw,minimum height=40] (N0) at (7,0.4) 𝒩^⊗ n; [rectangle,draw] (U1) at (6,0) 𝒰_1; [rectangle,draw] (U1t) at (6,0.75) 𝒰_1; [] (cdots_down) at (4,0) ⋯; [] (cdots_up) at (4,0.75) ⋯; [rectangle,draw] (rho) at (8,0) ρ; [thick] (H)–(H-| Nl.west) (Ul-|Nl.east)–(Ul)–(Ul-| Nl1.west) (cdots_down-|Nl1.east)–(cdots_down)–(cdots_down-| N1.west) (U1-|N1.east)–(U1)–(U1-| N0.west) (rho-|N0.east)–(rho) (l)–(l-| Nl.west) (Ujt-| Nl.east)–(Ujt)–(Ujt-| Nl1.west) (cdots_up-|Nl1.east)– (cdots_up)–(cdots_up-| N1.west) (U1t-|N1.east)–(U1t)–(U1t-| N0.west) (r-|N0.east)–(r); [thick] (-0.25,0.75) arc(90:270:0.75/2); [thick] (8.25,0) arc(-90:90:0.75/2); =∑_s_0,⋯,s_L ∈P_n[baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(2.25,0.5); [rectangle,draw] (H) at (0,0) H; [rectangle,draw,minimum height=40] (N) at (1,0.4) 𝒩^⊗ n; [draw,shape=circle,inner sep=1pt] (sL) at (2,0) s_L; [thick] (H)–(H-| N.west) (sL-|N.east)–(sL) (l)–(l-| N.west) (r-|N.east)–(r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (2.25,0) arc(-90:90:0.25); [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(3.05,0.5); [draw,shape=circle,inner sep=1pt] (sj) at (0.,0) s_L; [rectangle,draw] (Uj) at (0.75,0) 𝒰_L; [draw,shape=circle,inner sep=-1pt] (sj1) at (2.75,0) s_L-1; [rectangle,draw] (Ujt) at (0.75,0.75) 𝒰_L; [rectangle,draw,minimum height=40] (N) at (1.75,0.4) 𝒩^⊗ n; [thick] (sj)–(Uj)–(Uj-| N.west) (sj1-|N.east)–(sj1) (l) –(l-|Ujt.west) (Ujt)–(Ujt-| N.west) (r-|N.east)– (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (3.05,0) arc(-90:90:0.25); ⋯[baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(3,0.5); [draw,shape=circle,inner sep=1pt] (sj) at (0.,0) s_1; [rectangle,draw] (Uj) at (0.75,0) 𝒰_1; [draw,shape=circle,inner sep=1.5pt] (sj1) at (2.75,0) s_0; [rectangle,draw] (Ujt) at (0.75,0.75) 𝒰_1; [rectangle,draw,minimum height=40] (N) at (1.75,0.4) 𝒩^⊗ n; [thick] (sj)–(Uj)–(Uj-| N.west) (sj1-|N.east)–(sj1) (l) –(l-|Ujt.west) (Ujt)–(Ujt-| N.west) (r-|N.east)– (r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (3,0) arc(-90:90:0.25); [baseline=(current bounding box.center)] (l)at(-0.25,0.5);(r)at(1.25,0.5); [rectangle,draw] (rho) at (1,0) ρ; [rectangle,draw,minimum height=40,opacity=0] (N) at (1,0.4) 𝒩^⊗ n; [draw,shape=circle,inner sep=1pt] (s0) at (0.,0) s_0; [thick] (rho)–(s0) (l)–(r); [thick] (-0.25,0.5) arc(90:270:0.25); [thick] (1.25,0) arc(-90:90:0.25); =H𝒩^⊗ n(s_L)s_L𝒰_L 𝒩^⊗ n(s_L-1)𝒰_L^†⋯s_1𝒰_1 𝒩^⊗ n(s_0)𝒰_1^†s_0ρ. Thus, we can express the noisy cost function ℒ as the sum of the contributions of all Pauli paths, given by ℒ̂(θ)=∑_s∈P^L+1_nf̂(θ,s,H,ρ), where f̂(θ,s,H,ρ)=H𝒩^⊗ n(s_L)(∏_i=1^Ls_i𝒰_i 𝒩^⊗ n(s_i-1)𝒰_i^†)s_0ρ. § ALGORITHM Our algorithm calculates the contributions of the Pauli path with s≤ M to provide an approximation of noisy cost function ℒ. The approximate noisy cost function can be expressed as: ℒ(θ)=∑_s≤ Mf̂(θ,s,H,ρ)=∑_m=0^M (1-λ)^m ∑_s=m f(θ,s,H,ρ). Generally, it is a significant challenge to evaluate all the Pauli paths with Hamming weight less than M. But owing to the remark of Proposition <ref>, most Pauli paths have zero contribution in the path integrals. We introduce the following method for enumerating all Pauli paths with non-zero contribution and s≤ M. The key idea of this method is based on the sparsity of H and the observation in Proposition <ref> that for any Pauli path s with f(θ,s,H,ρ)≠ 0, if one of s_i-1|_σ_i,j or s_i|_σ_i,j is fixed, then the other one has at most two cases, which holds for all i,j. Likewise, if s_i-1|_V_i,k (or s_i-1|_I_i) or s_i|_V_i,k (or s_i|_I_i) is fixed, the other one has only one case, which holds for all i,k. Moreover, for Pauli path s that has a non-zero contribution, s_i > 0 for i=0,⋯,L is required, otherwise at some layer s_i will be trivial, which will lead to s_i+1𝒰_i s_i 𝒰_i^†=s_i+1 s_i. To avoid trivial contribution, there must have s_L=⋯=s_i+1=s_i=(𝕀/√(2))^⊗ n. Without loss of generality, we can set H=0 (or replace H by H-H I), which leads to H s_L=0. Thus for Pauli path s with Hamming wight s≤ M and non-zero contribution, there must be s_L+⋯+s_L-i≤ M-(L-i). The back-propagation process for searching Pauli paths is as follows: * We begin by selecting s_L. In order to ensure that Hs_L≠0, s_L can only be selected from Pauli words in H. Owing to the assumption that Hamiltonian H is a linear combination of Pauli words in Polynomial size of n, there are at most Poly(n) cases for s_L. The time and space complexity of enumerating s_L are Poly(n). * For each case of s_L, the next step is to explore all potential s_L-1. There are at most s_L non-identity elements in {s_L|_σ_L,j}∪{s_L|_V_L,k}∪{s_L|_I_L}, corresponding to at most s_L non-identity elements in {s_L-1|_σ_L,j}∪{s_L-1|_V_L,k}∪{s_L-1|_I_L}. Furthermore, each element has at most two potential candidates, resulting in at most 2^s_L cases for s_L-1. In addition, we need to eliminate the cases in which s_L+s_L-1> M-(L-1). The time complexity of enumerating s_L-1 for a given s_L is n2^s_L and the space complexity is n. * Similarly, repeat step (2) to enumerate s_L-2 for each case of s_L-1 and eliminate candidates with s_L+s_L-1+s_L-2> M-(L-2). We obtain up to 2^s_L-1 cases for s_L-2, with time complexity n2^s_L-1 for a given s_L-1 and space complexity n. Repeating this process, we can enumerate all s_L-2,⋯,s_0. From the above discussion, given any s_L, the number of different Pauli paths output is at most 2^s_1+⋯+s_L≤ 2^M. Alternatively, for a given s_L we consider all Pauli paths s with s_L as the end element, s≤ M and f(θ,s,H,ρ)≠ 0 It can be considered as a tree starting from s_L and the new branching will only occur when s_i|_σ_i,j is not identity and σ_i,j∈ AC(i,s_i). While the number of non-identity elements in {s_i|_σ_i,j} is at most M, the number of possible cases is at most 2^M. Thus, to compute all contributions of the Pauli path with s≤ M, we need to calculate at most Poly(n) 2^M different Pauli paths. In step (1) the time cost is Poly(n). In step (2), considering all cases of s_L, the time cost is ∑_s_Ln2^s_L. In step (3), considering all case of s_L-1, the time cost is ∑_s_L∑_s_L-1(s_L)n2^s_L-1(s_L), where s_L-1(s_L) denotes the output of step (2) corresponding to a given s_L. Similar results hold for s_L-2,⋯,s_0. Thus, the time complexity of the above process is Poly(n)+∑_s_Ln2^s_L+∑_s_L∑_s_L-1(s_L)n2^s_L-1(s_L) +∑_s_L∑_s_L-1(s_L)∑_s_L-2(s_L,s_L-1)n2^s_L-2(s_L,s_L-1)+ ⋯ ≤Poly(n)n L 2^M=Poly(n) L 2^M. Here the inequality holds, because ∑_s_L⋯∑_s_i-1(s_L,⋯,s_i)n2^s_i-1(s_L,⋯,s_i) =∑_s_L⋯∑_s_i(s_L,⋯,s_i+1)∑_s_i-1(s_L,⋯,s_i)n2^s_i-1(s_L,⋯,s_i) (By s_i-1+⋯ +s_L≤ M) ≤∑_s_L⋯∑_s_i(s_L,⋯,s_i+1)∑_s_i-1(s_L,⋯,s_i)n2^M-(s_i(s_L,⋯,s_i+1)+⋯+s_L) (By #s_i-1≤ 2^s_i) ≤∑_s_L⋯∑_s_i(s_L,⋯,s_i+1)n2^M+s_i(s_L,⋯,s_i+1)-(s_i(s_L,⋯,s_i+1)+⋯+s_L) =∑_s_L⋯∑_s_i(s_L,⋯,s_i+1)n2^M-(s_i+1(s_L,⋯,s_i+2)+⋯+s_L) ⋮ ≤∑_s_L∑_s_L-1(s_L) n2^M-s_L ≤∑_s_L n2^M=Poly(n)n2^M. The space complexity of the above process is Poly(n)+∑_i=1^L(n)≤Poly(n)+nL. After obtaining candidates of Pauli path s=(s_0,⋯,s_L), the next step is computing its contribution f̂(θ,s,H,ρ). For each Pauli path s, it is possible to determine f(θ,s,H,ρ) with time complexity nL+Poly(n) using Eq. (<ref>) and Proposition <ref>. Thus, the overall time cost for computing ℒ is about ( nL + Poly(n) ) Poly(n) 2^M+Poly(n) L 2^M = Poly(n) L 2^M. The process of our algorithm is summarized in Algorithm <ref>. § PROOF OF LEMMA <REF> In Lemma <ref>, we expressed the contribution of a Pauli path s=(s_0,⋯,s_L)∈P^L+1_n in cost function L(θ) as: f(θ,s,H,ρ)=Hs_L(∏_i=1^Ls_i𝒰_i s_i-1𝒰_i^†)s_0ρ. By Eq. (<ref>), the contribution of a Pauli path s=(s_0,⋯,s_L)∈P^L+1_n in noisy cost function ℒ(θ) can be expressed as: f̂(θ,s,H,ρ)=H𝒩^⊗ n(s_L)(∏_i=1^Ls_i𝒰_i 𝒩^⊗ n(s_i-1)𝒰_i^†)s_0ρ, where 𝒩 is single qubit depolarization channel 𝒩(ϕ)=(1-λ)ϕ+λϕ/2𝕀. For i=1,⋯,L, we have 𝒩^⊗ n(s_i)=𝒩(s_i|_1)⊗⋯⊗𝒩(s_i|_n), where notation |_j represents that limit the operator on j-th qubit. Simple calculations show that 𝒩(𝕀/√(2))=𝕀/√(2), 𝒩(X/√(2))=(1-λ)X/√(2), 𝒩(Y/√(2))=(1-λ)Y/√(2) and 𝒩(Z/√(2))=(1-λ)Z/√(2). Thus, for any s_i∈P_n={𝕀/√(2),X/√(2),Y/√(2),Z/√(2)}^⊗ n, we have 𝒩^⊗ n(s_i)=(1-λ)^s_is_i, where s_i denotes the number of non-identity elements in s_i. So we get f̂(θ,s,H,ρ)=(1-λ)^sf(θ,s,H,ρ), where s=s_0+⋯+s_L. These complete the proof of Lemma <ref>. § PROOF OF PROPOSITION <REF> In Proposition <ref>, we claimed that the elements corresponding to each layer in f can be calculated as the following rules with time cost n: s_i𝒰_i s_i-1𝒰_i^† =(s_i s_i-1)|_I_i∏_k=1^C_i(s_iV_i,k s_i-1V_i,k^†)|_V_i,k∏_σ_i,j∈ C(i,s_i-1)(s_i s_i-1)|_σ_i,j ∏_σ_i,j'∈ AC(i,s_i-1){(s_i s_i-1)|_σ_i,j'cosθ_i,j'- (is_i σ_i,j's_i-1)|_σ_i,j'sinθ_i,j'}. Here the set C(i,s_i-1) and AC(i,s_i-1) denote the sets of Pauli words in {σ_i,j|j=1,⋯,R_i} commute and anti-commute with s_i-1, respectively. The symbol |_σ_i,j denotes that limit the operator on the qubits of σ_i,j non-trivially applied, |_V_i,k denotes that limit the operator on the qubits with Clifford gates V_i,k non-trivially applied and |_I_i denotes the limitation on the qubits without gates applied in i-th layer. In our setting, 𝒰_i is composed by a series of gates U_i,1,⋯,U_i,R_i and V_i,1,⋯,V_i,C_i without operating twice on each qubit. Pauli rotation gates U_i,j(θ_i,j) have form U_i,j(θ_i,j)=exp-i θ_i,j/2σ_i,j, as described in Eq. (<ref>). Clifford gates V_i,k can be chosen from {H(a),S(a),CNOT(a,b) }_ a≠ b. Then s_i𝒰_i s_i-1𝒰_i^†=(s_i s_i-1)|_I_i∏_k=1^C_i(s_iV_i,k s_i-1V_i,k^†)|_V_i,k∏_j=1^R_i(s_i U_i,j(θ_i,j) s_i-1 U_i,j^†(θ_i,j)) |_σ_i,j. The exponent of a Hermitian operate X is defined as Taylar expansion expX=∑_k=0^∞X^k/k!. When calculating rotation on Pauli words, the square of any Pauli word is identity σ^2=𝕀, thus we have exp-i θ/2σ=∑_k=0^∞(-i θ/2σ)^k/k!=∑_k=0^∞(-1)^k (θ/2)^2k/(2k)!𝕀- i(-1)^k (θ/2)^2k+1/(2k+1)!σ=cosθ/2𝕀-i sinθ/2σ. Therefore, according to the exchange relation of σ and another Pauli word σ', we have σ'exp-i θ/2σ=exp-i θ/2σσ', if σ commutes with σ'. σ'exp-i θ/2σ=expi θ/2σσ', if σ anti-commutes with σ'. So we can divide {σ_i,j} into two case. If σ_i,j commutes with s_i-1, we have (s_i U_i,j(θ_i,j) s_i-1 U_i,j^†(θ_i,j))|_σ_i,j =(s_i exp-i θ_i,j/2σ_i,jexpi θ_i,j/2σ_i,js_i-1) |_σ_i,j =(s_i exp-i θ_i,j/2σ_i,jexpi θ_i,j/2σ_i,js_i-1) |_σ_i,j =(s_i s_i-1)|_σ_i,j. While σ_i,j anti-commutes with s_i-1, we have (s_i U_i,j(θ_i,j) s_i-1 U_i,j^†(θ_i,j))|_σ_i,j =(s_iexp-i θ_i,j/2σ_i,j s_i-1expi θ_i,j/2σ_i,j) |_σ_i,j =( s_i exp-iθ_i,jσ_i,j s_i-1) |_σ_i,j =(s_i (cosθ_i,j𝕀-isinθ_i,jσ_i,j) s_i-1) |_σ_i,j =(s_i s_i-1)|_σ_i,jcosθ_i,j-(i s_i σ_i,js_i-1)|_σ_i,jsinθ_i,j. These complete the proof of Eq. (<ref>). § PROOF OF LEMMA <REF> In the Lemma <ref>, we claimed that if the set of Pauli words {σ_i,j} can generate {𝕀,X,Y,Z}^⊗ n up to phase, then for any distinct Pauli paths s and s^' we have 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)=0, where σ_i,j is defined in Eq. (<ref>): σ_i,j= 𝒱_L⋯𝒱_iσ_i,j𝒱_i^†⋯𝒱_L^†. In order to prove this claim, we first define the “split" relation between sets of Pauli words: There are two sets of Pauli words A and B. We define B to be split by A if there exist no two distinct elements in B that exhibit identical anti-commute/commute relation with each element in A. The name “split" is used because if A can split B, then any element in B can be uniquely determined by characterizing its exchange relation with each element in A. In a sense, A separates each element in B into independent parts by characterizing their exchange relationship. Before the discussion, we introduce the following lemma: Assume 𝒫, σ_a, σ_b and σ_c are Pauli words. If 𝒫σ_a and 𝒫σ_b have the same commute or anti-commute relation with σ_c. Then σ_a and σ_b have the same commute or anti-commute relation with σ_c. First, we assume 𝒫σ_a and 𝒫σ_b commute with σ_c, for i=a,b it can be expressed as 𝒫σ_iσ_c=σ_c𝒫σ_i=±𝒫σ_cσ_i, where the sign ± is set to + if and only if 𝒫 commutes with σ_c. This leads to σ_iσ_c=±σ_cσ_i, where the sign ± is same as Eq. (<ref>), for i=a,b. In a similar way to discuss the case of 𝒫σ_a and 𝒫σ_b anti-commute with σ_c, we obtain σ_a and σ_b have the same commute or anti-commute relation with σ_c. We will demonstrate that if {σ_i,j} can split Pauli word set {σ} which linearly compose H, then a similar conclusion can be established. Suppose the set of Pauli words {σ_i,j} can split the Pauli word set {σ} of H. Then for any distinct Pauli paths s≠ s^'∈P^L+1_n, we have 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)=0. Note that 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ) =Hs_LHs'_L(∏_i=1^L𝔼_θ_is_i𝒰_i s_i-1𝒰_i^†s'_i𝒰_i s'_i-1𝒰_i^†)s_0ρs'_0ρ, Thus 𝔼_θ_is_i𝒰_i s_i-1𝒰_i^†s'_i𝒰_i s'_i-1𝒰_i^†=0 for some i, leads to 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)=0. By Proposition <ref>, the element corresponding to i-th layer of Eq. (<ref>) can be written as 𝔼_θ_i s_i𝒰_i s_i-1𝒰_i^†s'_i𝒰_i s'_i-1𝒰_i^† = (s_i s_i-1)|_I_i(s'_i s'_i-1)|_I_i∏_k=1^C_i(s_iV_i,k s_i-1V_i,k^†)|_V_i,k(s'_iV_i,k s'_i-1V_i,k^†)|_V_i,k ∏_σ_i,j∈ C(i,s_i-1)(s_i s_i-1)|_σ_i,j∏_σ_i,l∈ C(i,s_i-1')(s'_i s'_i-1)|_σ_i,l 𝔼_θ_i{∏_σ_i,j'∈ AC(i,s_i-1)[ (s_i s_i-1)|_σ_i,j'cosθ_i,j' - (is_i σ_i,j's_i-1)|_σ_i,j'sinθ_i,j']. .∏_σ_i,l'∈ AC(i,s_i-1')[ (s'_i s'_i-1)|_σ_i,l'cosθ_i,l' - (is'_i σ_i,l's'_i-1)|_σ_i,l'sinθ_i,l']}. The following proof is divided into two parts. In the first part of the proof, we show that if {σ_i,j} can split the Pauli word set {σ} of H, then s_L= s_L', otherwise 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)=0. If there are s_i-1 and s_i-1' have different exchange relation with σ_i,j at i-th layer. Without loss of generality, we assume s_i-1 commutes with σ_i,j, whereas s_i-1' anti-commutes with σ_i,j. By the anti-commutation, i σ_i,j s_i-1' is a normalized Pauli word with a sign factor ±. As described in the remark of Proposition <ref>, we have s_i'|_σ_i,j=s_i-1'|_σ_i,j with factor cosθ_i,j or s_i'|_σ_i,j=(iσ_i,j s_i-1')|_σ_i,j (up to sign ±) with factor sinθ_i,j, otherwise s'_i𝒰_i s'_i-1𝒰_i^†=0. However, there is still 𝔼_θ_is_i𝒰_i s_i-1𝒰_i^†s'_i𝒰_i s'_i-1𝒰_i^†=0, because of 𝔼_θ_i,jsinθ_i,j=𝔼_θ_i,jcosθ_i,j=0. Thus, for any layer i=1,⋯,L and j=1,⋯,R_i, Pauli words s_i-1 and s_i-1' have the same exchange relation with σ_i,j. In this setting, Eq. (<ref>) can be written as 𝔼_θ_is_i𝒰_i s_i-1𝒰_i^†s'_i𝒰_i s'_i-1𝒰_i^† = (s_i s_i-1)|_I_i(s'_i s'_i-1)|_I_i∏_k=1^C_i(s_iV_i,k s_i-1V_i,k^†)|_V_i,k(s'_iV_i,k s'_i-1V_i,k^†)|_V_i,k ∏_σ_i,j∈ C(i,s_i-1)(s_i s_i-1)|_σ_i,j(s'_i s'_i-1)|_σ_i,j ∏_σ_i,j'∈ AC(i,s_i-1)𝔼_θ_i,j'{[ (s_i s_i-1)|_σ_i,j'cosθ_i,j' - (is_i σ_i,j's_i-1)|_σ_i,j'sinθ_i,j']. . [ (s'_i s'_i-1)|_σ_i,j'cosθ_i,j' - (is'_i σ_i,j's'_i-1)|_σ_i,j'sinθ_i,j']} = (s_i s_i-1)|_I_i(s'_i s'_i-1)|_I_i∏_k=1^C_i(s_iV_i,k s_i-1V_i,k^†)|_V_i,k(s'_iV_i,k s'_i-1V_i,k^†)|_V_i,k ∏_σ_i,j∈ C(i,s_i-1)(s_i s_i-1)|_σ_i,j(s'_i s'_i-1)|_σ_i,j ∏_σ_i,j'∈ AC(i,s_i-1)[(s_i s_i-1)|_σ_i,j'(s'_i s'_i-1)|_σ_i,j'𝔼_θ_i,j'(cosθ_i,j')^2 +(is_i σ_i,j's_i-1)|_σ_i,j'(is'_i σ_i,j's'_i-1)|_σ_i,j'𝔼_θ_i,j'(sinθ_i,j')^2], where the last equality is given by 𝔼_θ_i,jsinθ_i,jcosθ_i,j=0. Similarly, Eq. (<ref>) implies that up to sign ±, s_i|_I_i=s_i-1|_I_i , s'_i|_I_i=s'_i-1|_I_i, and s_i|_V_i,k=(V_i,k s_i-1V_i,k^†)|_V_i,k , s'_i|_V_i,k=(V_i,k s'_i-1V_i,k^†)|_V_i,k for k=1,⋯,C_i. If not, then s_i𝒰_i s_i-1𝒰_i^†s'_i𝒰_i s'_i-1𝒰_i^†=0. For j∈{1,⋯,R_i} such that σ_i,j∈ C(i,s_i-1), we have s_i|_σ_i,j=s_i-1|_σ_i,j and s'_i|_σ_i,j=s'_i-1|_σ_i,j; otherwise (s_i s_i-1)|_σ_i,j(s'_i s'_i-1)|_σ_i,j=0. For j' being an index such that σ_i,j'∈ AC(i,s_i-1), we have two cases: * s_i|_σ_i,j'=s_i-1|_σ_i,j' and s'_i|_σ_i,j'=s'_i-1|_σ_i,j'. * s_i|_σ_i,j'= (iσ_i,j' s_i-1)|_σ_i,j' and s_i|_σ_i,j'= (iσ_i,j' s'_i-1)|_σ_i,j' (up to a sign ±). If neither of these two cases holds, the equation Eq. (<ref>) is equal to zero. We denote the product of these iσ_i,j' acting on s_i-1 and s'_i-1 as operator 𝒫_i. Then combined with Eq. (<ref>), we obtain s_i= 𝒫_i 𝒱_i s_i-1𝒱_i^†, s_i'=𝒫_i 𝒱_i s_i-1' 𝒱_i^† up to sign ±, for i=1,⋯ ,L. Thus, there must be s_i-1 =𝒱_i^†𝒫_i⋯𝒱_L^†𝒫_L s_L 𝒱_L⋯𝒱_i, s'_i-1 = 𝒱_i^†𝒫_i⋯𝒱_L^†𝒫_L s'_L 𝒱_L⋯𝒱_i up to sign ±. As discussed before, s_i-1 and s_i-1' have the same exchange relation with σ_i,j, leads to 𝒫_i𝒱_i+1^†⋯𝒱_L^†𝒫_L s_L 𝒱_L ⋯𝒱_i+1 and 𝒫_i𝒱_i+1^†⋯𝒱_L^†𝒫_L s'_L 𝒱_L⋯𝒱_i+1 have the same exchange relation with 𝒱_iσ_i,j𝒱_i^†. According to Lemma <ref>, 𝒱_i+1^†𝒫_i+1⋯𝒱_L^†𝒫_L s_L 𝒱_L⋯𝒱_i+1 and 𝒱_i+1^†𝒫_i+1⋯𝒱_L^†𝒫_L s'_L 𝒱_L⋯𝒱_i+1 have the same exchange relation with 𝒱_iσ_i,j𝒱_i^†. Repeating this process, we get s_L and s'_L have the same exchange relation with 𝒱_L⋯𝒱_iσ_i,j𝒱_i^†⋯𝒱_L^†=σ_i,j. As a result, s_L and s_L' have the same exchange relation with each element in {σ_i,j}. On the other hand, s_L and s'_L are contained in the Pauli word set {σ} of H, otherwise Hs_LHs'_L=0. This leads to conclude that s_L=s_L' due to the split assumption. In the second part of the proof, we demonstrate that s_L=s_L' implies s_i=s_i' for i=0,⋯,L, otherwise 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)=0. To prove this claim, it suffices to show that if 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)≠ 0, and s_i=s_i', then s_i-1=s_i-1'. Given that s_i=s_i' and by C(i,s_i-1)=C(i,s_i)=C(i,s'_i-1) and AC(i,s_i-1)=AC(i,s_i)=AC(i,s'_i-1), Eq. (<ref>) can again be rewrited as Eq. (<ref>). Thus, we have s_i-1|_I_i=s'_i-1|_I_i=s_i|_I_i, s_i-1|_V_i,k=s'_i-1|_V_i,k=(V_i,k^† s_i V_i,k) |_V_i,k up to sign ± for k=1,⋯,C_i, otherwise s_i𝒰_i s_i-1𝒰_i^†s'_i𝒰_i s'_i-1𝒰_i^†=0. Suppose s_i-1|_σ_i,j≠ s'_i-1|_σ_i,j for some σ_i,j∈ C(i,s_i), then either (s_i s_i-1)|_σ_i,j or (s_i' s_i-1')|_σ_i,j is zero, resulting in Eq. (<ref>) being 0. Similarly, suppose s_i-1|_σ_i,j'≠ s'_i-1|_σ_i,j' for some σ_i,j'∈ AC(i,s_i), then both (s_i s_i-1)|_σ_i,j'(s'_i s'_i-1)|_σ_i,j' and (is_i σ_i,j's_i-1)|_σ_i,j'(is'_i σ_i,j's'_i-1)|_σ_i,j' are zero, resulting in Eq. (<ref>) equal to 0. Based on the above discussion, when s_i=s_i' we must have s_i-1=s_i-1' for i=1,⋯,L, otherwise 𝔼_θf(θ,s,H,ρ)f(θ,s^',H,ρ)=0. This finished the proof of the second part. If {σ_i,j} can split {𝕀,X,Y,Z}^⊗ n, it obviously can split {σ}. We will use a lemma to explain the equivalence between {σ_i,j} split {𝕀,X,Y,Z}^⊗ n and {σ_i,j} generates {𝕀,X,Y,Z}^⊗ n up to phase. Before presenting the lemma, we must clarify the definition of a Pauli set A that generates {𝕀,X,Y,Z}^⊗ n. We say a Pauli set A generates {𝕀,X,Y,Z}^⊗ n up to phase means that ⟨ A ⟩/( ⟨ A ⟩∩⟨ i𝕀^⊗ n⟩) = ℙ^n, where ℙ_n:=PG_n/⟨ i𝕀^⊗ n⟩ and PG_n is the n-qubit Pauli group. In this expression, the notation ⟨ A ⟩ refers to the Pauli subgroup that is generated by set A (the finite product of elements and their inverses in A). And the quotient is used to remove the effect of the phase factor. Essentially, this representation means that A generates {𝕀,X,Y,Z}^⊗ n up to phase {e^iψ|ψ=0,π/2,π,3π/2}. A splits {𝕀,X,Y,Z}^⊗ n if and only if ⟨ A⟩/(⟨ A⟩∩⟨ i𝕀^⊗ n⟩)=ℙ^n, where ℙ_n:=PG_n/⟨ i𝕀^⊗ n⟩. Suppose ⟨ A⟩/(⟨ A⟩∩⟨ i𝕀^⊗ n⟩)=ℙ^n. Take b_1,b_2∈{𝕀,X,Y,Z}^⊗ n such that b_1 and b_2 have the same exchange relation with every a∈ A. Since [b_1b_2,a]=0 for all a∈ A, we have [b_1b_2,g]=0 for all g∈⟨ i𝕀^⊗ n,A⟩=PG_n, which implies b_1b_2∈⟨ i𝕀^⊗ n⟩. Combined with b_1,b_2∈{𝕀,X,Y,Z}^⊗ n, we have b_1=b_2. On the other hand, suppose A splits {𝕀,X,Y,Z}^⊗ n, the goal is to prove that ⟨ A⟩/(⟨ A⟩∩⟨ i𝕀^⊗ n⟩)=ℙ^n. To prove this claim, it suffices to show that if ⟨ A⟩/(⟨ A⟩∩⟨ i𝕀^⊗ n⟩)≠ℙ^n, then there exist a non-identity Pauli word commutes with each element of A. We set C=⟨ A⟩/(⟨ A⟩∩⟨ i𝕀^⊗ n⟩) and then consider the C^*-algebras ℳ and 𝒫 generated by C and ℙ^n, respectively. By Von Neumann bicommutant theorem <cit.>, we have ℳ”=ℳ=ℳ. Since C≠ℙ^n, we can conclude that ℳ≠𝒫 as Pauli words constitute an orthonormal basis of the matrix algebra, resulting in ℳ”≠𝒫. This implies the existence of non-identity elements in ℳ'. In other words, there exists a non-trivial element x=c_1P_1+c_2P_2+⋯ (P_i stands for different Pauli words and c_i∈ℂ) which commutes with every element in A. This leads to the conclusion that {P_1,P_2,⋯} also commute with every element in A and they cannot all be identical. This finished the proof of Lemma <ref>. In addition, an equivalent proof can be found in <cit.>. § PROOF OF COROLLARY <REF> By Lemma <ref>, we can set M=1/2λlnH_∞^2/ν, while the mean-square error(MSE) 𝔼_θℒ-ℒ^2 is below ν. The total time cost for obtaining ℒ is about: Poly(n) L 2^M = Poly(n) L( H_∞/√(ν))^1/λ =Poly(n,L,1/√(ν),H_∞). § PROOF OF THEOREM <REF> From Eq. (<ref>), we have shown that 𝔼_θℒ-ℒ^2 ≤ (1-λ)^2MH_∞^2 ≤exp-2λ MH_∞^2. By Markov's inequality, ℒ-ℒ≥1/√(δ)√(𝔼_θℒ-ℒ^2)=ℒ-ℒ^2≥1/δ𝔼_θℒ-ℒ^2≤δ. Therefore, with probability at least 1-δ over parameters θ , we have ℒ-ℒ≤1/√(δ)√(𝔼_θℒ-ℒ^2)≤1/√(δ)exp-λ MH_∞. Let ε be the desired error, then 1/√(δ)exp-λ MH_∞≤ε can be satisfied when M≥1/λlnH_∞/ε√(δ). We can set M=1/λlnH_∞/ε√(δ) to meet the requirements, and the time complexity for obtaining observable ℒ at this point is: Poly(n) L 2^M =Poly(n) L(H_∞/ε√(δ))^1/λ =Poly(n,L,1/ε,1/√(δ),H_∞). This finished the proof of Theorem <ref>. § PROOF OF PROPOSITION <REF> The Proposition <ref> discussed two cases for λ=Ω(1/log L) and λ=1/L. For λ=Ω(1/log L), we need to calculate ℒ with the MSE 𝔼_θℒ-ℒ^2 less than a sufficiently small constant c. From Eq. (<ref>), we have 𝔼_θℒ-ℒ^2 ≤ (1-λ)^2MH_∞^2 ≤exp-2λ MH_∞^2. Then exp-2λ MH_∞^2 ≤ c can be satisfied when M≥1/2λlnH_∞^2/c∼1/λ. By setting M∼1/λ, the total runtime for obtaining observable ℒ is Poly(n) L 2^M = Poly(n,L) 2^1/λ = Poly(n,L) L^1 = Poly(n,L). For λ=1/L, we will construct a specific example, under which our method will have to incur exponential time cost with respect to L in order to achieve a sufficiently small MSE. We consider a special VQA algorithm, the ansätz consists of a layer of R_Z gates acted on each qubit, a layer of R_X gates acted on each qubit and L-2 layers R_X gates acted on the first qubit, shown in Fig. <ref>. The initial state is set as ρ=|0⟩⟨0|^⊗ n, and the Hamilton H=Z_1+Y_1, the cost function is defined as Eq. (<ref>). If we truncate noisy cost function for s≤ L, the approximate cost function can be expressed as: ℒ'(θ)=∑_s≤ Lf̂(θ,s,H,ρ)=∑_m=0^L (1-λ)^m ∑_s=m f(θ,s,H,ρ). Before considering the difference between ℒ' and ℒ, we first consider the noiseless situation. The unitary on the first qubit can be expressed as : U(θ)|_1=exp-iθ_2,1+⋯+θ_L,1/2X_1exp-iθ_1,1/2Z_1. We denote α=θ_2,1+⋯+θ_L,1/2, the noiseless cost function can be expressed as: ℒ(θ) =⟨0|exp-iθ_1,1/2Z_1^†exp-i α X_1^† (Z_1+Y_1) exp-i α X_1exp-iθ_1,1/2Z_1|0⟩ =cos2α-sin2α. Note that in Appendix <ref>, we have discussed for Pauli path s with non-zero contribution, must have s_i > 0 for i=0,⋯,L. Thus s≥ L+1 is required. Conversely, when s> L+1, there exist a qubit k≠ 1 such that s_i'|_k is not identity for some i', leads to f(θ,s,H,ρ)=0 by H|_k=𝕀. Thus the Pauli paths s with non-zero contribution to ℒ(θ) must obey s=L+1. As a result, we can conclude that ℒ(θ)=∑_s∈P^L+1_n f(θ,s,H,ρ)=∑_s=L+1 f(θ,s,H,ρ). Combine Eq. (<ref>) and Eq. (<ref>), we know 𝔼_θℒ(θ)^2=𝔼_θ[∑_s=L+1 f(θ,s,H,ρ) ]^2=𝔼_α(cos2α-sin2α)^2=𝔼_α[1-sin4α]=1. The last equality in the above equation can be verified as follows. Since α follows a generalized Irwin-Hall distribution, and its characteristic function φ_α(t)=𝔼[e^itα] can be expressed as (e^iπ/2t-e^-iπ/2t/iπ t)^L-1, we have 𝔼_α[sin4α]=Im 𝔼[e^i4α]=Im φ_α(4)=Im (e^2π i-e^-2π i/4π i)^L-1=0. The MSE between ℒ' and ℒ can be estimated as 𝔼_θℒ'-ℒ^2 =𝔼_θ[∑_s> Lf̂(θ,s,H,ρ)]^2 =𝔼_θ[∑_s=L+1f̂(θ,s,H,ρ)]^2 = (1-λ)^2(L+1)𝔼_θ[∑_s=L+1f(θ,s,H,ρ)]^2 =(1-λ)^2(L+1). By Bernoulli's inequality, for r≥ 1 and x≥ -1, we have (1+x)^r≥ 1+rx. Owing to λ=1/L, there is a constant c to make c≥λ (L+1). Therefore, we have (1-λ)^2(L+1)= (1-λ)^2(L+1)/4c 4c≥(1-λ2(L+1)/4c)^4c≥(1/2)^4c, leads to 𝔼_θℒ'-ℒ^2 =Ω(1). So the truncation s≤ L is not enough. While the number of Pauli paths with non-zero contribution and weight s=L+1 is about 2^L-1 (s_0=Z_1,s_1=Z_1,s_2=Z_1 or Y_1,⋯,s_L=Z_1 or Y_1). This leads to exponential complexity about L. § COMPUTATIONAL COST ANALYSIS The computational cost of our method is positively correlated to the number of Pauli paths that have a non-zero contribution. To analyze the computational cost numerically, we examine how the number of Pauli paths is affected by n, L, and M. We take QAOA as an example to numerically analyze the computational cost of our method. For a given qubit number n, we randomly generate a n× n adjacency matrix, in which the probability of each entry being 1 is 0.5. The Hamiltonian H is derived from the MaxCut problem, which corresponds to the adjacency matrix. The initial state is set as ρ=|0⟩⟨0|^⊗ n, and the cost function is defined by Eq. (<ref>). The ansätz used in this QAOA, shown in Fig. <ref>, comprises Pauli rotation gates including R_XX and R_ZZ. For different n and L, we calculate the number of Pauli paths s with weight s≤ M and non-zero contribution, shown in Fig. <ref>. Our numerical findings reveal that the number of Pauli paths is significantly smaller than the upper bound Poly(n)2^M, as suggested by the theoretical analysis. In the case of a fixed M, as n increases, the number of non-zero contributing Pauli paths will increase in a moderate manner. By Lemma <ref>, it is worth noting that M corresponds to a Normalized Root-Mean-Square Error bound (NRMSE) given by √(𝔼_θℒ-ℒ^2)/H_∞ under a fixed noise rate. Setting λ=0.2, we observe that for a given NRMSE bound, the number of paths first increases and then decreases with the increase of L, which aligns with the findings in Ref. <cit.>. On the other hand, M also corresponds to a noise rate λ under a fixed NRMSE bound. With NRMSE bound 0.0821, M takes the values M=50,48,46,44 for λ=0.2,0.21,0.22,0.23 (two significant digits), respectively. From this, it can be seen that the noise rate has a significant impact on computational complexity. § DETAILS ABOUT SIMULATION ON IBM'S EAGLE PROCCESSOR In Ref. <cit.>, IBM has reported experiments on a 127-qubit Eagle processor and demonstrated the measurement of accurate expectation values. The benchmark circuit used was the Trotterized time evolution of a 2D transverse-field Ising model, which was designed to mirror the topology of the Eagle processor. The time dynamics of the system are governed by the Hamiltonian H=-J∑_⟨ i,j⟩ Z_iZ_j+h∑_i X_i, where J is the coupling strength, h is the transverse field strength, and ⟨ i,j⟩ denotes the nearest-neighbor qubit pairs. Spin dynamics can be simulated through the first-order Trotterized time evolution of the Hamiltonian, which is given by U(τ)=exp-iτ H=∏_⟨ i,j⟩expiτ J Z_iZ_j∏_i exp-iτ h X_i+τ^2=∏_⟨ i,j⟩R_Z_iZ_j(-2Jτ) ∏_i R_X(2hτ)+τ^2, in which the evolution time T is discretized into N Trotter steps, with a single step evolution time of τ = T/N. The Trotterized time evolution is implemented by the ansätz shown in Fig. <ref>, in which a single step is composed of one layer of R_X gates and three layers of R_ZZ gates. The initial state is set as ρ=|0⟩⟨0|^⊗ 127. For simplicity, IBM chooses θ_J=-2Jτ=-π/2 and considers θ_h=2hτ to be in the range [0,π/2]. In the implementation of the simulation algorithm, we initially find all the Pauli paths that satisfy the condition s≤ M and have a non-zero contribution, using the back-propagation method described in Appendix <ref>. Subsequently, we transform these Pauli paths into trigonometric polynomials according to Eq. (<ref>), Prop. <ref> and Lemma <ref>. Finally, we calculate the expectation value by substituting different variables for trigonometric polynomials and summing them. In Fig. <ref>, we compare the results of our method with IBM's experiment results both before and after Error mitigation (zero-noise extrapolation). In order to compare the unmitigated results, we employed a classical optimizer to minimize the distance between experimental dataset {(θ_h,y_θ_h)} and our approximate noisy cost function ℒ, formalized as λ=min_λ√(∑_(θ_h,y_θ_h)∈data setℒ(θ_h)-y_θ_h^2). The classical optimizer is SLSQP, and integrated in scipy package <cit.>. The utilized circuits in Fig. <ref> are as follows: * In Fig.(a)-(c), the Trotter step is set as N=5, corresponding to a circuit with depth L=20. * In Fig.(d), the Trotter step is set as N=20 and there is an additional layer of R_X gates applied at the end of the circuit, corresponding to a circuit with depth L=21. * In Fig.(e), the Trotter step is set as N=20, corresponding to a circuit depth with L=80. * In Fig.(f), we set the rotation angle of R_ZZ gates as θ_J=-π/4 and the Trotter step as N=20, corresponding to a circuit depth with L=80. Additional information and the runtimes are presented in Table <ref>:
http://arxiv.org/abs/2306.04879v1
20230608021858
Augmenting Hessians with Inter-Layer Dependencies for Mixed-Precision Post-Training Quantization
[ "Clemens JS Schaefer", "Navid Lambert-Shirzad", "Xiaofan Zhang", "Chiachen Chou", "Tom Jablin", "Jian Li", "Elfie Guo", "Caitlin Stanton", "Siddharth Joshi", "Yu Emma Wang" ]
cs.LG
[ "cs.LG" ]
Influence of physical interactions on spatiotemporal patterns David Zwicker [email protected] July 31, 2023 ============================================================= Efficiently serving neural network models with low latency is becoming more challenging due to increasing model complexity and parameter count. Model quantization offers a solution which simultaneously reduces memory footprint and compute requirements. However, aggressive quantization may lead to an unacceptable loss in model accuracy owing to differences in sensitivity to numerical imperfection across different layers in the model. To address this challenge, we propose a mixed-precision post training quantization (PTQ) approach that assigns different numerical precisions to tensors in a network based on their specific needs, for a reduced memory footprint and improved latency while preserving model accuracy. Previous works rely on layer-wise Hessian information to determine numerical precision, but as we demonstrate, Hessian estimation is typically insufficient in determining an effective ordering of layer sensitivities. We address this by augmenting the estimated Hessian with additional information to capture inter-layer dependencies. We demonstrate that this consistently improves PTQ performance along the accuracy-latency Pareto frontier across multiple models. Our method combines second-order information and inter-layer dependencies to guide a bisection search, finding quantization configurations within a user-configurable model accuracy degradation range. We evaluate the effectiveness of our method on the ResNet50, MobileNetV2, and BERT models. Our experiments demonstrate latency reductions compared to a 16-bit baseline of 25.48%, 21.69%, and 33.28% respectively, while maintaining model accuracy to within 99.99% of the baseline model. § INTRODUCTION Neural networks (NNs) underpin many machine learning (ML) systems, achieving state-of-the-art (SOTA) performance across a wide range of tasks, including computer vision <cit.>, natural language processing <cit.>, and generative models for text <cit.> and images <cit.>. However, these remarkable capabilities incur substantial compute and memory costs making these models challenging to deploy at scale while guaranteeing quality of service. These challenges are further exacerbated by the increasing proliferation of ML across tasks <cit.>. Overcoming these challenges requires resource efficient models that balance deployment costs against quality of service (QoS) metrics (e.g., latency and accuracy). Researchers have addressed this need using a variety of techniques including: hardware-efficient NN designs <cit.>, pruning <cit.>, and quantization <cit.>. Among these, quantization offers the simultaneous benefit of reducing the model footprint, enabling cheaper compute primitives, and reducing NN inference latency with the corresponding reduction in compute-energy. By perturbing model parameters from their trained values, quantization can degrade model accuracy. Most often, this is mitigated by either incorporating quantization during the initial training or additional training of the NN with quantized parameters, collectively referred to in the literature as quantization aware training (QAT) <cit.>. However, by updating model parameters and subsequent revalidation, and requiring training/finetuning data, QAT incurs significant overheads during model deployment. Post-training quantization (PTQ) aims to avoid this by determining quantization scales and rounding schemes either on a small calibration dataset or in a data-free manner, minimizing changes to model parameters. This trades off quantization complexity and model revalidation efforts against the accuracy of the quantized model <cit.>. Recognizing the benefits from model quantization, commercially available NN accelerators such as NVIDIA GPUs <cit.> or Google TPUs <cit.> support quantized operations at various bit-widths, e.g. int4, int8, fp8, fp16, fp32 or fp64, to facilitate efficient NN inference. Maximally exploiting these hardware capabilities is challenging in practice because different NN layers and operations need to be configured to different bit-widths to best balance model accuracy against serving efficiency. Since the search space of all possible bit-width choices is exponential with the number of layers (or tensor slices for finer-grained approaches), this presents a significant challenge for rapidly deploying quantized NNs while guaranteeing QoS. QAT tackles that challenge by: (i) training bit-widths alongside other model parameters, given model size constraints <cit.>, (ii) using black-box reinforcement learning solutions to determine bit-widths <cit.>, or (iii) using auxiliary metrics to reduce the search space <cit.>. The increased complexity associated with PTQ has typically resulted in research either: (i) ignoring mixed precision PTQ entirely and quantizing the model with a single bit-width <cit.>, (ii) used Pareto frontier methods based on Hessian sensitivity and model size <cit.>, or (iii) used integer programming <cit.> to determine mixed precision configurations. While the model fine-tuning involved in QAT may introduce computational overhead compared to the simpler PTQ techniques, PTQ often results in a prohibitive quality gap for the same level of model compression <cit.>. This quality compromise prevents the broader adoption and deployment of PTQ-based quantization methods. Existing mixed precision PTQ approaches assume the numerical precision of different layers are independent, i.e., that the decision of quantizing one layer is made regardless of other layers. As we will show, this assumption results in sub-optimal quantization configurations. Prior efforts are unable to effectively select insensitive layers to quantize, with some quantizing more sensitive layers resulting in a poorer quality than what is acceptable in production. The situation is further exacerbated when models are trained and deployed by different entities, where training accuracy is chosen without considering any subsequent quantization. In this case, any quantization-induced drop in accuracy might be intolerable. To tackle the challenges associated with mixed precision PTQ, this paper develops a unified method applicable to multiple model types (large scale, small scale, convolutional, and transformers) and data modalities (vision and text). Our approach enables deploying floating-point ML models to commercially available hardware with no manual intervention (see Figure <ref>). As our primary contributions: (i) we demonstrate the ineffectiveness of the layer-wise sensitivity metric and introduce a novel metric that combines second-order information with inter-layer dependencies, (ii) we propose a guided bisection search to identify optimal quantization configurations while maintaining a production-level accuracy, and (iii) we evaluate our technique experimentally on convolutional vision models and a transformer-based language model and show reductions of model footprints and inference latency given tight accuracy constraints. We demonstrate latency reductions of 25.48% (ResNet50), 21.69% (MobileNetV2), and 33.28% (BERT) while maintaining model accuracy within 99.99% of the baseline model on a calibration dataset. § RELATED WORK To find mixed precision quantization policies for QAT <cit.> use reinforcement learning with feedback from a hardware accelerator , reporting a 1.4-1.95× improvement to latency and 1.9× improvement to power over a baseline eight-bit integer model with comparable accuracy. Gradient-based QAT to learn the precision on weights and activations have shown considerable success on models trained on the ImageNet task, with multiple competitive results on sub-5 MB models <cit.>. When the numerical precision cannot directly be learned, alternative approaches typically employ a surrogate metric to determine layer importance or sensitivity and allocate precision accordingly. One example of such an approach was presented by <cit.> propose to use the mean of the Hessian trace to determine layer sensitivity and develop a Pareto frontier of all model quantization configurations for use in QAT. They reduce the size of a ResNet50 to 7.99MB while achieving 75.76% accuracy. Due to the improvement to model compression observed during mixed-precision QAT, recent work has also studied the feasibility of applying mixed-precision quantization to PTQ. <cit.> investigate how quantization impacts the model loss landscape, observing flat separable structures for mild quantization and highly non-separable, steep curvature, for low bit-width quantization. Building on this they devise a three step method to improve PTQ: (i) determine the quantization step that minimizes a norm of the quantization error of the individual layers, (ii) use quadratic interpolation to approximate an optimum quantization scale, and (iii) jointly optimize the parameters of all layers acquired on the previous step by applying a gradient-free optimization method. In a similar way <cit.> theoretically analyze the impact of the rounding decision in during quantization and formulate it as a binary optimization problem (round up vs round down). Their proposed solution uses a layer-wise local loss, which can be optimized using a relaxation method for improved PTQ performance. <cit.> demonstrate int4 and int8 PTQ on large Transformer-based models by using fine-grained quantization and layer-wise data-independent knowledge distillation. <cit.> introduce a mixed precision PTQ scheme that employs Hessian estimations, similar to previous QAT methods  <cit.>. To estimate the Hessian, the authors extract a distilled dataset from the unquantized model using batchnorm matching, which makes this inapplicable to transformer-based models. <cit.> quantize a model by updating its parameters to minimizes the error between the quantized layer output and full precision output, fine-tuning batchnorm parameters. They formulate the allocation of precision on a per-layer basis as an integer linear programming problem, with the cost being a function of the estimated model footprint and the accuracy. This method assumes strong inter-layer independence, and changes the model weights as well as batchnorm parameters, blurring the distinction between QAT and PTQ. This integer programming approach has also been adopted by other works such as <cit.>. Alternative approaches to determining layer sensitivity have also been studied, such as signal-to-quantization noise <cit.> and Fisher Information <cit.>. These metrics are used in a similar fashion, to construct an ordered list of layer sensitivity to facilitate a search for optimized mixed-precision quanization configurations. To the best of our knowledge, <cit.> are among few prior work examining inter-layer dependency for PTQ. They phrase the quantization process as a network-wise larger scale combinatorial optimization problem of discrete variables and enable efficient solution through various regularization techniques. However, they do not consider mixed precision and their accuracy degrades notably for the four bit configuration with eight bit top and bottom layers. § METHOD Figure <ref> provides and overview of our PTQ methodology. Initially, we determine per-layer sensitivity by approximating each layer's Hessian trace and inter-layer dependencies by assessing the impact of pairwise quantization across the model layers. We consolidate these into a single metric to establish an ordered sensitivity list. Next, we employ an bisection search to determine the sensitivity thresholds to facilitate a bit-width assignment to each layer that still meets the QoS requirements. The right side of the figure illustrates the model quantization process, optimizing the rounding scheme for weights <cit.>, and employing a percentile based calibration scheme for the activations <cit.>. §.§ Quantization Fixed point quantization often termed integer quantization reduces the precision of numerical values in a model for a corresponding decrease in storage and compute requirements. This is typically achieved by applying clipping and rounding operations to the original floating-point values, often formulated as: Q(𝐱) = round(clip(α·𝐱 ) · 2^b-1) · 2^-(b-1)·α^-1. Here, Q is the quantization function and 𝐱 is the floating point value. The clipping function saturates values exceeding the thresholds to their corresponding extrema( minimum -1 and maximum 1), b is the bit-width, and α is the quantization scale. To ensure compatibility with most commercially available hardware, we enforce that all operands (activations/weights) in a matrix multiply (matmul) have the same bit precision. For weights, we employ fine-grained quantization, where the rounding function and scale parameters are determined for each tensor dimension, e.g., per-channel, per-filter, or per-embedding. Building on previous work on PTQ, parameters remain unchanged and instead we adapt the rounding function <cit.>. Our work minimizes the Constrained Absolute Sum of Error (CASE) by modifying the rounding direction for the values contributing the most to the CASE for that matmul <cit.>. The scale (α) for the weights are set based on the minimum and maximum observed along the tensor dimension. We determine a single scale for activations using single forward pass with a calibration set (a subset of elements from the training data). We employ a percentile-based method to determine the quantization scale for the activations <cit.>, on a per-layer basis. §.§ Sensitivity Measures The space of possible configurations for a quantized model is exponential with the number tensors. Consider a ResNet50 with three different configuration options just for the parameters (e.g. bit-widths), this results in 3^50 possible quantization configurations. Exhaustively evaluating these configurations is not practical for modern workloads. Consequently searching through this space efficiently is critical to deploying quantized models. The use of an informative sensitivity metric can reduce this vast space, making it practical to search for performant configurations. One of the most commonly used sensitivity metric employs estimations of the Hessian, which pertains to the local curvature of a function. This choice is informed by theory that model accuracy is robust to perturbations in values that occupy flat regions of the loss function (low local curvature). However, for those values that occupy regions of high local curvature (sharp), small perturbations can have an exaggerated impact on model accuracy <cit.>. One way of estimating the local curvature, uses the Hessian of the loss function, which comprises second-order partial derivatives of the loss. Rather than directly evaluating the Hessian, which is computationally prohibitive, we approximate the trace using Hutchinson's algorithm as seen in related work <cit.>. We define a Hessian-based metric for the i layer of a network as: ℰ_i^Hessian = 𝐄[tr(L(𝐱, 𝕎)/∂𝐰_i^2)]. Where tr is the trace operator, L the model's loss function, 𝕎 the set of all considered tensors (e.g., weights/activations) and 𝐱 the calibration data. Higher ℰ_Hessian values signify increased local curvature of the loss function, implying greater model sensitivity to parameter changes. Sorting by ℰ_Hessian gives an ordering of the ease of layer quantization. Given an accurately ordered layer sensitivity list, a bisection search-like method can efficiently determine layer quantization configurations. However, as shown in Figure <ref>, a bisection search yields subpar results with layers ordered by the Hessian compared to a sequential (progressive) search algorithm. The progressive search, sequentially evaluates the suitability of assigning each layer a bit-width using cumulative model degradation as the assignment criterion (see Supplementary Materials <ref> for pseudo-code). We test for two orders in which layers are evaluated, first prioritized by the sensitivity metric ℰ_Hessian or randomly ordered. We attribute the performance discrepancy between progressive and bisection searches to incorrect ordering of high sensitivity layers. This misclassification changes ordering and is recoverable by the progressive search but catastrophic for the bisection method. As Figure <ref> demonstrates, the performance of progressive search is competitive with the Hessian-guided search even with a randomly ordered sensitivity list, significantly outdoing the Hessian-guided bisection search. However, with a correctly ordered sensitivity list, both search methods should yield identical configurations. Since assuming layer-wise independence does not produce an accurate ordering of the final per-layer sensitivity, we augment our sensitivity metric by estimating pairwise-layer sensitivities. Second order methods become cost prohibitive for this and Hutchinson's algorithm only captures the impact of the diagonal elements of the Hessian, making it unsuitable for our needs. Instead, we estimate multi-layer dependencies by directly quantizing the layers in a pairwise fashion: ℰ_i^InterLayer = ∑^l_j L(𝐱, 𝕎^i,j) - max( L(𝐱, 𝕎^i), L(𝐱, 𝕎^j)), 𝕎^i,j = {𝕎∖{𝐰_𝐢, 𝐰_𝐣}, Q(𝐰_𝐢), Q(𝐰_𝐣) }. Here, we sum the excess degradation incurred given the interaction between two layers. We define excess degradation to mean the difference in the loss (L) between the jointly quantized loss and the single layer quantized loss per-layer. We clip the minimum ℰ_i^InterLayer to 0, disregarding any negative values. We then normalize and scale ℰ_i^InterLayer to combine it with the ℰ_i^Hessian as: ℰ_i^AugHessian = ℰ_i^Hessian + βℰ_i^InterLayer, β =𝐄[ℰ_i^Hessian]/𝐄[ℰ_i^InterLayer]. Figure <ref> visualizes these metrics as well as their combination. Figure <ref> (top) shows the excess degradation from quantizing NN layers, pairwise, for three models. For both the vision models, the early and late layers show a higher sensitivity, while the transformer based BERT exhibits higher sensitivity towards the center. As seen, the Hessian-based sensitivity measure does not result in the same ordering, where e.g., the impact of quantizing the last layer is underestimated. §.§ Search for Quantization Configurations We use a bisection method to determine the quantization configuration with 𝒪(blogN) model evaluations. Here, N is the total number of layers and b the number of available quantization bit-widths. We implement this search on a sensitivity list sorted based on the augmented sensitivity measure (ℰ^AugHessian), to determine the threshold sensitivity value corresponding to different quantization levels. We evaluate the quantized configuration using the same calibration set used to determine scale parameters. The bisection search iteratively updates the threshold value, and thereby the quantization configuration, by expanding or decreasing the number of quantized layers depending on if the accuracy target is achieved. We progressively determine the sensitivity threshold for each available precision setting, starting form the highest (e.g., 8-bits) to the lowest (e.g., 4-bits). Pseudocode for the bisection algorithm is provided in Supplementary Materials Algorithm <ref>. § EXPERIMENTS We evaluate our proposed method on the ImageNet <cit.> and SQuaAD <cit.> datasets, using ResNet50 <cit.>, MobileNetV2 <cit.>, and BERT <cit.>. ResNet50 (ImageNet) and BERT (SQuAD) are commonly accepted dataset-model combinations from the MLPerf inference suite <cit.>[<https://mlcommons.org/>]. We show results on MobileNetV2 to demonstrate the versatility of our method and performance on small edge models. For calibration, determining the sensitivity, and guiding the search we randomly sample 4096 examples from the original training data which we use for all steps. For activation calibration we set quantization scales based on the 99.999 percentile value observed during a forward pass of the calibration set through a model with quantized weights. We also improvements to the quantization performance for MobileNetV2 on adding a layer size penalty. We estimate latency by benchmarking key kernels like and at various numerical precisions on A100 GPUs, using an inference batch-size of one. Directly capturing the interplay between memory hierarchy, bus-speeds, compute-utilization, and compiler optimizations. We identified the top-performing kernels for specific tensor shapes and precisions using the CUTLASS <cit.> profiler and optimizer. This data was then used to estimate deployment latencies for different multi-precision models. Our results (Tables <ref>, <ref>, and <ref>) show linear model size reduction with bit quantity and reflect the complex deployment interactions arising from latency reductions. We present our experimental results and contextualize them with respect to other work in Tables <ref>, <ref>, and <ref>, summarizing absolute accuracy, model size, latency, as well as relative measures. The tables provide results for two search settings: a 99% and 99.9% accuracy target (and 99.99% for BERT). Our method delivers competitive model latency and model compression while exceeding the accuracy delivered by other techniques that result in similarly compressed models. Some entries in the table have additional annotations, for fair comparison across techniques. We distinguish between QAT and PTQ by indicating the presence or absence of finetuning (). We always use the reported baseline indicating relative drop in accuracy using ^*. We use ^† to indicate that models quantized first and last layers to 8 bit, ^ to indicate a manual rerun of light pipeline, and ^⋆ to indicate that the method implements batch-norm finetuning. Across all search targets, the augmented Hessian sensitivity metric consistently improves upon model latency compared to the pure Hessian metric. For ResNet50, targeting a 99.9% accuracy, our quantized model attained the highest accuracy with 99.9% on the calibration set and 99.95% on the complete ImageNet validation set, while delivering a 25.88% reduction in latency. Similar outcomes were observed for MobileNetV2, where no other model exceeded the 99% accuracy threshold compared to the unquantized baseline. Our method delivered up to 99.75% accuracy while reducing latency by 21.69%. When quantizing BERT models, we observed a 16.93% latency difference between the accuracy targets of 99% and 99.99%, achieving the highest accuracy among quantized models while still improving model serving latency. Figure <ref> shows the performance difference arising from the use of the two sensitivity metrics. For all configurations, the model derived from the augmented Hessian occupies a superior position on the accuracy-latency frontier. Ablation studies show the impact of using only inter-layer dependency to guide search (Supplementary Materials section <ref>). While relaxed target accuracies the two metrics result in similar model serving latency, with more stringent targets the augmented search metric improves upon the Hessian by 5-15% across all models. We provide more insight on the difference between the two sensitivity metrics in Figure <ref> in Supplementary Materials. Figure <ref> shows a detailed bit allocation break down for all three models. The major difference between using only the Hessian for search guidance vs. the augmented Hessian is that more layerer are quantized to eight bits, especially visible for early layers. Additionally we show the difference between a 98% and 99.99% accuracy target, which manifests itself as more layers quantized to four bits for the augmented Hessian sensitivity. Table <ref> in the Supplementary Materials shows how many evaluations our bisection search took for the 99% and 99.9% accuracy targets in Figure <ref>, the values are aligned with the theoretical expectations of 𝒪(N) where N is the number of layers. With an average of only six evaluations our bisection search is significantly faster than sequential search. Limitations We have not evaluated the impact of mixed-precision kernels e.g., 4W8A, but we do not foresee complications arising from this for our approach. Our latency estimates are currently pessimistic since they do not capture the impact of kernel/operator fusion. Computing the excess degradation requires l · (l - 1)/ 2 + l (l number of layers) model evaluations. While these can be parallelized and batched, these evaluations might still be costly for larger models. Additionally, estimating the trace of the Hessian remains a computationally intensive task. Although this has not limited us for the models we evaluated, but might impact applicability for significantly larger models. Because we use a calibration dataset at multiple steps, quantization performance will strongly depend on the alignment between the calibration data and the evaluation/real world data. Additional research is needed to implement this in a completely data-free fashion <cit.>. § CONCLUSIONS We introduce a practical mixed precision PTQ pipeline for efficiently quantizing floating point NN models while maintaining a target accuracy on a calibration dataset. Our technique calibrates the quantizer scales and adapts the weight rounding scheme but does not adapt any of the original model parameters (including batch-norm parameters). We demonstrate the limitations of assuming layer independence in estimating layer sensitivity and address this using a new sensitivity metric that also captures the pairwise interaction between multiple quantized layers. This improved metric enables us to use a bisection-search to determine quantization configurations that outperform the unaugmented sensitivity metric. Our method is demonstrated across small (MobileNetV2), medium (ResNet50), and large-scale (BERT) models, applicable to both vision and text data modalities. It achieves latency improvements ranging from 25-33% with minimal impact on accuracy. On an anverage, we require six model evaluations to find these quantization configurations across the tested models. Broader Impacts Our mixed-precision PTQ can deploy models with low latency creating positive impacts such as improved accessibility or reduced energy consumption (contributing to sustainability and cost-effectiveness). However we also consider risks of negative impacts such as limited interpretability mostly due to a lack of research on the interpretablilty of quantized models (reducing transparency and trust) and potential bias and fairness issues which are actively researched <cit.>. plainnat § SUPPLEMENTARY MATERIALS §.§ Error Bars In Figure <ref> we show error bars over five trials where we changed the calibration and evaluation data of our bisection search to analyze the variance of our results. For ResNet50 we show that augmented Hessians are consistently better than pure Hessian information where the standard deviation is increasing which higher accuracy targets. For the MobileNetV2 the story roughly holds true however the latency mean using Hessian information for the highest accuracy target is lower. We contributed that the higher variance where by small uninformed random perturbation result in better models. §.§ Greedy Search We also use the Hessian and our augmented Hessian in combination with a greedy search method (outlined in  <ref>) to analyze a potential upper bound for PTQ quantization since the greedy method is progressive and potentially can correct for errors in the sensitivity ordering. Table <ref> shows the resulting latency percentages compared to a 16-bit floating point baseline for ResNet50 and MobileNetV2 given Hessian and augmented Hessian layer ordering. Generally the performance is close to the bisection search (see Table <ref> and <ref>) for both the Hessian and augmented Hessian underlining the contribution of the augmented Hessians even in the greedy setting. We have to note that the ResNet50 with a 99.9% target performed worse in a single greedy run than for our bisection search, which we attribute to the progressive nature of the greedy search, e.g. during the search layers are individually added and discarded wherein the bisection search quantizes multiple layers at a time so that cross layer interactions have a chance to compensate for accuracy degradations. Our main takeaway here is that a bisection search with augmented Hessian can perform a quick search (see Table <ref> for number of evaluations) which results in best possible accuracy. Note that the greedy search took at least N evaluations where N is the number of layers which is about 8× more evaluations than the bisection search. §.§ Search with Inter-Layer Dependencies Given that the inter-layer dependencies augment the Hessian in a meaningful way and improve PTQ we also ran a bisection with only inter-layer dependencies. The resulting latencies for ResNet50, MobileNetV2, and BERT are in Table <ref>. Compared to results from Tables <ref>, <ref>, and <ref> PTQ without any Hessian information results in roughly 10% slower latencies. Highlighting the importance of combined Hessian and inter-layer information. §.§ Sensitivity Threshold §.§ Observed Search Length §.§ Additional Comparison to Other Work §.§ Search Algorithm Pseudo Code
http://arxiv.org/abs/2306.11671v1
20230620164834
Additive GaN solid immersion lenses for enhanced photon extraction efficiency from diamond color centers
[ "Xingrui Cheng", "Nils Kolja Wessling", "Saptarsi Ghosh", "Andrew R. Kirkpatrick", "Menno J. Kappers", "Yashna N. D. Lekhai", "Gavin W. Morley", "Rachel A. Oliver", "Jason M. Smith", "Martin D. Dawson", "Patrick S. Salter", "Michael J. Strain" ]
physics.optics
[ "physics.optics", "physics.app-ph", "quant-ph" ]
Critical percolation in the ordering kinetics of twisted nematic phases R. A. L. Almeida July 31, 2023 ======================================================================= Effective light extraction from optically active solid-state spin centres inside high-index semiconductor host crystals is an important factor in integrating these pseudo-atomic centres in wider quantum systems. Here we report increased fluorescent light collection efficiency from laser-written nitrogen vacancy centers (NV) in bulk diamond facilitated by micro-transfer printed GaN solid immersion lenses. Both laser-writing of NV centres and transfer printing of micro-lens structures are compatible with high spatial resolution, enabling deterministic fabrication routes towards future scalable systems development. The micro-lenses are integrated in a non-invasive manner, as they are added on top of the unstructured diamond surface and bond by Van-der-Waals forces. For emitters at 5 μm depth, we find approximately 2× improvement of fluorescent light collection using an air objective with a numerical aperture of NA =0.95 in good agreement with simulations. Similarly, the solid immersion lenses strongly enhance light collection when using an objective with NA =0.5, significantly improving the signal-to-noise ratio of the NV center emission while maintaining the NV's quantum properties after integration. 25em *Both authors contributed equally to this work § INTRODUCTION Solid state quantum defects in bulk crystals have emerged as promising systems for applications such as quantum information science <cit.>, quantum sensing <cit.> and quantum imaging <cit.>. These defects are particularly attractive for such applications due to their long spin coherence times, room temperature stability, and the possibility for scalable fabrication and integration into existing electronic and photonic technologies. Among these solid-state quantum defects, the negatively charged nitrogen vacancy center (NV^-) in diamond has gained particular attention due to beneficial optical and spin properties. NV^- centers are point defects in the diamond lattice that consist of a substitutional nitrogen atom adjacent to a carbon vacancy site. They exhibit bright and stable photoluminescence (PL) at room temperature, as well as long-lived electron spin states which can be used as auxiliary qubits to nearby nuclear spins <cit.>, nanoscale magnetic field sensors <cit.> or nodes in a quantum network <cit.>. In recent years there has been significant progress in developing techniques for fabricating NV centers in diamond, including ion implantation <cit.>, and low energy electron beam irradiation <cit.> with post-treatment annealing. Laser writing has emerged as a particularly attractive fabrication technique due to the unique non-linear light-matter interaction mechanism so that single NV^- centers can be fabricated deterministically with minimal residual lattice damage and precision positioning inside host materials <cit.>. The collection efficiency of photons emitted by diamond color centers is a major limitation for many quantum applications, affecting other interesting emitters such as the silicon (SiV), tin (SnV) or germanium vacancy (GeV) center similarly <cit.>. The high refractive index mismatch between diamond and the surrounding medium results in a considerable proportion of emitted photons being reflected internally, leading to a significant loss of emission signal from any emitter inside the crystal. In order to address this limitation, several techniques have been developed to efficiently address atomic defects inside single crystalline diamond, including cavity coupling <cit.> and nanostructuring <cit.>. A particularly common approach to increase the light collection efficiency is the fabrication of a solid immersion lens (SIL) around a defect, typically by using a focused ion beam (FIB) to carve the diamond into a lens shape <cit.>. A standard milling process is stated to take around 1 h per lens <cit.> and is therefore challenging to scale, but significant collection improvements of up to 10× have been reported. Even though no detrimental effects of the milling process on the coherence properties of NV^- centers are reported <cit.>, there are some concerns that the ion bombardment might cause additional strain around the defect of interest <cit.>. Recent work at the forefront of NV-based quantum computing and quantum networking reports the use of such monolithic SILs for faster measurement time and reduced error rate due to enhanced signal to noise ratio, illustrating the practical significance of this method <cit.>. In this paper the advantages of laser written NV^- centers are combined with back-end integration of GaN solid immersion lenses using an additive micro-assembly method, avoiding any damage to the host crystal. This heterogeneous integration approach allows the decoupling of the lens fabrication processing from the vacancy centre definition and selection, while massively speeding up the lens fabrication due to the use of wafer scale compatible parallel wet and dry etching techniques. The GaN lenses are realised using the balanced etch selectivity of photoresist and GaN in inductively coupled plasma (ICP) etching creating high aspect ratio lenses, which cannot commonly be achieved when processing diamond with ICP etching <cit.>. GaN and diamond are closely index matched around the emission wavelength of the NV^- center providing minimal reflection effects at the material interface. The detailed fabrication process is outlined in the following section. § FABRICATION AND LENS INTEGRATION A schematic of the process flow is indicated in Fig. <ref> while corresponding experimental results are shown in Fig. <ref>. Fig. <ref> a) illustrates how the GaN solid immersion lenses are defined and suspended on a strain-optimised heteroepitaxially grown GaN-on-Si chip <cit.> based on the wafer-scale compatible microfabrication process reported in earlier work <cit.>. Initially polymer resist lenses are defined by grayscale lithography and thermal reflow (1), followed by inductively coupled plasma (ICP) dry etching to transfer the lens shape into the 2 μm thick GaN top layer (2). A mesa structure is lithographically defined around the lens and translated into the ca. 2 μm thick AlGaN/AlN buffer layer in an additional ICP etching step, exposing the Si substrate (3). Utilising a SiO_x hard mask, the lens and AlGaN/AlN mesa are laterally suspended by an anisotropic potassium hydroxide (KOH) wet etch followed by hard mask removal (4). The prefabricated GaN SILs are then extracted from their growth substrate using a modified dip-pen lithography system <cit.> under white light illumination (5). A patterned polydimethylsiloxane micro-stamp (6:1 Sylgard 184 PDMS) with around 35 μm extrusion height and 30 μm square lateral size is used. This polymer-based micro-transfer printing technique allows the deterministic release and placement of suspended semiconductor chiplets with sub-micron lateral precision using the reversible adhesion properties of PDMS <cit.>. The fabrication of diamond nitrogen vacancy centers at targeted positions and successive GaN SIL integration is illustrated in Fig. <ref> b). A commercially available electronic-grade single crystalline [100] diamond grown by chemical vapour deposition with nitrogen density <5 ppb is used as substrate. Laser writing is implemented using a regeneratively amplified Ti:Sapphire source producing laser pulses with 150 fs duration, 790 nm wavelength and a repetition rate of 1 kHz which is focused using a high-NA oil immersion objective lens (Olympus 60×, 1.4 NA). Initially a high laser pulse energy is used to break down the diamond lattice and create square graphite markers in regions with low intrinsic color center concentration (I). These surface markers are easily visible in an optical microscope, aiding the localisation of the writing sites and later the transfer printing process, as shown in Fig. <ref> b). A single pulse from the femtosecond laser is focused inside the electronic grade diamond substrate to introduce an ensemble of Frenkel defects at the target position determined by the markers in 5 μm depth (II). Optical aberrations related to refraction at the diamond interface are corrected using a liquid crystal spatial light modulator <cit.>. The sample is then subjected to a 1000 anneal for 3 hours under nitrogen flow to mobilise the vacancies, some of which combine with intrinsic substitutional nitrogen atoms in the diamond lattice to form stable NV centers (III) <cit.>. The laser-processed areas of the diamond sample are initially characterised by collecting the photoluminescence (PL) emission with a home-built confocal microscope using an oil immersion objective (NA = 1.25) (III). A 532 nm continuous-wave (CW) laser (GEM 532) is used as excitation source and the collected epi-fluorescence signal is focused on a single photon avalanche diode detector (SPAD), filtering in the spectral range from 600 nm to 740 nm. The setup is described in more detail in Fig. B.1 in the Supplementary Information. Photoluminescence maps of the created emitters were obtained at room temperature, with an example shown in Fig. <ref> a). As can be seen here, each marker quadrant contains an array of 5×5 writing sites with 5 μm spacing. Each site is irradiated by a single laser pulse, with pulse energy kept constant within a particular row but modified between rows. The laser pulse energy range straddles the narrow transition between the regime of lattice breakdown and graphitisation to the regime of creating vacancy ensembles without any sp^2-bonded carbon content. The in-plane placement accuracy of the NV centers is expected to be within 250 nm with respect to the grid <cit.>. The upper two quadrants of the array typically contain graphitised points while the lower two quadrants show vacancy ensembles. Anti-bunched photon emission is identified by collecting g^2(τ) autocorrelation statistics using a second SPAD and the positions of promising emitters are noted to guide the transfer printing of the GaN micro-lenses. Using the detailed spatially resolved precharacterisation of the photo-emitters, GaN SILs with a radius of curvature (ROC) matching the emitter depth are preselected, removed from their growth substrate and deterministically placed above targeted NV^- centers using the laser written marker structures as alignment guide, as illustrated in Fig. <ref> a) (IV)/(6). The related experimental results after lens release (V)/(7) are shown in Fig. <ref> b). Here, multiple micro-lenses are integrated on a small footprint, with a single GaN lens placed in each quadrant of a pre-characterised marker region. The corresponding PL map shown in Fig. <ref> a) is used to adjust the print position according to the most promising emitter locations. Alignment is mainly limited by image distortions in the PL map and the markers are solely used for visual guidance without applying any numerical methods. Before transfer printing, an ultrasonic solvent clean and a boiling acid treatment in 95 % sulfuric acid mixed with 30 % H_2O_2 with 3:1 volume ratio is applied to the diamond surface to remove any surface debris remaining after laser writing the surface markers. The adhesion of the GaN micro-lens to the diamond surface relies solely on Van-der-Waals forces without the use of any adhesion layers. This type of bonding depends on both μm-scale flatness and nm-scale local roughness of the participating surfaces. The micro-lens height is restricted to the 2 μm thick GaN epilayer to achieve a flat bottom surface <cit.>. As stated in earlier work <cit.> a combination of grayscale lithography and resist reflow is used to fabricate spherical and smooth lens profiles with highly engineered lens dimensions. Fig. <ref> c) shows an atomic force microscopy (AFM) line scan and a true-scale 3D representation of the AFM data of the marked device in Fig. <ref> b), indicating a smooth symmetrical lens with a fitted ROC =9 μm, taking 4 μm overall device thickness into account. After lens integration, we reevaluate the photoluminescence emission from the targeted emitter ensembles though the GaN SILs using air objective lenses. Fig. <ref> d) shows the emission from a graphitised spot in the top left quadrant using an objective lens with NA =0.5. The SIL is expected to contribute a magnification similar to its refractive index n=2.4 if the emitter is placed in the geometric centre of the lens sphere <cit.>. The visible lateral displacement between emitter and lens center accounts to roughly (δ x,δ y)=(1.0,0.4) μm, indicating a real displacement of (Δ x,Δ y)=(0.4,0.2) μm. Our simulations show, that the gross expected collection enhancement is maintained for both collection optics with NA =0.5 and NA =0.95 if the real lateral displacement does not exceed ±1 μm, see Fig. B.3 in the Supplementary Information. The PL map in Fig. <ref> d) reveals significant background fluorescence from the edges of the lens but the much weaker light emission from the center of the GaN lens is largely rejected by the confocal microscope arrangement. The photoluminescence emission spectrum of the GaN/AlGaN/AlN layer stack under green CW laser excitation is broad, covering 550-800 nm wavelength at room temperature, compare Fig. B.7 in the Supplementary Information. Therefore spectral filtering can only be partially applied to isolate the NV^- emission. Noticeably the apparent emitter depth increases due to the printed SIL in agreement with expectation: Without lens, refraction at the planar diamond interface causes the apparent emitter depth to lie much closer to the diamond air interface than its actual position inside the crystal <cit.>. Overall, we demonstrate three dimensional deterministic matching of emitters and SILs using targeted color center laser writing, grayscale lens-shape control and micro-transfer printing, moving towards novel micro systems development with scaling potential. § SIMULATED LIGHT EXTRACTION EFFICIENCY In order to quantify the potential collection efficiency improvement from NV^- centers caused by the transfer printed GaN SILs, three different scenarios are investigated with finite difference time domain simulations (FDTD): (i) a flat diamond substrate, (ii) a flat diamond substrate with added GaN micro-lens and (iii) a monolithic hemispherical solid immersion lens fabricated around the emitter, as can typically be achieved by FIB milling. Fig. <ref> a) illustrates all three cases. The GaN lens in simulation (ii) closely resembles the fabricated micro-lens shown in Fig. <ref> c). The emitter position overlaps with the midpoint of the lens sphere for both the GaN and the diamond lens. The commercial simulation software "Ansys Lumerical FDTD" is used with absorbing boundary conditions. In all three scenarios, the two electric dipole moments of the nitrogen vacancy center are modeled by two dipole emitters perpendicular to the tilted NV^- symmetry axis at 5 μm depth below the (100) diamond surface. Fig. <ref> b) shows the intensity cross section through each simulation region. In the case of the flat diamond surface the emitted wavefront exhibits strong curvature after passing the diamond-air interface, indicating large angles of the k-vector towards the surface normal caused by refraction. Additionally, total internal reflection (TIR) traps significant amounts of the upwards traveling light in the diamond slab. In contrast, both GaN and monolithic SILs allow the wavefront to maintain its shape after passing through the semiconductor materials because their radius of curvature (ROC) is matched to the position of the dipole emitter. Additionally, the GaN micro-lens visibly reduces the negative impact of total internal reflection while the FIB lens can eliminate TIR fully. The upwards directed far-field emission pattern is recorded using monitors positioned at the green lines in Fig. <ref> b), assessing the theoretical collection efficiency and its dependence on the numerical aperture (NA) of the collection optics. The result of the far field projections is displayed in Fig. <ref> c). As expected the intensity distribution is comparably weak and spread over wide angles if no SIL is in place, while both SILs increase the maximum light intensity by about one order of magnitude. Due to the buffer layer thickness and additive nature of the assembly process, the far field above the GaN micro-lens indicates strong improvement primarily when collection optics with low numerical aperture (NA <0.6) are considered. For collection optics with NA >0.6, not much additional gain in absolute collection efficiency is expected. Lower numerical aperture collection optics could be used to address arrays of multiple quantum emitters in a significantly enlarged field of view, which might offer a route to scaling of such quantum systems using spatial light modulators and free-space emitting photonic integrated circuits for beam delivery <cit.>. § ANALYSIS OF SIGNAL ENHANCEMENT The improvement of PL collection efficiency is assessed by comparing two sites where we combined NV^- center pairs with GaN lenses and show that anti-bunching in the photon statistic is maintained after lens integration. Two different home-built confocal microscopes are used to assess the effects of varying NA of the collection optics (NA = 0.5, 0.95 and 1.25), and both setups are depicted schematically in Fig. B.1 and B.2 in the Supplementary Information. The setups are qualitatively similar, but we cannot directly compare count rates quantitatively between them. Fig. <ref> a) includes PL maps of one writing site which is identified as a NV^- center pair. The PL signal is compared for different objective lens NAs with and without a GaN SIL in place. Prior to SIL integration, the written site is barely visible when imaged with NA =0.5, but with increasing the objective lens NA, the signal to noise ratio (SNR) increases significantly due to more efficient collection from the heavily refracted emitted light. After adding a GaN micro-lens on top of the same emitter, we are now able to clearly resolve the emission using NA =0.5, noting a roughly 5× enhanced count rate when the emitter is pumped to saturation. But the low SNR before lens integration makes it difficult to quantify the improvement with much accuracy. The magnification effect of the SIL additionally separates the emitter more clearly from the emitting surface marker structure in the left part of the PL map. The logarithmic line scans through the emitter PL signal reveal significantly improved SNR when comparing both NA =0.5 and NA =0.95 before and after lens integration. To confirm the compatibility of the GaN micro-lenses with measurements in the quantum regime, g^2(τ) autocorrelation measurements are taken after lens integration. Intensity autocorrelation is a well-established technique often used to verify the existence of single photon sources in solid state physics and goes back to the work of Hanbury Brown and Twiss (HBT) <cit.>. To collect the g^2(τ) statistic, the PL signal is split between a free-space SPAD and a fiber-coupled SPAD with timing jitter Δτ≈500 ps using a 50:50 beamsplitter. For these measurements the objective lens with NA =0.95 is used. A spectral filter in the optical path of one SPAD prevents potential optical cross-talk due to the breakdown flash commonly observed in Si APDs and SPADs <cit.>. Fig. <ref> b) contains the HBT measurement result from the emitter depicted in Fig. <ref> a) and we find 0.5<g^2(0)=0.59<0.66 after lens integration, indicating that the ensemble consists of two closely spaced NV^- centers, which cannot be resolved separately. Fig. <ref> b) also includes a background corrected spectral measurement of the discussed emitter taken through the lens, exhibiting a sharp zero-phonon line (ZPL) centerd at 637 nm and a phonon sideband of approximately 100 nm width, as is characteristic of the NV^- center in diamond <cit.>. The long wavelength transmission edge of the 665/150 nm band-pass filter can be seen around 740 nm wavelength. For the g^2(τ) measurement, a 600 nm long-pass filter is added to remove the first order diamond Raman line from the background signal. Both measurements demonstrate that the photophysics of the emitters remain undisturbed after passing through the GaN SIL. Furthermore, the GaN SIL enhances the photon count rate to enable faster measurements with similar SNR. To quantify this enhancement caused by the SILs, power saturation measurements are taken on two NV^- center pairs before and after SIL printing using the air objective lens with NA =0.95. The results are shown in Fig. <ref> a) and as expected, the count rates of the NV centers increase with increasing laser power up to a certain saturation level, after which the count rates plateau. Pair 1 corresponds to the emitter discussed in Fig. <ref>, while pair 2 is shown in the bottom right quadrant of the region discussed in Fig. <ref>. More information on both emitters and the applied lens profiles can be found in Fig. B.6 in the Supplementary Information. The data is fitted with the sum of two saturation curves derived for two-level quantum systems, allowing both emitters in the focal volume their own saturation power P_sat and intensity I_sat: I(P) = I_sat1· P/P + P_sat1 + I_sat2· P/P + P_sat2 A single saturation curve fit describes the data equally well. With this, (2.2±0.1)× and (1.8±0.1)× enhancement of I_sat is found for pair 1 and pair 2, respectively. Similarly a slight reduction in saturation pump power by a factor of (1.7±0.3)× for pair 1 and (1.2±0.2)× for pair 2 is observed. This is likely due to reduction in the spherical aberration that occurs at a planar interface with high index contrast <cit.>. We do not expect additional substantial narrowing of the point spread function of the pump laser caused by the SIL because objective lens and SIL exhibit similar numerical aperture, compare Fig. B.6 in the Supplementary Information. But the pump efficiency enhancement would likely be more noticeable when using a lower NA objective, because the SIL is then expected to reduce the diffraction limited spot size. These measurement results are compared to the expected collection enhancement derived from the simulations shown in Fig. <ref>. The transmitted light through the detector surfaces (green lines) in the air above the diamond substrate / GaN SIL are averaged between 650 and 750 nm wavelength. The far field projection is similarly averaged over 650, 700 and 750 nm wavelength. As neither transmission nor angular dependency vary strongly with wavelength, both can be multiplied to estimate the light collection efficiency in dependence of the acceptance angle / NA of the collection optics, without the need to weight the simulation result with the spectral density of the NV^- emission. The simulation result is shown in Fig. <ref> b), comparing the planar diamond surface to a GaN micro-lens with its radius of curvature matched to the emitter depth and ideal lateral alignment. The integrated transmission refers to the percentage of total light emitted by the dipole that is caught within the respective angular acceptance cone. The simulations take Purcell enhancement into account, which is found to be <5 % within the given spectral region. The simulations predict an improvement factor of around 2.5× for an objective lens with NA =0.95, which is a bit higher than what is found in the experiment. For both NV^- center pairs the real lateral displacement is less than the predicted critical value of ±1 μm, but vertical misalignment might lead to lower collection enhancement, if the emitter is actually placed deeper inside the crystal than intended. We estimate that an emitter that lays 1 μm too low, could cause the enhancement to drop to around a factor of 2×, compare Fig. B.4 in the Supplementary Information. Note that the SIL placed above NV^- center pair 1 was chosen to have a slightly larger diameter, leading to a larger ROC, for which the simulations predict increased collection at large collection NA, compare Fig. B.5 in the Supplementary Information. As noted before, the simulations for the GaN SIL predict similar overall collection efficiency for objective lenses with NA varying between 0.6 and 0.95, potentially gaining a larger field of view without loss of photon count rate. Taken together, these results provide further evidence that the GaN SILs are effective in enhancing the signal from quantum defects in diamond. Further enhancement could be achieved by placing the color center closer to the diamond surface and increasing the GaN epilayer thickness. In our current work, the 5 μm depth of the NV emitters, the 2 μm thick AlGaN/AlN buffer layer and the epilayer constrained GaN lens height (2 μm) limit the enhancement expected from the SILs, because these parameters affect how much the lens aperture covers the angular space above the emitter. The simulated enhancement factors for ROC-matched GaN lenses combined with emitters at various depths and varying GaN lens height compared to a hemispherical monolithic SIL are depicted in Fig. <ref> c). These results show that a 2 μm high GaN micro-lens is expected to increase the light collection efficiency up to 7× (NA =0.5), 6× (NA =0.7) and 4× (NA =0.95), if the emitter is placed in 1 μm proximity to the surface, compare with the light gold colored curve in Fig. <ref> c). In particular the overall absolute collection is not expected to deviate much between an NA of 0.7 and 0.95, potentially offering larger field of view without any losses if a suitable GaN SIL is used. But we note that with moving both to lower NA and lower emitter depth the confocal rejection of the PL from the GaN/AlGaN/AlN layer stack is likely to decrease. In addition, the GaN epilayer thickness could be increased to e.g. 4 μm, which allows to increase the lens diameter while maintaining the midpoint position of the spherical lens profile. Thus, the effective angular coverage of the lens aperture above the emitter would rise, improving the photon extraction regardless of emitter depth, compare the dark green and dark golden curves in Fig. <ref> c) with the light colored curves. This approach is expected to require tuning of the strain profile in the epilayer including the redesign of the buffer layer <cit.>. Overall, the derived collection efficiency for the planar diamond surface and the monolithic diamond SIL are in good agreement with previous experimental and theoretical work <cit.>, indicating the validity of the simulations. We added the expected enhancement and absolute collection efficiency for various emitter depths and different GaN epilayer thickness both for (100) and (111) crystal orientation in Fig. B.8 in the Supplementary Information, with the overall trends being very similar to what is shown in Fig. <ref> c). § CONCLUSION AND OUTLOOK In summary, we have investigated how additive GaN micro-lenses can be used to enhance the photon collection from diamond color centers, using laser written negatively charged nitrogen vacancy center as an example. To our knowledge, this is the first work that deterministically combines ca. 10 μm large semiconductor micro-lenses with color centres in a foreign host crystal, showing the potential of additive high-NA micro-optical components for quantum technology based systems in general. We find evidence of collection improvement on the order of 2× in good agreement with finite difference time domain simulations. The main advantages that transfer printed GaN solid immersion lenses offer over monolithic diamond hemispheres fabricated with focused ion beam milling are the potentially much faster fabrication speed and their additive nature that eliminates damage to the diamond lattice which could affect the color center properties. Further improvement of photon collection efficiency is expected by placing the color center closer to the diamond-air interface. We find that a dipole emitter in 1 μm proximity to the surface might experience 4-7× collection improvement. Still, the highest possible collection efficiency offered by these additive lenses remains limited in comparison to monolithic hemispheres due to the 2 μm thick buffer layer. This might be mitigated by increasing the GaN epilayer thickness to 4 μm, potentially achieving lenses with larger diameter but same midpoint of the lens sphere, covering a larger angular space above the emitter. When using an objective lens with NA =0.7 in this arrangement, the simulations predict that collection is relatively on par with what can be achieved with a hemispherical diamond lens, using the same collection optics. Deterministic laser writing of color centers has been reported with near unity yield <cit.>, making regular high quality color center arrays possible which could be combined with regularly spaced micro-optical elements for effective collection improvement. The GaN lens fabrication uses highly parallel ICP etching, enabling thousands of device to be fabricated in one wafer run, potentially utilising 6" GaN-on-Si wafer technology. The stamp-based transfer printing process is conducted manually here, but can easily be automated while maintaining μm-precise placement accuracy using optically visible marker structures <cit.>. Combined with a multi-stamp head approach <cit.>, a device throughput of >100-200 devices per hour is conceivable after initial alignment of donor and receiver chip and highly optimised processing. Alternatively continuous roller transfer printing might offer a way to scale the transfer process, but still needs further development in terms of overlay alignment accuracy <cit.>. Further scaling could be achieved by adding multiple GaN micro-lenses on one AlGaN/AlN membrane to create printable arrays, thus reducing the number of transfer print processes. Lens arrays could either be integrated with arrays of deterministically generated color centers or deterministic writing could be performed through the GaN lenses themselves after printing <cit.>. The latter approach could allow the reduction of depth at which vacancies can be created by laser writing and would additionally auto-align the respective color center close to the midpoint of the lens sphere <cit.>. § BACKMATTER §.§ Funding The authors acknowledge funding from the following sources: Royal Academy of Engineering (Research Chairs and Senior Research Fellowships); Engineering and Physical Sciences Research Council (EP/R03480X/1, EP/N017927/1, EP/P00945X/1, R004803/1, EP/M013243/1, EP/T001062/1, EP/V056778/1, EP/L015315/1); Innovate UK (50414); Fraunhofer Lighthouse Project QMag. NKW acknowledges funding of his PhD studentship by Fraunhofer UK. G. W. M is supported by the Royal Society. §.§ Acknowledgement The authors acknowledge Benoit Guilhabert (University of Strathclyde) for his development work on both the transfer print technique and suspension of GaN-on-Si thin films as well as Luke Johnson (University of Warwick) for acid cleaning the diamond sample after annealing. §.§ Disclosures The authors declare no conflicts of interest. § SUPPLEMENTARY INFORMATION This document provides further details on the experimental setups, an analysis of displacement and different lens geometries with FDTD simulations, additional AFM and PL data on the discussed emitters and PL spectra taken on GaN/AlGaN/AlN thin films at room temperature. figuresection §.§ Confocal setups used for photoluminescence measurements The confocal photoluminescence setup that was used to gather the power series on the two doublet emitters in addition to the presented autocorrelation and spectral measurements with a NA = 0.95 air lens is depicted in Fig. <ref>. NA = 1.25 oil lens were used in the same setup to get spatial distribution information of the emitter of interest with best signal to noise ratio. Here, a continous wave 532 nm laser (GEM532) excitation source with 1 mW output power was used, the fluorescence signal was collected back through the same objectives lens and spectral filtering of the excitation laser is applied before detection. Fig. <ref> shows the setup used for measurements with the NA = 0.5 air objective. A continuous-wave 532 nm laser (GEM532) with 6 mW output power was used as excitation source. Photoluminescence maps are taken with both setups. §.§ Displacement of emitter with respect to the mid point of the lens sphere and the influence of an air gap on the collection efficiency In this section we present additional FDTD simulations to investigate the effects of small displacements of two dipole emitters mimicking a NV^- centre in a diamond crystal with (100) surface orientation below a GaN SIL which radius of curvature is matched to the emitter depth of 5 μm. Two separate simulations are run and averaged to account for the two different dipole moments of the NV^- center, one of which is tilted by (90-54.7) ^∘ with respect to the surface normal (the NV axis is tilted by 54.7 ^∘, but both dipole moments are orientated perpendicular to the NV axis). Cross sections and far field projection are taken at λ=700 nm wavelength. The expected collection efficiency is calculated by averaging the transmission through the detector surface above the lens (indicated in green) multiplied with the projected far field distribution from the same detector, choosing λ=650-750 nm wavelength to match the spectral emission region of the NV^- centre. Both transmission and far field projection are found to be insensitive towards wavelength changes in this spectral regime, allowing us to ignore the spectral density distribution of the emission spectrum. Lateral displacement is discussed in Fig. <ref>, vertical displacement in Fig. <ref>. We illustrate the effects of a slightly larger radius of curvature accompanied by a larger diameter of the micro-lens on the collection improvement as function of an air gap between diamond and the AlN bottom surface of the lens platelet in Fig. <ref>. §.§ Additional photoluminescence and AFM measurements of SILs and emitters To provide some additional data from the investigated emitters, Fig. <ref> a) includes a larger field of view of the photo luminescence plots shown in Fig. 4 a) in the main text as well as an AFM line scan of this slightly larger SIL. Fig. <ref> b) includes photoluminescence maps and an AFM line scan of the second doublet emitter discussed in less detail in the main text, while Fig. <ref> c) illustrates the NA dependency of the PL map taken on the graphitised emitter spot discussed in Fig. 3 in the main text. Room temperature spectral measurements of transferprinted GaN lenses and AlGaN/AlN membranes on single crystalline diamond are shown in Fig. <ref>. A 532 nm CW excitation laser is used with a 550 nm long pass filter and 532 nm notch filter in a confocal arrangement. §.§ Detailed FDTD simulation results for emitters at different depth and increased GaN epilayer thickness To provide a more detailed overview of the simulation results for dipole emitters in different depth below the diamond surface and the potential arising from a thicker GaN epilayer, we summarised the absolute collection efficiency and collection enhancement as function of the NA of the collection optics for both (100) and (111) crystal direction in Fig. <ref>. The bottom left plot is already shown in the main text in Fig. 5 c). ieeetr
http://arxiv.org/abs/2306.08170v1
20230613230140
Is Your Wallet Snitching On You? An Analysis on the Privacy Implications of Web3
[ "Christof Ferreira Torres", "Fiona Willi", "Shweta Shinde" ]
cs.CR
[ "cs.CR" ]
Is Your Wallet Snitching On You? An Analysis on the Privacy Implications of Web3 Christof Ferreira Torres ETH Zurich Fiona Willi ETH Zurich Shweta Shinde ETH Zurich =================================================================================================== With the recent hype around the Metaverse and NFTs, Web3 is getting more and more popular. The goal of Web3 is to decentralize the web via decentralized applications. Wallets play a crucial role as they act as an interface between these applications and the user. Wallets such as MetaMask are being used by millions of users nowadays. Unfortunately, Web3 is often advertised as more secure and private. However, decentralized applications as well as wallets are based on traditional technologies, which are not designed with privacy of users in mind. In this paper, we analyze the privacy implications that Web3 technologies such as decentralized applications and wallets have on users. To this end, we build a framework that measures exposure of wallet information. First, we study whether information about installed wallets is being used to track users online. We analyze the top 100K websites and find evidence of 1,325 websites running scripts that probe whether users have wallets installed in their browser. Second, we measure whether decentralized applications and wallets leak the user's unique wallet address to third-parties. We intercept the traffic of 616 decentralized applications and 100 wallets and find over 2000 leaks across 211 applications and more than 300 leaks across 13 wallets. Our study shows that Web3 poses a threat to users' privacy and requires new designs towards more privacy-aware wallet architectures. § INTRODUCTION Web3 has gained tremendous adoption over the past few years. This is mainly fueled by the rise of decentralized applications (DApps) such as the Metaverse, NFTs, and decentralized finance (DeFi). DappRadar.com currently lists over 13,000 DApps across various blockchain platforms <cit.>. A report from 2022 states that NFTs generated 12 Billion USD in trades and that DeFi even reached a value of 127 Billion USD in total value locked on Ethereum <cit.>. The promise of Web3 is the ability to run traditional applications in a decentralized way, thus assuring better transparency and privacy. An important aspect of such decentralized infrastructure are wallets, which act as an interface between decentralized applications and the user. Wallets enable users not only to perform common blockchain operations such as managing their credentials (i.e., public and private key pairs) or signing of transactions, but also operations on DApps such as trading tokens or buying NFTs. All of these operations are provided to the user via a convenient and easy to use interface. There are several wallet operators that act as intermediaries and interact with decentralized apps on the behalf of the user. MetaMask is currently one of the most popular wallet operators with over 10 Million active users <cit.>. While built with the goal of better transparency and privacy, decentralized applications as well as wallets are still based on traditional web technologies, which are prone to privacy issues. Wallets, in particular have access to sensitive user information and are therefore a rich target for attacks. To make matters worse, wallet operators often use centralized providers by default to retrieve information from the blockchain, making them a single point of failure and allowing providers to easily track user activity across DApps. For example, Infura's recent privacy policy update mentions that IP addresses and wallet addresses of users will be collected <cit.>. Since Infura is the default blockchain provider of MetaMask, this means that Infura is capable of linking wallet addresses with IP addresses of millions of users. While users might accept trusting the wallet operators, they may not realize to what extent they are exposing their wallet information to third-parties. Wallets operate by injecting a wallet object into the DOM of every website the user visits. This facilitates the interaction between DApps and wallets. DApps can then simply use JavaScript to access wallet information. However, the browser does not take any particular measures to safeguard the wallet object. Thus, any malicious website, third-party, or browser extension can read this object or use this object to trick users into approving malicious actions (e.g., send assets to an attacker-controlled address). While these attack vectors have been exploited in the traditional web in the past, their prevalence in the context of Web3 is yet unclear. In this paper, we investigate whether wallet extensions are being used to track users online and whether DApps as well as wallet extensions leak the user's wallet address to third-parties. To answer this question systematically, we build a framework that is capable of simulating wallet objects and monitoring access to these objects. Hence, if a website checks the presence of a wallet object in conjunction to several other JavaScript attributes, we deem it as a tracking attempt to fingerprint the user. Moreover, our framework is also capable of automatically interacting with DApps as well as wallets and intercepting any cookies as well as requests made via HTTP and WebSockets. We identify a DApp or wallet to leak the user's wallet address if we find that any of the intercepted cookies or requests include the user's wallet address. Results. We report three main findings. First, of the 100K websites that we analyzed, 1,325 of them track users via wallet objects either directly or via third-party scripts. Second, of the 1,572 DApps that we analyzed, 211 of them leak the user's wallet address to third-parties such as blockchain providers or tracking and analytics platforms. Lastly, of the 100 wallets that we analyzed, 13 wallets leak the user's wallet address to third-parties. All together wallets include over 137 unique third-parties, thereby giving third-parties access to sensitive user information. In summary, our investigation shows that the existing wallet infrastructure is not in favor of users' privacy. Websites are abusing wallets to fingerprint users online, and DApps as well as wallets leak the user's wallet address to third-parties. Ethics Considerations. Throughout our analysis, we took adequate measures to avoid overloading the websites (e.g., limited ourselves to the landing page). We have informed the websites and third-parties about potentially unintentional data collection from their side. Contributions. We summarize our contributions as follows: * We present the first study that systematically measures the prevalence of websites and third-party scripts that use wallet information to track users online. We found evidence that 1,325 websites out of the top 100K websites probe their users for wallets. * We conduct the first large-scale measurement to assess the leakage of wallet addresses on various DApps and wallets. We find that 211 out of 616 DApps and 13 out of 100 wallets leak the user's wallet address to third-parties. * We measure the efficacy of 5 popular blocklists and observe that when all combined 44% of the third-parties would not be blocked. § BACKGROUND We provide background on Ethereum, decentralized applications, wallets, and privacy concerns that might arise when combining all these technologies together. §.§ Ethereum Ethereum is a blockchain or distributed ledger where transactions are grouped into batches of blocks and where each block points to its previous block via a cryptographic hash. Blockchains are typically maintained by a distributed peer-to-peer network, which is responsible for broadcasting transactions, appending new blocks, providing access to stored data, and executing smart contracts. Smart contracts are programs that are deployed and executed across a blockchain. As of January 2023, Ethereum has a market capitalization of over 180 billion USD <cit.>, making it the most popular blockchain technology that offers Turing-complete smart contract capabilities. Ethereum peers (i.e., nodes) may expose a JSON-RPC interface <cit.>, which defines an API that users or applications can use to interact with the blockchain (e.g., sending transactions or querying the state of a smart contract). Similar to other blockchains, Ethereum has its own native cryptocurrency (i.e., Ether), that enables users to transfer value across accounts and to pay for transactions. However, unlike Bitcoin for example, Ethereum follows an account-based model. The idea is similar to traditional bank accounts, where users own an account number and other users may transfer currency to this account number. In Ethereum, users do not own an account number, instead they own an account address, which is a unique 160-bit long hexadecimal string. However, similar to a bank account number, addresses act as a unique identifier that can be used to link transactions back to users and which should therefore be shared only with trusted parties. §.§ Decentralized Applications Decentralized Applications, also known as DApps, are applications that are accessible via the web, but where either all or some of the parts are hosted on decentralized platforms. However, for ease-of-use, availability requirements, and compatibility with existing technologies (e.g., DNS, HTTP client-server model, etc.), in most cases the user interface (UI) of DApps is hosted on a centralized web hosting service such as AWS. Only parts of the business logic are decentralized via the use of smart contracts. There are a number of different use cases for DApps, ranging from gambling platforms and online games (e.g., CryptoKitties), to decentralized marketplaces and exchanges (e.g., Uniswap). DappRadar currently lists over 3,000 DApps for the Ethereum blockchain alone <cit.>. However, to be able to interact with DApps, users are required to use a wallet, which acts as a bridge between the DApp and the user's identity on the blockchain. §.§ Wallets Users typically manage their accounts and cryptocurrency via a wallet. A popular choice are wallets in the form of a browser extension. A browser extension is a software module that users can install in their browser to enhance their browsing experience. Browser extensions have the capability to modify the Document Object Model (DOM) of websites and enjoy access to privileged browser APIs such as browsing history. MetaMask <cit.> currently is the most popular wallet extension for Ethereum with over 10 Million downloads on Google Chrome's web store <cit.>. Wallet extensions such as MetaMask inject a Web3 object into the DOM of any website that the user visits, regardless of whether the website is a DApp or not. Specifically, MetaMask adds a new object called to the existing object, which exposes the Ethereum Provider API <cit.>. The API enables DApps to interact via JavaScript with the Ethereum blockchain as well as the user's wallet. For example, DApps can call unprivileged properties such as , which will return if MetaMask is installed, but also privileged properties such as , which will return the user's wallet address to the DApp. Wallets are required to ask the user for prior permission and the user needs to grant it before a DApp is able to access privileged properties such as the user's wallet address.  <ref> depicts the conceptual flow of a user interacting with a DApp. A user starts by visiting the DApp's website. DApps usually expose a visual UI button on their website, which users must click if they wish to “connect” their wallet to the DApp (i.e., grant DApp access to their wallet). The DApp will then request permission to the wallet via that wallet's injected Ethereum Provider API. The wallet will display a popup to the user asking if it wants to grant permission to the DApp. In case the user grants the access, the wallet returns the requested information back to the DApp. Note that while a subset of the Ethereum Provider API is handled directly by the wallet extension (e.g., signing of transactions), another subset (e.g., retrieving latest block number) is simply forwarded to an Ethereum node (e.g., Infura <cit.>) via JSON-RPC. Also note that a DApp is not required to rely on a wallet extension to interact with the blockchain. A DApp can simply talk directly to a blockchain node. In fact, many DApps limit their interaction with the wallet extension to the bare minimum of only requesting the user's wallet address and the signing of transactions. §.§ Privacy Concerns Tracking is omnipresent on the web. Users are constantly being tracked across websites for purposes of analytics or targeted advertising, either via explicit (e.g., cookies) or implicit (e.g., browser fingerprinting) information. In the past, third-party cookies have been a popular way to track users across the web <cit.>, but most modern browsers nowadays block third-party cookies by default <cit.>. A popular alternative is browser fingerprinting <cit.>. The idea is to uniquely identify users based on differences of their browser's configuration (e.g., fonts, screen resolution, plugins, etc.). A fingerprint is generated by combining properties that are exposed to a website via JavaScript. As opposed to cookies, which are stateful, browser fingerprinting is stateless and thus difficult to mitigate without breaking usability (i.e., disabling JavaScript) <cit.>. Since DApps are developed using traditional web technologies, many DApps also include several third-party tracking scripts. DApps cannot sign transactions without the consent of the user. However, once a user has connected its wallet to a DApp, all third-party scripts embedded within the DApp can access the injected Web3 object via JavaScript. This grants third-party scripts access to sensitive information such as the user's account address or balance, without requiring prior consent of the user. Additionally, without being connected to a wallet, third-party scripts can check for the existence of a Web3 object in the DOM and thus infer that a user owns cryptocurrency and possibly which cryptocurrency and which wallet. This information can be leveraged to augment existing browser fingerprints as it adds additional bits of entropy <cit.>. While blockchains do provide some level of anonymity (i.e., pseudonymity), they do not provide full anonymity. All transactions from and to a given account can easily be linked to a user's account address, but not necessarily to its real-world identity. Yet, third-party scripts pose a threat to a user's anonymity since they also have access to a user's IP address and thus can potentially link multiple wallet addresses to their respective IP address <cit.>. In fact, Infura's recent privacy policy update has raised several concerns among the community as it states that wallet addresses and IP addresses will be collected <cit.>. § METHODOLOGY Next, we describe our approach for detecting Web3-based browser fingerprinting and identifying wallet address leakage across DApps and wallet extensions. A high-level overview of our measurement framework is depicted in  <ref>. §.§ Web3-Based Browser Fingerprinting Browser fingerprinting is a prevalent online tracking technique <cit.>. Our goal is to find evidence of whether websites or third-party scripts are leveraging any of the JavaScript properties that wallet extensions expose, to track users on the web. For that purpose, we use DuckDuckGo's Tracker Radar Collector (TRC) <cit.> to crawl popular websites and measure their behavior. TRC is a crawler that is designed for large-scale web measurements. It is modular and leverages multi-threading to speed up crawling. It uses Puppeteer <cit.> under the hood, which is a library that allows developers to control Chromium-based browsers for automation and testing purposes via the Chrome DevTools Protocol <cit.>. This gives TRC the capability to intercept network requests, read cookies, and instrument JavaScript calls. §.§.§ Detecting Wallet API Calls In contrast to OpenWPM <cit.>, another popular crawler which uses inline instrumentation by overriding JavaScript functions and objects with getters, TRC uses the Chrome DevTools Protocol to set breakpoints in the JavaScript engine. These breakpoints cannot be detected by websites and are only triggered when a certain function is called or property is accessed. Whenever the debugger hits any of the configured breakpoints, TRC will collect the JavaScript stack trace (e.g., filename, line number, etc.) and other metadata about the property access or function invocation and store it in a JSON file. We set breakpoints for five popular wallets (see  <ref>). We started by first only hooking the object, and added the other wallet API hooks after manually checking reported scripts during initial test runs. Moreover, after our final crawl we performed a manual inspection of all several scripts and were not able to find any other wallet APIs, which gives us confidence that the four breakpoints in <ref> are sufficient. However, breakpoints only get triggered if the object actually exists in the DOM. For example, to detect whether websites are trying to identify if MetaMask is installed, we set a breakpoint to be triggered whenever a script accesses the object. Thus, in the case of MetaMask, the object has to be injected into the DOM for it to be detected by our breakpoint. We therefore simulate each wallet listed in  <ref> by injecting (prior to any script execution) wallet specific properties into the DOM. For instance, to simulate MetaMask we inject the property into the DOM and set it to as defined in MetaMask's documentation <cit.>. This allows us to hook future accesses by the website, thus catching any access to the object beyond the simulated property. In other words, by hooking via the property , we will also be able to detect access to, for instance, . §.§.§ Identifying Fingerprinting Behavior The idea behind browser fingerprinting is to collect a large amount of diverse but stable information about a user's browser configuration, such that when combined together, enough entropy is provided to generate a unique fingerprint that identifies the same user across different sessions and websites. We leverage a similar approach as proposed in <cit.> to detect fingerprinting behavior in Android applications, and adapt it for JavaScript. TRC already provides a curated list of JavaScript properties and functions, that websites are known to leverage, to generate browser fingerprints <cit.>. We group each property and function call into one of 22 self-defined categories (see Appendix <ref>). A script is marked as a fingerprinting script if it calls JavaScript properties and functions belonging to at least 10 different categories, where at least one of the categories must belong to a list of 8 explicit browser fingerprinting categories. We tried values of 5, 10, 15, and 20 during earlier experiments with a small set of identified fingerprinting scripts and achieved the best accuracy when using 10 as threshold. Explicit browser fingerprinting categories include JavaScript properties and functions that are heavily used for fingerprinting purposes (e.g., CanvasRenderingContext2D, WebGLRenderingContext, AudioBuffer). §.§ Wallet Address Leakage Nothing prohibits DApps or wallet extensions from sharing a user's wallet address with third-parties. This sharing can either happen with or without the knowledge of the DApp or wallet extension. Our goal is to measure whether, how, and with whom DApps and wallet extensions share wallet addresses. To that end, we developed an automator for MetaMask (see  <ref>) that not only automatically installs and sets up MetaMask when visiting a DApp, but also automatically tries to connect MetaMask to the Dapp. Once connected to a DApp or a wallet extension is installed, our request interceptor will intercept any outgoing traffic as well as cookies and search for wallet address leaks. §.§.§ Connecting MetaMask to DApps For a DApp to be able to leak a user's wallet address, the user needs to have set up a wallet and connected its wallet to the DApp. As we do not want to repeat this step manually for thousands of DApps, we develop a component called MetaMask automator. First the automator sets up MetaMask. This is done even before visiting the website of the DApp. Our automator starts by launching a fresh instance of a browser and installing MetaMask's wallet extension. Afterwards, it launches MetaMask's UI in a new browser page and automatically clicks on the button to import an existing wallet. Browser extensions, including MetaMask, contain UIs that are essentially HTML pages with JavaScript code. Our automator leverages Puppeteer to extract all HTML elements (e.g., buttons, input fields, etc.) from MetaMask's UI. Puppeteer also provides functions that allows our automator to interact with the HTML elements such as clicking on buttons or typing in text into input fields. Once the “import wallet UI” has loaded, our automator will read a fixed passphrase and password from a file and automatically type in the passphrase and password into MetaMask's UI to import the wallet and finish setting up MetaMask with a wallet address that we control. After setting up the wallet, our automator visits the DApp's website. Once loaded, it searches for a “connect” button by scanning the HTML of the DApp's website for elements that contain keywords such as “Connect Wallet”, “Sign In”, “Account”, etc. We leveraged the 78 DApps by Winter et al. <cit.> to extract a list of common keywords for connect buttons (see Appendix <ref>). Afterwards, the automator tries to perform a click on every element that it found. It detects the connect button if it finds an element where the click succeeds. Once the automator finds the connect button, it searches for a MetaMask button. Often DApps allow users to connect via different wallets and therefore they let users select which wallet they want to use. The automator finds the MetaMask button by scanning the HTML for elements containing keywords such as “MetaMask” or “Browser Wallet” (see Appendix <ref>). Some DApps require users to click on a checkbox to agree to the terms and conditions before being able to connect. Our automator handles this case by searching the HTML for checkboxes and selecting them before clicking on any button. After successfully clicking the MetaMask button, a popup window will show up asking the user for permission to connect. Our automator intercepts this popup window and automatically clicks on the “confirm” button to finalize the connection request and give permission to the DApp to access our wallet's information. However, in some cases our automator might not be able to find the MetaMask button via text search because either the DApp uses an image or the text is not detectable. Thus, whenever our automator does not find any MetaMask button, it infers the dimensions of the browser's window and tries to perform hard-coded blind clicks on various offsets starting off from the middle of the window (e.g., 100 pixels to the bottom right, 50 pixels to the top left, etc.). §.§.§ Intercepting Outgoing Traffic and Cookies There are multiple ways in which DApps or wallet extensions can exfiltrate wallet addresses. Previous works have only focused on intercepting HTTP GET requests <cit.>. However, in our work we also intercept HTTP POST requests since DApps and wallet extensions may also leak information via the post body. Moreover, we also intercept WebSocket payloads. WebSockets became a popular alternative to HTTP polling due to their high efficiency (e.g., low latency and fast transmission). They establish a long-lived connection between the DApp or wallet extension and the server. While WebSockets allow for messages to be sent in a bi-directional manner, we are only interested in intercepting outgoing messages (i.e., requests going from the DApp or wallet extension to the server). To that end, we leverage the capabilities of the Chrome DevTools Protocol to intercept network requests to capture any HTTP GET and POST requests as well as outgoing WebSocket messages. Finally, cookies can also be used to exfiltrate wallet addresses. These can either be set by the server or by the client via JavaScript. We therefore capture cookies that are set via the response headers of HTTP requests and also leverage the capability of the Chrome DevTools Protocol to dump any cookies that were set via JavaScript. §.§.§ Identifying Wallet Address Leaks We identify wallet address leaks in websites and browser extensions by checking if any of the intercepted traffic (i.e., cookies, HTTP, and WebSockets) contains the wallet address in plain text. More specifically, for cookies, we check whether the value or name of the cookie contains the wallet address. For HTTP GET requests, we check whether the URL of the request contains the wallet address within the GET parameters. For HTTP POST requests, we check whether the post body contains the wallet address. Finally, for WebSockets, we check whether the payload contains the wallet address. However, checking for the wallet address in plain text is not sufficient. Prior studies <cit.> have shown that many third-parties often obfuscate their leaks by encoding or hashing them. Identifying obfuscated leaks is a challenging task, which often boils down to a brute-force search. We employ Senol et al.'s <cit.> technique, borrowed from Englehardt et al.'s <cit.> method, to identify email addresses in obfuscated strings. The method consists of searching for a variety of encodings and hashes within strings, by precomputing a set of strings, which contains all possible encodings (e.g., Base64, URL encoding, LZstring, etc.) and hashes (e.g., MD5, SHA256, MurmurHash3, etc.) of the wallet address. Afterwards, the contents of cookies, HTTP requests, and WebSocket payloads are split into multiple strings by potential separator characters (e.g., `=', `&', etc.) and compared with the strings contained in the precomputed set. This process is repeated until a level of three layers of encodings and decodings is reached. § MEASUREMENTS We describe our experimental setup and present the results of our large-scale measurement to detect web3-based user tracking and wallet address leakage[Our framework and data are publicly available at: <https://github.com/christoftorres/Web3-Privacy>.]. §.§ Experimental Setup We ran all our experiments on a desktop machine with 10 cores and 32GB of RAM. Moreover, we used Chromium version 108.0.5351.0 as our browser and our automator was build based on MetaMask version 10.22.2 for Google Chrome. Browser Fingerprinting. We measured browser fingerprinting using the top 1 Million Tranco <cit.> websites as of November 8th, 2022.[Available at: <https://tranco-list.eu/list/6JXYX/1000000>.] However, Tranco only provides domains and not URLs. Therefore, we tried matching Tranco domains to URLs using Google's Chrome User Experience (CrUX) Report <cit.> of November 2022. Whenever a domain did not match any URL contained in CrUX report, we tried inserting the prefixes and in front of the domains (prioritizing the prefix ) and checked whether these were accessible (i.e., got an HTTP response). We skipped domains that were neither accessible nor contained in the CrUX report. We started with the top websites (i.e., highest rank to lowest rank) and repeated this process until we had a list of the top 100K accessible websites. For each website, we limited the maximum crawl duration to 60 seconds and only visited the landing page. Wallet Address Leakage. We measured wallet address leakage using three different datasets. The first dataset consists of 66 DeFi websites from Winter et al.'s study <cit.>. The second dataset consists of 1,998 DApps that we crawled from DappRadar.com's top Ethereum Dapps <cit.>. <ref> provides an overview of the number of DApps per category. Note that not all URLs listed on DAppRadar.com are valid. For instance, many URLs in the category collectibles are simply pointing to a collection on <opensea.io>. Moreover, some of the URLs are not accessible. We filtered out these URLs and were left with 1,572 DApps with valid URLs (see <ref>), which we then crawled during our experiment. Finally, the third dataset consists of 100 popular wallet extensions that we downloaded from Google's Chrome Web Store <cit.>. We installed and set up each wallet extension manually using separate browser profiles for reproducibility. We stored the password and address of each wallet extension in a separate file such that we can afterwards search for the intercepted requests for wallet address leakage. For each DApp website that we crawled, we limited the maximum crawl duration to 30 seconds and only visited the landing page. To measure wallet address leakage on wallet extensions, we wrote a script to randomly click on 10 clickable HTML elements. The interaction with the wallet extension either stops after 10 elements have been clicked or if 60 seconds have passed. §.§ Web3-Based Browser Fingerprinting Wallet API Calls. TRC was able to crawl 96,905 out of 100K websites successfully (i.e., 96.91%). We found 1,114 unique scripts on 1,325 websites which made in total 1,517 JavaScript calls to at least one wallet APIs listed in  <ref>.  <ref> lists the top 10 most ranked websites which we found to call at least one wallet API. This list includes websites with millions of daily users such as TikTok and the New York Times. Interestingly, websites such as TikTok called all of our wallet APIs. After inspecting their code we found that these websites detect whether objects were added to the DOM. We checked whether this only occurs via our wallet simulator or if it also happens when visiting TikTok with MetaMask installed. Our check revealed the same results. This is because MetaMask and any other wallet will, similar to our wallet simulator, inject a new Web3 object into the DOM. This will be detectable by those websites and used for either analytical or tracking purposes. We therefore differentiate between explicit calls and implicit calls, where explicit means that a script includes an explicit call to a wallet API in their code and where implicit means that a script implicitly calls a wallet API when searching for new objects that were added into the DOM. In our experiments, we found that browser fingerprinting scripts (see <ref>) often enumerate the entirety of the window object using, for example, to create a unique fingerprint and thereby implicitly calls a wallet API as it is often part of the window object. We found 249 scripts performing explicit calls (22%) and 866 scripts performing implicit calls (78%).  <ref> lists all combinations that we observed of explicit wallet API calls. We observed in total 11 combinations, where a simple call to was the most popular call with 210 scripts calling this wallet API. Browser Fingerprinting Prevalence. Following our method defined in Section <ref> to identify browser fingerprinting, we find that 878 scripts (79%) belonging to 1,099 websites (83%) engage in browser fingerprinting and leverage wallet information to enhance the fingerprints they generate. The maximum number of fingerprinting categories collected by a single script was 19 out of 22. Both mean and average number of fingerprinting categories collected by browser fingerprinting scripts is around 12. Also, 71 (8%) of the scripts performing browser fingerprinting, collected wallet information explicitly whereas 808 (92%) of the scripts collected wallet information implicitly.  <ref> and  <ref> list each a small snippet from two third-party scripts that were detected by our framework. Both snippets check for the existence of wallet APIs.  <ref> tries to check whether Ethereum, Binance Chain, or Solana wallet extensions are installed and sends this information back to the third-party server via an HTTP POST request. Categories. We compared wallet API calls across website categories.  <ref> lists the top 10 categories in terms of number of websites that access wallet APIs. We used SafeDNS's website categorization service <cit.> to assign a category to each website. As shown in  <ref>, Pornography & Sexuality is where we detected the most number of websites (i.e., 200) accessing wallet information, where the most popular website was <xhamster.com> (ranked 160 in Tranco). Moreover, 69% of the wallet API calls were performed by a third-party script, where <adsco.re> is the most widespread third-party with wallet API calls on 45 different websites. Websites with most third-party calls are in the category News & Media (73%), whereas websites with least third-party calls are in the category Games (29%). Third-Parties. We found 680 websites (i.e., 51%) that include a third-party which calls a wallet API. The wallet API calls originate from 324 third-party scripts which belong to 118 unique third-party domains.  <ref> lists the top 10 third-parties for scripts that perform explicit (upper half) and implicit (lower half) wallet API calls. For explicit calls, we find that the third-party domain <wpadmngr.com> is the most widespread (embedded in 55 websites). For implicit calls, we find that the third-party domain <adsco.re> is the most widespread (embedded in 111 websites). URL and Code Similarity. When analyzing the URLs of the 324 third-party scripts, we noticed that a large number were similar. Several third-party URLs contain the path . We found that these third-parties most likely deploy Cloudflare's Anti-DDoS protection <cit.>, which consists of some JavaScript code that implicitly accesses wallet API information. We found 127 (i.e., 39%) such Cloudflare third-party scripts. We also clustered the remaining 197 third-party scripts based on their code by grouping scripts together which share the exact same JavaScript code. We found 2 clusters, one including the two third-parties <jsdelivr.net> and <unpkg.com> and one including the three third-parties <6347032d45.com>, <wpadmngr.com>, and <ba0182aa75.com>. The former third-parties are content delivery networks hosting the web3.js library, which is used by several DApps. The other three third-parties are interesting as we do not know who is running them, but we can see from  <ref>, that <wpadmngr.com> and <ba0182aa75.com> are the two most widely deployed third-party scripts calling wallet APIs explicitly. Moreover, as they share the same code, we can infer that they belong to the same organization and that they are together deployed on 94 different websites. Blocklists. Given that half of the calls to wallet APIs originate from third-parties, we checked whether blocklists could be a reliable countermeasure. We downloaded the latest blocklists of Disconnect <cit.>, DuckDuckGo <cit.>, EasyList <cit.>, EasyPrivacy <cit.>, and Whotracks.me <cit.>, and counted how many of the detected third-parties are blocked by the individual blocklists. We manually checked all third-parties and left out 10 of them as they are related to benign use-cases such as helper libraries (e.g., web3.js <cit.>).  <ref> depicts an overview on the number of blocked third-parties. We observe that Whotracks.me provides the best protection by blocking 46 third-parties (43%). The weakest protection is given by Disconnect with only 13 third-parties blocked (12%). Moreover, we also checked whether installing all blocklists at the same time (i.e., combining blocklists) would improve protection. As seen in  <ref>, the combination of all five blocklists results in blocking 60 third-parties (56%), hence an improvement of 12% as compared to only using Whotracks.me's blocklist. §.§ Wallet Address Leakage We analyze to what extent DApps and wallet extensions leak the user's wallet address to third-parties. §.§.§ DApps Winter et al. <cit.>. We compare the performance of our framework using Winter et al.'s <cit.> DeFi dataset. The dataset consists of 78 DeFi websites, however, 6 websites were down at the time of writing, 2 websites did not support MetaMask, and 4 were duplicates. After filtering, we were left with 66 websites to crawl. While Winter et al. connect manually to each website via MetaMask, we automatically connect to all of them using our MetaMask automator.  <ref> shows a comparison between the leaks measured by Winter et al. and our framework. Winter et al. found that 13 out of the 66 websites (20%), leak the user's wallet address to a third-party, whereas our results show that actually 39 out of the 66 websites (59%) leak the user's wallet address to a third-party. Overall, Winter et al. found 25 leaks whereas we found 2,164 leaks for the same websites. 98% (i.e., 2,131 leaks) are performed either via POST requests, WebSockets, or cookies. 61% of the leaks (i.e., 1,324 leaks) occur via POST requests. This emphasizes that solely analyzing GET requests, as Winter et al. did, is not sufficient. While Winter et al. found that wallet addresses are being leaked to 14 third-parties, our results show that the actual number is much higher, namely 64 third-parties.  <ref> highlights leaks that our framework and Winter et al. have in common (number between parentheses). For example, for <bifi.finance>, we detected 3 leaks which correspond to the same leaks as detected by Winter et al. However, we observe that <1inch.io>, <impermax.finance>, <jelly.market>, and <yearn.finance> did not leak the user's wallet address in our crawls anymore. On closer inspection, we find that <jelly.market> was down and we therefore were not able to collect any data and that the remaining three websites moved towards using their own API to retrieve blockchain data. Interestingly, for <dodoex.io> and <sablier.finance>, the leaks moved from GET requests to POST requests. This can be due to a change in the API of the third-parties. Moreover, while <alchemix.fi>, <cream.finance>, <debank.com>, <dmm.exchange>, and <idle.finance> did not leak the user's wallet address to any third-parties via GET requests during Winter et al.'s study, our results demonstrate the opposite. Since Winter et al.'s study is already more than a year ago, we assume that those third-party leaks were added after the study was conducted. Finally, we also observed that <dmm.exchange> leaks the user's wallet address to <kyberswap.com> via 3 different cookies set by Mixpanel (see <ref> for an example of such a cookie). DappRadar.com <cit.>. Winter et al.'s dataset is useful for comparing performance, but it is insufficient to draw any general conclusions due to it being relatively small and only focusing on DeFi websites. Therefore, we crawled DappRadar.com to obtain a much larger and diverse dataset. We ended up getting 1,572 DApp websites across 9 categories. Our automator was able to automatically connect to 616 (39%) of them. The automator had less issues in connecting to DeFi DApps, with a success rate of 61%. On the other hand, our automator found it hard to connect to High Risk DApps, with a success rate of only 20%. There are several reasons why it was not able to connect to all DApps. Most websites are simply down or our automator is not able to detect the connect button by scanning the HTML. Some websites do not support MetaMask, or require users to either agree on the terms or register and login via an email address and a password before being able to interact with the DApp. Section <ref> provides a clear breakdown regarding why our automator was not able to connect to certain DApps.  <ref> summarizes our detected leaks on the <DappRadar.com> dataset. Our framework identified 211 unique DApp websites (35% of the connected DApps) which leak the user's wallet address across 137 unique third-parties. As shown in  <ref>, Gambling DApps leak the least (6%), whereas Exchanges leak the most (59%). Our data also shows that 1,400 DApps (89%) embed at least one third-party. On average DApps embed 7 different third-parties. The maximum we observed was 61 third-parties embedded on a single DApp's website.  <ref> lists the top 20 third-parties where the user's wallet address is leaked to. As we can see, most third-parties are JSON-RPC providers (75%) and the rest are tracking & analytics platforms (25%). DApps need to connect to a blockchain node to retrieve blockchain related information. This connection is often performed via JSON-RPC providers. While leaks to JSON-RPC providers are unavoidable, they still may pose a threat to user's privacy as they may collect additional information such as what DApps the user visited or its IP address. Often users do not know to which JSON-RPC provider the DApp is connected to. Leaks to tracking & analytics platforms are unnecessary and a clear privacy violation. These platforms should not have access to sensible user information such as wallet addresses. For example, <ref> shows an HTTP GET request from <degens.farm> that leaks the user's wallet address to <google-analytics.com>. We studied the privacy policies of the top 20 third-parties and observe that 95% state that they collect the user's IP address. Pocket Network is the only third-party in  <ref> that does not collect the IP address of its users. We also observe that Infura is the most widespread third-party, with 42 DApps leaking the user's wallet address to Infura. For the DappRadar.com dataset, none of the DApps shared the user's wallet address via cookies. However, similar to Winter et al.'s dataset, most DApps share the user's wallet address via HTTP POST requests, then WebSockets, and finally HTTP GET requests. §.§.§ Wallet Extensions We analyzed whether any of the 100 wallet extensions contained in our dataset include third-parties and with whom they share the user's wallet address and potentially even the password or browsing history. Fortunately, none of the analyzed browser extensions seem to leak the user's password. At least we were not able to identify the password in any of the requests that we analyzed (including requests to the first-parties themselves). We analyzed the manifest file of each wallet extension and checked whether they can inject content scripts on any website and if they request access to sensible permissions. 89 of the 100 wallet extensions can inject content scripts on any website (i.e., the manifest includes one of the following patterns: 'http://*/*', https://*/*',<all_urls>', *://*/*'). Hence, these 89 wallet could potentially read the URL of the current page and send it to a backend. We also found that 66 wallet extensions request permission for either accessing “history”, “tabs”, or “activeTab”. We visited three different websites (<nytimes.com>, <etherscan.io>, and <uniswap.org>), using each extension and checked whether there are requests that include any of the three websites. We were not able to detect any extension leaking any of the visited websites. However, we did find that wallet extensions do leak the user's wallet address to third-parties.  <ref> lists the wallet extension that leak the user's wallet address. We found 13 out of 100 analyzed extensions which leak the user's wallet address to at least one of 24 third-parties. In total we found 139 third-parties across all browser extensions. While most wallet extensions only leak the wallet address to a single third-party, Coinbase's wallet extension leaks the user's wallet address to 10 different third-parties. Surprisingly, none of the wallet extensions' third-parties seem to overlap. However, we do observe that <sentry.io> and <infura.io> are present in both of our datasets, DApps and wallet extensions. Although, Infura is a benign platform, since it is a JSON-RPC provider and therefore required for the wallet extension work, the user is not made aware of this connection and the fact that Infura can link requests across websites and infer for example that a user X uses wallet Y and visits DApps A and B regularly. Sentry on the other hand is clearly not benign as it is a tracking & analytics platform, hence sensitive information such as the user's wallet address should not be leaked to such platforms. § DISCUSSION We discuss the limitations of our methodology and elaborate potential countermeasures, including their pitfalls. §.§ Limitations Our methodology for detecting wallet API calls is based on TRC which comes built in with an anti-bot detection. However, anti-bot detection solutions are not perfect and thus websites can still detect whether a bot is crawling them and thus behave differently or block access to the website. Moreover, our methodology leverages a wallet simulator that we build to inject fake JavaScript objects into the DOM such that we can simulate wallets without requiring to install them and setting them up. However, our simulator does not simulate a full-fledged wallet. It is limited to the simulated JavaScript properties listed in  <ref> in Section <ref>. Thus, third-party scripts could detect our wallet simulator by checking for inconsistencies such as missing JavaScript properties in the different wallet APIs. Although, the likelihood that third-party scripts currently do this is rather low. We did not experience such checks when analyzing the code of third-party scripts manually. However, it could be that in the future third-party scripts will adapt and try to probe for multiple properties of a wallet before making any decisions. Our MetaMask automator was only able to automatically connect to 39% of the analyzed DApps. Appendix <ref>, provides a detailed breakdown on connection failures that occurred over the DeFi subset of the DappRadar.com dataset. In 24% of the cases, the URLs did not point to a valid DApp and in 14% of the cases the DApp's website was simply down. 3% of the DApps did not support MetaMask. For 18% of the DApps our automator could simply not detect a connect button or MetaMask button within the HTML despite the DApp's buttons containing labels that match the keywords in Appendix <ref>. However, there are also Dapps that contain buttons that do not match any of our keywords (8%) or which represent their buttons as images (8%). 15% require users to give their consent by ticking a checkbox before being able to interact with them. Finally, 7% require users to create an account and login via email and password. Moreover, during our crawl we only visited the landing page of a website or DApp and might have missed any third-party scripts that perform tracking or leak the user's wallet address. This and the fact that we were not able to connect to all the DApps and limited ourselves to a handful of wallets (extensions are well as simulated wallet APIs), highlights that our results should only be considered as a lower bound. §.§ Countermeasures Privacy-conscious users want to prevent their wallet address to be leaked to third-parties, but also minimize their footprint (i.e., the fact that they have a wallet installed on their browser) for online tracking. Initially, wallets would expose the user's unique wallet address to any website without asking the user for prior permission. However, this changed with the release of EIP-2255 <cit.> which requires wallets to ask user's for permission prior to returning any sensitive information to DApps. However, the permission system is still flawed. Any third-party that is embedded inside the DApp also has access to the sensitive information once, although the user has granted only permission to the DApp and not the third-parties. Winter et al. <cit.> proposed a countermeasure which does not prevent wallet address leakage per-se, but limits its usefulness in linking users across DApps as it generates individual wallet addresses for each DApp a user visits. This follows a similar idea that has been proposed in the past to prevent linking users across websites by using different yet consistent web identities across websites <cit.>. Specifically, DApps always interact with a fake proxy wallet address that is derived from the user's real wallet address. All requests that either go through MetaMask or via an JSON-RPC provider are then intercepted and the fake address is swapped with the user's real wallet address such that the DApp is able to perform actions on real data. However, this approach has several pitfalls. First, the fact that the user's real balance is returned allows DApps and other third-parties to map the fake address to the real address by scanning the blockchain for an address that has the exact same balance. This is trivial because the balance has a high resolution (e.g., 256-bit resolution in the case of Ethereum) and thus the likelihood that two users having the exact same balance is very low. Second, the interception of traffic as well as the swapping between fake and real requires complex management and is prone to errors. For example, transactions are usually not directly mined and most DApps rely on a transaction receipt which includes a transaction hash that allows them to continuously poll the blockchain for the transaction's confirmation status. Hence, the countermeasure also needs to fake transaction hashes otherwise DApps and third-parties might use this information to obtain the users real wallet address. But in fact this might break the usability of many DApps as they sometimes point to other websites such as Etherscan using the transaction hash. Third, the proposed countermeasure does not hide the existence of a wallet extension from third-parties. Trackers will still be able to detect whether or not a user has a wallet installed on its browser. As an alternative, users could rely on Ad blockers <cit.> to simply block requests from and to third-party tracking scripts. In Section <ref>, we measured the effectiveness of popular blocklists against the third-parties that we found to access wallet information. The best performing blocklist only managed to block 46 out of 118 third-parties (i.e., 39%). Moreover, even with all the blocklists combined, only 51% of the third-parties are blocked. Blocklists do not scale, they can simply be evaded by deploying the same script to a different domain that is not yet blacklisted. For instance, we found that the script which is hosted on <wpadmngr.com> (top 1 in our list of detected third-party scripts) is identical to the scripts hosted on <ba0182aa75.com> and <6347032d45.com>. Since the two last domains appear to be random, we assume that they might be used by <wpadmngr.com> to avoid blocklists. § RELATED WORK There are several ways to track users online, ranging from classical stateful methods such as third-party cookies <cit.> to novel stateless methods such as browser fingerprinting <cit.>. A number of studies have been conducted over the past years in order to measure the prevalence of third-party cookies and novel browser fingerprinting techniques <cit.>. Essentially, any JavaScript API that provides stable yet user-configuration specific information can be leveraged to generate together with other attributes a unique browser fingerprint. This information may range from simple properties such as screen resolution to more advanced techniques such as canvas fingerprinting <cit.>. For instance, Englehardt et al. <cit.> were the first to provide evidence that third-party trackers enhance their browser fingerprinting scripts with information provided by the WebRTC API, Audio API, and Battery Status API. Our work analyses whether trackers are leveraging wallet APIs to enhance their browser fingerprinting scripts to better track users online. Recently, Senol et al. <cit.> discovered that a large number of websites leak the user's email address and password to third-parties. In a similar vein, our work aims to shed light into the inner workings of DApps and wallets to uncover if they might leak a user's wallet address or password to third-parties. Privacy is not only difficult to achieve on the web, but it is also challenging to achieve when dealing with cryptocurrencies. Security and privacy concepts are often not well understood by cryptocurrency users. For instance, Krombholz et al. <cit.> surveyed over 900 users with respect to their knowledge on security and privacy of Bitcoin. None of the users made a backup of their wallet passphrase on a separate computer. 22% report that they already lost some of their cryptocurrency due to scams or loss of their passphrase. Also, 32% think that Bitcoin is anonymous, despite the fact that transactions can be traced. This is in line with the findings of Mai et al. <cit.> and Voskobojnikov et al. <cit.> where users do not understand the concept of public and private keys or believe that transactions are confidential and cannot be seen by third-parties. These works point out that users might have a misconception of wallets with respect to the privacy that they provide. As more and more online vendors accept cryptocurrencies as a payment method and an increasing number of decentralized applications begin to emerge, the question around linkability and user privacy becomes indispensable. A number of previous works have focused on analyzing the linkability of cryptocurrency transactions <cit.> including their deanonymization via network-layer attacks <cit.>. Goldfeder et al. <cit.> were the first to analyze the intersection between cryptocurrencies and online privacy. The authors find that online trackers are able to collect enough information to link cryptocurrency transactions to online purchases. Béres et al. <cit.> demonstrate how attackers can link different Ethereum addresses to the same user by analyzing meta information such as time of the day and gas price. Even mixers (i.e., services that shuffle transactions in order to break linkability) have been found to be broken <cit.>. Users often do not understand how to use mixers properly and use, for example, the same wallet address for depositing and retrieving cryptocurrency, thereby making mixing essentially useless. Li et al. <cit.> present a denial-of-service attack against blockchain providers, which are frequently used by DApps and wallets to retrieve blockchain information. Blockchain providers often do not impose a gas limit on certain operations and thus malicious users may exploit this fact to make blockchain providers engage in heavy computations. The work by Winter et al. <cit.> is the closest to our work. However, their goal is to analyze the security, privacy, and decentralization properties of popular DeFi front ends, while we aim to analyze the privacy implications of wallets. The authors analyzed 78 handpicked DeFi websites for wallet address leakage and found that 17% of the websites leak the user's wallet address. We found that 59% of the websites leak the user's wallet address. This is because our framework not only analyzes HTTP GET requests but also HTTP POST requests, WebSockets, and cookies. Moreover, while Winter et al. analyzed the websites manually, our work analyzes them automatically. This enables us to perform an automated large-scale study on DApps. Finally, Winter et al. did not analyze whether wallet extensions also leak the user's wallet address and whether websites make use of wallet information to fingerprint users. § CONCLUSION We present the first systematic study on Web3-based browser fingerprinting and wallet address exfiltration. We built a framework which is capable of detecting JavaScript calls on wallet APIs as well as intercept and search HTTP requests, WebSockets and cookies for leaked wallet addresses. Our framework integrates a wallet simulator which imitates different wallet extensions by injecting wallet-specific properties into the website's DOM, and developed an automator which automatically sets up MetaMask and connects it to DApps. Using our framework we analyzed the top 100K websites and found evidence of 1,325 websites checking the presence of wallet extensions installed within the user's browser. We analyzed 1,572 DApps and found that 211 of them leak the user's wallet address to third-parties. Moreover, we analyzed 100 popular wallets and found that 13 of them deliberately leak the user's wallet address to third-parties. We evaluated countermeasures such as Ad blockers and found that they are not completely effective in blocking all the third-party scripts and leaks detected by our framework. We conclude that wallets pose a serious threat to user's privacy and that new solutions need to be developed that allow users to interact with DApps in a secure and privacy-preserving way. § ACKNOWLEDGMENTS We would like to thank our anonymous reviewers and our shepherd for their valuable comments and feedback. This work was supported by the Zurich Information Security & Privacy Center (ZISC). § AVAILABILITY The code that was used to conduct this study as well as the data that was collected during this study is publicly available on GitHub at: <https://github.com/christoftorres/Web3-Privacy>. plain § APPENDIX tocsectionAppendices §.§ Browser Fingerprinting Categories  <ref> lists all the JavaScript APIs that our framework uses to detect browser fingerprinting, including the category that we assigned to each API. For example, means that any JavaScript API call that includes the string “userAgent” will be collected and assigned to the category browser, and the category browser is not considered as an explicit browser fingerprinting category. §.§ List of Keywords Used by Automator  <ref> lists all the keywords that our automator scans for within a website's HTML to find a “Connect” and “MetaMask” button. §.§ Breakdown of Connection Failures <ref> provides a breakdown over the reasons that resulted in our automator in failing to automatically connect to the DeFi related DApps from our DappRadar.com dataset.
http://arxiv.org/abs/2306.07533v1
20230613041611
Improved Measurements of Muonic Helium Ground-State Hyperfine Structure at a Near-Zero Magnetic Field
[ "P. Strasser", "S. Fukumura", "R. Iwai", "S. Kanda", "S. Kawamura", "M. Kitaguchi", "S. Nishimura", "S. Seo", "H. M. Shimizu", "K. Shimomura", "H. Tada", "H. A. Torii" ]
physics.atom-ph
[ "physics.atom-ph" ]
[][email protected] Muon Science Laboratory, Institute of Materials Structure Science (IMSS), High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Muon Science Section, Materials and Life Science Division, J-PARC Center, 2-4 Shirakata, Tokai-mura, Naka-gun, Ibaraki 319-1195, Japan Department of Materials Structure Science, School of High Energy Accelerator Science, The Graduate University of Advanced Studies (SOKENDAI), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan [][email protected] Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan Muon Science Laboratory, Institute of Materials Structure Science (IMSS), High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Muon Science Laboratory, Institute of Materials Structure Science (IMSS), High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Muon Science Section, Materials and Life Science Division, J-PARC Center, 2-4 Shirakata, Tokai-mura, Naka-gun, Ibaraki 319-1195, Japan Department of Materials Structure Science, School of High Energy Accelerator Science, The Graduate University of Advanced Studies (SOKENDAI), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan Kobayashi-Maskawa Institute, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan Muon Science Laboratory, Institute of Materials Structure Science (IMSS), High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Muon Science Section, Materials and Life Science Division, J-PARC Center, 2-4 Shirakata, Tokai-mura, Naka-gun, Ibaraki 319-1195, Japan Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan Muon Science Laboratory, Institute of Materials Structure Science (IMSS), High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Muon Science Section, Materials and Life Science Division, J-PARC Center, 2-4 Shirakata, Tokai-mura, Naka-gun, Ibaraki 319-1195, Japan Department of Materials Structure Science, School of High Energy Accelerator Science, The Graduate University of Advanced Studies (SOKENDAI), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan MuSEUM Collaboration Muonic helium atom hyperfine structure (HFS) measurements are a sensitive tool to test the three-body atomic system, bound-state quantum electrodynamics (QED) theory, and determine fundamental constants of the negative muon magnetic moment and mass. The world's most intense pulsed negative muon beam at J-PARC MUSE allows for improving previous measurements and testing further CPT invariance through the comparison of the magnetic moments and masses of positive and negative muons (second-generation leptons). We report new ground-state HFS measurements of muonic helium-4 atoms at a near-zero magnetic field, performed for the first time using CH_4 as an electron donor to efficiently form neutral muonic helium atoms. The result of our analysis gives Δν = 4464.979(20) MHz (4.5 ppm), which is more accurate than both previous measurements at weak field and high field. Muonium HFS was also measured under the same conditions to investigate the isotopic effect of the gas density dependence on the frequency shift in He with CH_4 admixture and compared with previous studies. No isotopic effect was observed within the current experimental accuracy when comparing muonic helium with muonium (isotopic mass ratio of 36). Improved Measurements of Muonic Helium Ground-State Hyperfine Structure at a Near-Zero Magnetic Field H. A. Torii June 12, 2023 ===================================================================================================== Muonic helium (μHe) is a hydrogen-like atom composed of a helium atom with one of its two electrons replaced by a negative muon (μ^-). The negative muon is so closely bound to the helium nucleus that it nearly completely screens one proton charge producing a “pseudo-nucleus” with a positive effective charge and a magnetic moment nearly equal to that of a negative muon (μ_μ^-). To that effect, it is very similar to muonium another hydrogen-like atom made of a bound state of a positive muon and an electron (μ^+e^-), and forms with it the longest isotopic chain with a mass ratio of 36. Muonic helium has been used to study kinetic isotope effects in chemical reaction rates and test fundamental theories of chemical kinetics <cit.>. It is also a simple three-body system that plays a crucial role in the final stage of the muon catalyzed dt fusion in understanding the sticking phenomenon <cit.>. More recently, following the successful spectroscopy measurements of the 2s–2p transition in muonic hydrogen <cit.>, which has become known as the proton radius puzzle, and muonic deuterium <cit.>, the Lamb shift was also measured in muonic helium-3 and helium-4 (^3,4μHe^+) to determine the charge radius <cit.>. The ground-state hyperfine structure (HFS) in a muonic helium atom, resulting from the interaction of the remaining electron and the negative muon magnetic moment, is almost equal to that of muonium but inverted because of the different signs of their respective muon magnetic moments. High-precision measurements of the muonium ground-state HFS are regarded as the most sensitive tool for testing quantum electrodynamics (QED) theory <cit.>, and determining fundamental constants of the positive muon magnetic moment μ_μ^+ and its mass m_μ^+. New precise measurements are now in progress at the Japan Proton Accelerator Research Complex (J-PARC) by the collaboration <cit.>. In muonic helium atom too, the HFS interval is sensitive to variations of basic physical constants, and the same microwave magnetic resonance technique as with muonium can be used to measure the muonic helium ground-state HFS transition frequency Δν and the negative muon magnetic moment μ_μ^- and mass m_μ^-. Previous measurements were performed in the 1980s at the Paul Scherrer Institute (PSI) and Los Alamos Meson Physics Facility (LAMPF) with experimental uncertainties dominated by statistical errors. In helium-4, Δν was measured to a level of 13 ppm (weak field) <cit.>, 6.5 ppm (high field) <cit.>, and 47 ppm for μ_μ^-/μ_p (high field) <cit.>, respectively. In helium-3, Δν was also measured to a level of 12 ppm (weak field) <cit.>. The world's most intense pulsed negative muon beam at the Muon Science Facility (MUSE) of J-PARC brings a unique opportunity to significantly improve those measurements to further probe the three-body atomic system, bound-state QED theory and provide a more precise test of CPT invariance by comparing the magnetic moments and masses of positive and negative muons (second-generation leptons) <cit.>. Although muonic helium is similar to muonium, the theoretical approach (reviewed in <cit.>) in determining Δν has been limited due to the three-body interaction, and higher-order QED effects estimated to be around 130 ppm are still not yet fully considered <cit.>. Old measurements still outweigh the latest theoretical values <cit.>. Muonic helium HFS is the only available experimental data for three-body muonic atoms. At present, μ_μ^+ and μ_μ^- provide a test of CPT invariance at a level of 3 ppm limited by the negative muon mass accuracy <cit.>. Whereas several new measurements of the positive muon mass are now in preparation by the collaboration <cit.>, and muonium 1s–2s spectroscopy experiment at J-PARC <cit.>, and MuMASS at PSI <cit.>. Furthermore, the ratio μ_μ^-/μ_p is also needed to determine the negative-muon magnetic moment anomaly a_μ^- and its g factor g_μ^- in the muon g – 2 experiment at Brookhaven National Laboratory and its successor Fermilab. Currently the more accurate positive value μ_μ^+/μ_p <cit.> is used to determine both a_μ^+ and a_μ^- to test the Standard Model's predictions and CPT theorem <cit.>. One key factor to improve these old measurements is to efficiently produce neutral muonic helium atoms, the prerequisite to measuring HFS. When a helium atom captures a muon, it loses both its electrons by Auger emission and forms a (^4Heμ^-)^+ ion in its ground 1s state. Subsequently, it cannot capture an electron from neighboring helium atoms because its electron binding energy is similar to that in hydrogen. CH_4 as an electron donor was preferred to Xe used in previous experiments <cit.> because of its reduced total charge (Z=10 compared to Z=54 for Xe, Z-law <cit.>), a similar ionization energy of 12.5 eV, and gives a residual μ^- polarization of ∼ 5% <cit.> similar to Xe <cit.>. This low residual polarization mainly results from the depolarization mechanism during the muon cascade process in helium due to Auger transition and collisional Stark mixing. Another practical reason to use CH_4 is due to the large muon capture process by Xe nuclei that abundantly produces many radioactive iodine isotopes. We report here improved ground-state HFS measurements of muonic helium-4 atoms at a near-zero magnetic field using CH_4 for the first time as an electron donor. Muonium HFS was also measured under the same conditions to investigate the isotopic effect on the gas density shift in He with CH_4 admixture and compared with previous studies. The experiment was performed at J-PARC MUSE D-line using the apparatus developed by the collaboration to determine with high precision muonium HFS at zero field and described in <cit.>. The schematic view of the experimental setup enclosed in a magnetic shield box made of permalloy is shown in Fig. <ref>. Only the modifications performed to allow measurements with negative muons in high-pressure helium gas are reported. Pulsed polarized μ^- (backward decay μ^-, polarization > 90%, double-pulsed structure 100-ns wide separated by 600 ns and repetitive at 25 Hz) are stopped into a microwave cavity placed inside an aluminum gas chamber containing pressurized helium gas with 2% CH_4 admixture as an electron donor. Within a few ns (^4Heμ^-)^+ ions are neutralized, a time short compared to the Rabi-oscillations induced by the applied microwave. The entrance beam window of the gas chamber was made of a copper-beryllium (CuBe) foil 10 cm in diameter. A small vacuum chamber with a 75-μm thick Kapton window was mounted on the entrance window to avoid deforming the CuBe foil while evacuating the gas chamber. The fiber hodoscope was removed (Fig. 1 in <cit.>). Three measurements were performed at an absolute He gas pressure of 3, 4, and 10.4 atm with a 50, 100, and 125-μm thick CuBe window, respectively. The muon beam momentum was tuned in each case to maximize the number of stopped μ^- in the microwave cavity with an optimum at 25, 27, and 30 MeV/c, respectively. The momentum spread of the beam was Δp/p = 10% (FWHM). The muon stopping rate (and distribution) in the cavity was estimated using a Monte Carlo simulation, with 50%, 45%, and 75%, respectively. The three measurements were performed separately under a user program in different beam cycles with different primary proton beam power (typical beam intensity 0.7–1.2 10^6 μ^-/s). The gas pressure was measured by a pressure transducer (Fluke RPM4 Reference Pressure Monitor) with an accuracy of 0.02%. The He/CH_4 gas mixture was performed by filling the first 2% of the nominal pressure with CH_4 gas followed by high-purity He gas with an accuracy of 0.2%. The relative He/CH_4 ratio between measurements and the presence of other contaminants was confirmed by quadrupole-mass spectrometry sampling the gas through a capillary tube before and after the measurement. The muon spin is flipped by applying a microwave magnetic field in the cavity. A larger cylindrical cavity (181-mm inner-dia., 304-mm long) developed to enable muonium HFS measurement at lower gas pressure without severe loss of the statistics was used <cit.>. The cavity resonates in TM220 mode with a tunable frequency range of 4462–4466 MHz and a quality factor of 11400–11700. The remaining microwave system is identical as in <cit.>. Electrons (e^-) from μ^- decay are emitted preferentially in the direction antiparallel to the μ^- spin. At the resonance, the microwave field induces the μ^- spin flip changing the angular distribution of the decay e^-, which is detected with a segmented scintillation detector placed downstream. Measurements are performed by scanning the microwave frequency and measuring the electron asymmetry (NON/NOFF) to determine the resonance frequency Δν. Since the cavity and the downstream beam stopper attached to the aluminum absorber (1-mm thick, not shown in Fig. <ref>) are all made of copper, μHe signals are well separated from muonic copper (μCu) background events due to different muon lifetimes. Most of the μHe signals remain by selecting delayed events while drastically reducing μCu events. Muonic helium HFS resonance curves measured chronologically at a He gas pressure of 4.0, 10.4, and 3.0 atm are shown in Fig. <ref> using delayed events from 1.6 μs after the second muon pulse. The data for these curves were obtained in 105, 63, and 76 hours, respectively, including changing frequencies. The data analysis was performed by determining the hit-cluster and taking coincidence between the two detector layers as described in <cit.>. Data with fluctuating microwave power feedback readings were ignored in the final analysis. The resonance curve centers were determined by fitting a theoretical resonance line shape using the “old muonium” method <cit.> (same at zero field) from 1.6 to 60 μs. The reduced chi-square in Fig. <ref> are χ^2/NDF = 7.6/14, 5.5/7, and 47.0/22, respectively. The poor χ^2 of Fig. <ref>c results from data taken near the resonance center with no Rabi-oscillation signals observed despite the microwave power feedback readings being normal, so the data was retained. During the last two measurements at 10.4 and 3.0 atm, a blind analysis method was introduced that adds an unknown offset value to the applied microwave frequency (randomly selected ±8 kHz, fixed for all measurements), resulting in a measured resonance curve with that offset value. When the analysis was completed, the blind was opened, and the obtained value was corrected with that offset revealing the true resonance frequency <cit.>. The measured values for Δν are shown in Fig. <ref> as a function of the gas density in atm at 0°C and corrected for non-ideal gas behavior. Previous results from <cit.> measured with He + Xe(1.5%) are also shown for comparison. The HFS frequency at zero pressure Δν(0) of a free μHe atom is obtained by fitting the data. It is known that hydrogen-like systems like muonium <cit.> and alkali atoms <cit.> show both a linear and quadratic pressure dependence on the gas buffer they are embedded in. However, with the current data only the linear pressure shift coefficient due to competing short-range and long-range interactions between the μHe atom and the buffer gas at a given pressure <cit.> can be obtained. By fitting our measured values with Δν(p) = Δν(0) + Ap, we get Δν(0) = 4464.980(20) MHz (4.5 ppm), and A = 13.1 ± 3.2 kHz/atm (0°C). The error indicated is mainly statistical. Systematic uncertainties are discussed later. The obtained muonic helium HFS frequency is consistent with previous measurements but with better accuracy than both at weak field <cit.> and high field <cit.>, and performed for the first time with CH_4 as an electron donor. It should be noted that in the first observation of the muonic helium atom HFS resonance <cit.> the pressure shift correction from muonium, measured under the same conditions, was used to determine Δν(0) from only one measurement at 19.4 atm. This was justified at the time since no isotopic effects were observed for muonium and H, D, and T in noble gases <cit.>. Consequently, the isotopic effect on the pressure shift in He with CH_4 admixture was also investigated by measuring muonium HFS under the same conditions at 10.4 atm using decay μ^+ (Fig. <ref>). The asymmetry is nearly 10 times larger for muonium, which is consistent considering 50% polarization for muonium as opposed to ∼ 5% for muonic helium. Combining this muonium measurement Δν_(10.4 atm) = 4463.4382(23) MHz with an earlier one at 4 atm (from a beamtime) and with Δν_(0) from <cit.>, we obtain for muonium in He + CH_4(2%) a linear pressure shift coefficient of 13.8(2) kHz/atm (0°C). This value is similar within the error bars to the value reported in <cit.> measured with He + Xe(1.5%). Table <ref> shows a comparison of the linear pressure shift coefficients for hydrogen-like atoms in He. The preliminary value for muonium in pure He <cit.> was obtained indirectly from a study using Kr/He mixture to reduce the pressure shift effect in muonium HFS measurements. The hydrogen pressure shift data are shown for pure He <cit.> and He+Xe(1.5%) where the fractional pressure shift is calculated using the measured values <cit.> (1.5% Xe reduces the linear pressure shift coefficient in He by nearly 8%). Unfortunately, no pressure shift data were ever been reported for light hydrogen-like atoms in CH_4, only for ^133Cs atoms <cit.> where the linear pressure shift is negative same as in Xe, while it is positive in He. Thus, He with a small admixture of CH_4 is expected to behave similarly as with Xe and reduce slightly the total pressure shift. Comparing muonic helium with muonium, no isotopic effect can be seen within the current experimental accuracy. Also, an admixture of 2% CH_4 and 1.5% Xe seems to have similar effects. More precise measurements with μHe atoms would be needed to confirm the tendency that heavier atoms have slightly larger pressure shifts, as was suggested for tritium and hydrogen in Ne and Ar (no data for T in He) <cit.>. The systematics uncertainties of the current experiment are shown in table <ref> for Δν(0). Other errors are common with <cit.>. The detector pileup is negligible due to negative muon intensities being ten times smaller than for surface (positive) muons. The error on the pressure depends essentially on the temperature uncertainty when converting to 0°C, and can be reduced to about ∼ 5 Hz with better temperature control by keeping fluctuation below 0.1°C. Following the approach described in <cit.>, the upper-limit effect of the quadratic terms B on Δν(0) (i.e., Δν(p)=Δν(0)+Ap+Bp^2), which results from the three-body interaction of a muonic helium atom with two gas buffer atoms <cit.>, was estimated using the most precise measurement of B for muonium in Kr <cit.>. This is justified by the fact that B decreases as the atomic number of the noble gas decreases and appears isotope independent <cit.>. We obtain with B as a fixed value (nonzero) an upper limit of δΔν(0) = 0.78 kHz. Additional high-pressure measurements would allow the determination of B. The effect on the CH_4 concentration is more difficult to ascertain because of the unknown value of its pressure shift. As an upper value, assuming a shift for CH_4 similar to the largest known value for H in Xe <cit.>, the present concentration accuracy of 0.2% corresponds to an error of ∼ 3 Hz/atm. This can be reduced by using the same mixture from a gas container for all measurements. After nearly 40 years, new precise measurements of the muonic helium HFS were performed using the high-intensity pulsed negative muon beam at J-PARC MUSE. The result obtained with an error of 4.5 ppm has better accuracy than both previous measurements at weak and high fields. These HFS measurements were also the first to be performed with CH_4 admixture as an electron donor to form neutral muonic helium atoms. Muonium HFS measured under the same conditions does not reveal any isotopic effect on the pressure shift with muonic helium atoms within the current experimental accuracy. Further measurements at zero field are planned to improve the determination of the pressure dependence. High-field measurements are now in preparation at the H-line after muonium HFS measurements, using ten times more muon beam intensity than at the D-line, and with decay electrons being more focused on the detector due to the high-magnetic field, aiming at a precision below 100 ppb after 100 days. Furthermore, a new experimental approach to recover the lost polarization is being investigated by repolarizing μHe atoms using a spin-exchange optical pumping (SEOP) technique <cit.>, which would drastically improve the measurement accuracy, and where a direct improvement by a factor of ten may be realized. The muon experiment at the Materials and Life Science Experimental Facility (MLF) of the J-PARC was performed under a user program (Proposal No. 2020B0333, 2021B0169, 2022A0159). The authors would like to acknowledge the help and expertise of the staff of J-PARC MLF MUSE and thank K. Shimizu, T. Tanaka, H. Yamauchi, H. Yasuda, and F. Yoshizu for their support in preparing and participating in the experiment. This work was supported by the Japanese JSPS KAKENHI Grant No. JP21H04481.
http://arxiv.org/abs/2306.02323v2
20230604103004
LoRa Backscatter Communications: Temporal, Spectral, and Error Performance Analysis
[ "Ganghui Lin", "Ahmed Elzanaty", "Mohamed-Slim Alouini" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
every axis/.append style= line width=1pt, legend style=font=, at=(0.97,0.85), compat=1.13 5G-NR5G New Radio 3GPP3rd Generation Partnership Project ACaddress coding ACFautocorrelation function ACRautocorrelation receiver ADCanalog-to-digital converter aic[AIC]Analog-to-Information Converter AIC[AIC]Akaike information criterion aric[ARIC]asymmetric restricted isometry constant arip[ARIP]asymmetric restricted isometry property ARQautomatic repeat request AUBasymptotic union bound awgn[AWGN]Additive White Gaussian Noise AWGNadditive white Gaussian noise APSK[PSK]asymmetric PSK waric[AWRICs]asymmetric weak restricted isometry constants warip[AWRIP]asymmetric weak restricted isometry property BCHBose, Chaudhuri, and Hocquenghem BCHC[BCHSC]BCH based source coding BEPbit error probability BFCblock fading channel BG[BG]Bernoulli-Gaussian BGGBernoulli-Generalized Gaussian BPAMbinary pulse amplitude modulation BPDNBasis Pursuit Denoising BPPMbinary pulse position modulation BPSKbinary phase shift keying BPZFbandpass zonal filter BU[BU]Bernoulli-uniform BERbit error rate BSbase station BCbackscatter communication CPCyclic Prefix cdf[CDF]cumulative distribution function CDFcumulative distribution function c.d.f.[CDF]cumulative distribution function CCDFcomplementary cumulative distribution function ccdf[CCDF]complementary CDF c.c.d.f.[CCDF]complementary cumulative distribution function CDcooperative diversity CDMACode Division Multiple Access ch.f.characteristic function CIRchannel impulse response cosamp[CoSaMP]compressive sampling matching pursuit CRcognitive radio cs[CS]compressed sensing cscapital[CS]Compressed sensing CS[CS]compressed sensing CSSchirp spread spectrum CSIchannel state information CCSDSconsultative committee for space data systems CCconvolutional coding Covid19[COVID-19]Coronavirus disease SSBCspectrum sharing backscatter communications CWcontinuous wave COTcommercial off-the-shelf DAAdetect and avoid DABdigital audio broadcasting DCTdiscrete cosine transform DDSdirect digital synthesis dft[DFT]discrete Fourier transform DRdistortion-rate DSdirect sequence DS-SSdirect-sequence spread-spectrum DTRdifferential transmitted-reference DVB-Hdigital video broadcasting – handheld DVB-Tdigital video broadcasting – terrestrial DLdownlink DSSSDirect Sequence Spread Spectrum DFT-s-OFDMDiscrete Fourier Transform-spread-Orthogonal Frequency Division Multiplexing DASdistributed antenna system DNADeoxyribonucleic Acid ECEuropean Commission EED[EED]exact eigenvalues distribution EIRPEquivalent Isotropically Radiated Power ELPequivalent low-pass eMBBenhanced mobile broadband EMFelectric and magnetic fields EUEuropean union EVTextreme value theorem ETSIEuropean Telecommunications Standards Institute FC[FC]fusion center FCCFederal Communications Commission FDfull-duplex FECforward error correction FERframe error rate FFTfast Fourier transform FHfrequency-hopping FH-SSfrequency-hopping spread-spectrum FSFrame synchronization FSsmall[FS]frame synchronization FDMAFrequency Division Multiple Access GAGaussian approximation GFGalois field GGGeneralized-Gaussian GIC[GIC]generalized information criterion GLRTgeneralized likelihood ratio test GPSGlobal Positioning System GMSKGaussian minimum shift keying GSMAGlobal System for Mobile communications Association HAPhigh altitude platform HDhalf-duplex IDRinformation distortion-rate IFFTinverse fast Fourier transform iht[IHT]iterative hard thresholding i.i.d.independent, identically distributed IoTInternet of Things IRimpulse radio lric[LRIC]lower restricted isometry constant lrict[LRICt]lower restricted isometry constant threshold ISIintersymbol interference ITUInternational Telecommunication Union ICNIRPInternational Commission on Non-Ionizing Radiation Protection IEEEInstitute of Electrical and Electronics Engineers ICESIEEE international committee on electromagnetic safety IECInternational Electrotechnical Commission IARCInternational Agency on Research on Cancer IS-95Interim Standard 95 LBLoRa backscatter LEOlow earth orbit LFlikelihood function LLFlog-likelihood function LLRlog-likelihood ratio LLRTlog-likelihood ratio test LOSLine-of-Sight LRTlikelihood ratio test wlric[LWRIC]lower weak restricted isometry constant wlrict[LWRICt]LWRIC threshold LPWANlow power wide area networks LoRaWANlow power long range wide area network NLOSnon-line-of-sight MBmultiband MCmulticarrier MDSmixed distributed source MFmatched filter m.g.f.moment generating function MImutual information MIMOmultiple-input multiple-output MISOmultiple-input single-output maxs[MJSO]maximum joint support cardinality ML[ML]maximum likelihood MMSEminimum mean-square error MMVmultiple measurement vectors MOSmodel order selection M-PSK[M-PSK]M-ary phase shift keying M-APSK[M-PSK]M-ary asymmetric PSK MTCmachine type communication MGFmoment generating function M-QAM[M-QAM]M-ary quadrature amplitude modulation MRCmaximal ratio combiner maxs[MSO]maximum sparsity order M2Mmachine to machine MUImulti-user interference mMTCmassive machine type communications mm-Wavemillimeter-wave MPmobile phone MPEmaximum permissible exposure MACmedia access control NBnarrowband NBInarrowband interference NLAnonlinear sparse approximation NLOSNon-Line of Sight NTIANational Telecommunications and Information Administration NTPNational Toxicology Program NHSNational Health Service NB-IoTnarrowband Internet of things OCoptimum combining OCoptimum combining ODEoperational distortion-energy ODRoperational distortion-rate OFDMorthogonal frequency-division multiplexing OOKON-OFF Keying omp[OMP]orthogonal matching pursuit OSMP[OSMP]orthogonal subspace matching pursuit OQAMoffset quadrature amplitude modulation OQPSKoffset QPSK OFDMAOrthogonal Frequency-division Multiple Access OPEXOperating Expenditures OQPSK/PMOQPSK with phase modulation PAMpulse amplitude modulation PARpeak-to-average ratio pdf[PDF]probability density function PDFprobability density function p.d.f.[PDF]probability distribution function PDPpower dispersion profile PMFprobability mass function p.m.f.[PMF]probability mass function PNpseudo-noise PPMpulse position modulation PRakePartial Rake PSDpower spectral density PSEPpairwise synchronization error probability PSKphase shift keying PDpower density 8-PSK[8-PSK]8-phase shift keying PRprimary receiver PTprimary trasmitter FSKfrequency shift keying QAMQuadrature Amplitude Modulation QPSKquadrature phase shift keying OQPSK/PMOQPSK with phase modulator RD[RD]raw data RDL"random data limit" RFEHRF energy harvesting ric[RIC]restricted isometry constant rict[RICt]restricted isometry constant threshold rip[RIP]restricted isometry property ROCreceiver operating characteristic rq[RQ]Raleigh quotient RS[RS]Reed-Solomon RSC[RSSC]RS based source coding r.v.random variable R.V.random vector RMSroot mean square RFRradiofrequency radiation RISreconfigurable intelligent surface RNARiboNucleic Acid Rxreceiver SA[SA-Music]subspace-augmented MUSIC with OSMP SCBSES[SCBSES]Source Compression Based Syndrome Encoding Scheme SCMsample covariance matrix SEPsymbol error probability SERsymbol error rate SG[SG]sparse-land Gaussian model SIMOsingle-input multiple-output SINRsignal-to-interference plus noise ratio SIRsignal-to-interference ratio SISOsingle-input single-output SMVsingle measurement vector SNR[SNR]signal-to-noise ratio sp[SP]subspace pursuit SSspread spectrum SWsync word SARspecific absorption rate SSBsynchronization signal block SRsecondary receiver STsecondary trasmitter SFspreading factor THtime-hopping ToAtime-of-arrival TRtransmitted-reference TWTracy-Widom TWDTTW Distribution Tail TCMtrellis coded modulation TDDtime-division duplexing TDMAtime division multiple access Txtransmitter UAVunmanned aerial vehicle uric[URIC]upper restricted isometry constant urict[URICt]upper restricted isometry constant threshold UWBultrawide band UWBcap[UWB]Ultrawide band URLLCUltra Reliable Low Latency Communications VCOvoltage-controlled oscillator wuric[UWRIC]upper weak restricted isometry constant wurict[UWRICt]UWRIC threshold UEuser equipment ULuplink URLLCultra reliable low latency communications WiM[WiM]weigh-in-motion WLANwireless local area network wm[WM]Wishart matrix wm[WM]Wishart matrices WMANwireless metropolitan area network WPANwireless personal area network wric[WRIC]weak restricted isometry constant wrict[WRICt]weak restricted isometry constant thresholds wrip[WRIP]weak restricted isometry property WSNwireless sensor network WSSwide-sense stationary WHOWorld Health Organization Wi-Fiwireless fidelity sss[SpaSoSEnc]sparse source syndrome encoding VLCvisible light communication VPNvirtual private network RFradio frequency FSOfree space optics IoSTInternet of space things GSMGlobal System for Mobile Communications 2Gsecond-generation cellular networks 3Gthird-generation cellular networks 4Gfourth-generation cellular networks 5G5th-generation cellular networks gNBnext generation node B base station NRNew Radio UMTSUniversal Mobile Telecommunications Service LTELong Term Evolution QoSQuality of Service LoRa Backscatter Communications: Temporal, Spectral, and Error Performance Analysis Ganghui Lin, Graduate Student Member, IEEE Ahmed Elzanaty, Senior Member, IEEE, and Mohamed-Slim Alouini, Fellow, IEEE Ganghui Lin and Mohamed-Slim Alouini are with the Division of Computer, Electrical and Mathematical Sciences and Engineering, King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia (e-mail: [email protected], [email protected]). A. Elzanaty is with the 5GIC & 6GIC, Institute for Communication Systems (ICS), University of Surrey, Guildford, GU2 7XH, United Kingdom (e-mail: [email protected]). The source codes can be accessed at https://github.com/SlinGovie/LoRa-Backscatter-Performance-Analysis. July 31, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== LB communication systems can be considered as a potential candidate for ultra LPWAN because of their low cost and low power consumption. In this paper, we comprehensively analyze LB modulation from various aspects, i.e., temporal, spectral, and error performance characteristics. First, we propose a signal model for LB signals that accounts for the limited number of loads in the tag. Then, we investigate the spectral properties of LB signals, obtaining a closed-form expression for the power spectrum. Finally, we derived the SER of LB with two decoders, i.e., the ML and FFT decoders, in both AWGN and double Nakagami-m fading channels. The spectral analysis shows that out-of-band emissions for LB satisfy the ETSI regulation only when considering a relatively large number of loads. For the error performance, unlike conventional LoRa, the FFT decoder is not optimal. Nevertheless, the ML decoder can achieve a performance similar to conventional LoRa with a moderate number of loads. SER; LB; IoT; power spectral density § INTRODUCTION The goal of 5G and beyond networks is to realize three core services, i.e., eMBB, URLLC, and mMTC <cit.>, to support extreme network capacity, latency-sensitive critical missions, and a massive number of connected devices, respectively. In mMTC, a colossal variety of inexpensive devices is needed to communicate at low power consumption. With nearly 30 billion devices expected to be connected by 2030 <cit.>, the prospect of mMTC makes it possible for IoT applications such as smart cities, homes, and agriculture. In this regard, many wireless technologies have recently emerged to realize such low-power communications with low-priced devices. For instance, LoRaWAN, SigFoX, and NB-IoT are three prominent technologies supporting LPWAN <cit.>. However, some application scenarios put forward more stringent requirements on the power consumption that even technologies such as LPWAN cannot meet. In this context, BC is one of the most remarkable technologies for ultra-low-power communications with its ability to operate with sub-milliwatt power consumption. Typical BC systems contain backscatter tags and transceivers. The tag reflects the incoming excitation signals while modulating it with its data by changing the incident signal amplitude, frequency, or phase <cit.>. The innovation of BC lies in its radio-passive property that removes the power-consuming analog components such as RF oscillators, decoupling capacitors, and crystals. Hence, BC devices can operate with emerging flexible and printed batteries or even harvest environmental energy<cit.>. BC can be divided into three categories based on the adopted architecture: (i) monostatic backscatter, where a tag modulates an excitation signal generated by a transceiver, then reflects it back to the transceiver; (ii) bistatic backscatter, where the Tx and Rx are separated; (iii) ambient backscatter, where the excitation signal comes from existing surrounding RF sources in the environment <cit.>. Despite the low-cost, low-power merits that BC owns, one of the most detrimental defects of BC is its limited communication range. The electromagnetic waves suffer from severe multipath fading and strong attenuation in urban or indoor environments, significantly reducing the coverage of BC <cit.>. In some medical or wearable applications, the human body also shortens the communication range<cit.>. One possible solution to this problem is LB, where the backscatter tag generates CSS modulated signals to improve the robustness of BC<cit.>. LoRa modulated signals have exceptionally high sensitivity up to -149 dBm, compensating for the long-range path loss as well as the multipath fading effects. Although LB is similar, to some extent, to LoRa modulation in terms of high sensitivity, they differ in many aspects. First, the LB modulated signals only have a finite number of phases because of the limited number of antenna loads in the backscatter tags. Second, conventional LoRa Tx are active devices that directly transmit signals to the gateway, hopefully through a line of sight link. In contrast, in LB the signals from the Tx pass through the channel between the Tx and tag before backscattered to the Rx, i.e., double-fading channel. The aforementioned aspects make a huge difference in analyzing the performance of LB. More precisely, a double fading channel should be considered in LB, resulting in a more complicated PDF for the channel amplitude. The discrete phases also make the waveforms representing different symbols non-orthogonal at the Nyquist sampling rate, rendering non-equivalency between the ML and FFT decoders. §.§ Related work In the following, we review the state-of-the-art LB schemes and the performance analysis regarding BC and LoRa as well as LB. §.§.§ The State-of-the-art of LB In order to increase the range for BC, a LB communication system with a harmonic cancellation scheme is proposed in <cit.>. The proposed system consists of a single-tone carrier Tx, a backscatter tag, and a Rx. The tag uses the single tone to synthesize CSS modulated signals and reflect them to the Rx. To avoid interference from the RF source, a frequency shift is introduced in the tag by multiplying the incoming signals by an approximated cosine and sine with discontinuous step transitions, resulting in high-frequency components, i.e., the harmonics. The harmonic cancellation is achieved by adding more voltage levels, i.e., more antenna loads in the tag, to better approximate a pure sinusoid. The design presents a reliable wide area network coverage, i.e., 475 m from the Tx and Rx, provided by a low-cost device that consumes 1000x lower power than a normal LoRa Tx. However, the tag still needs a battery as the power source rather than harvesting energy from the environment. In <cit.>, an RFEH LB scheme is proposed. It is reported that the tag can self-start up while harvesting RF energy from the excitation signal as low as -22.5 dBm and send the acquired data to a Rx that is 381 m away. There are two possible reasons for the shorter range compared to <cit.>. One is that the RFEH LoRa tag only uses two antenna loads, resulting in more harmonics. Another reason is that the RFEH LoRa tag harvest energy from the RF source instead of having an embedded battery so that a proportion of the energy is used to power the IC. To resolve the problem of deployment difficulties of traditional HD LB, the first FD LB architecture is proposed in <cit.>. Nevertheless, the FD LB achieves a shorter range due to a different sensitivity protocol and a reduced link budget introduced by a hybrid coupler architecture used to reduce cost. The above-mentioned LB systems all use single-tone carriers and the tag modulates its bits into a subcarrier to generate LoRa packets. However, there are other LB systems that use normal LoRa modulated waves as excitation signals <cit.>. These ambient backscatter designs use simpler modulation methods such as OOK, which leads to less complicated tag design at a cost of either shorter communication range, more complicated decoding algorithms, or lower throughput. §.§.§ Performance Analysis for BC, LoRa, and LB For BC, several works have considered its performance analysis in terms of detection schemes, error rate, and information rate. In <cit.>, the authors consider FSK with a coherent receiver for the bistatic backscatter radio channel. Nevertheless, coherent detection is complex and requires CSI estimation. To avoid this difficulty, non-coherent detection schemes for ambient BC have been considered in <cit.>. With the different detection schemes mentioned above, the exact BER for the ambient backscatter system is derived in <cit.>. Since it uses unknown ambient signals as carriers, the actual capacity is of interest. In <cit.>, the achievable rate of the ambient BC system is analyzed under different channels. Although the performance analysis of several BC systems with conventional modulation schemes (e.g., BPSK, FSK, OOK) have been analyzed in the literature, few works consider long-range CSS modulation such as LB. In the following, we first discuss the performance analysis of conventional LoRa then LB. For conventional LoRa, the power spectrum characteristics of normal LoRa are analyzed in terms of the Fresnel functions<cit.>, in which the derived expressions are not applicable for LB because of its limited number of phases. In <cit.>, the BER of LoRa in an AWGN channel is derived by using the Monte-Carlo approximation, but the method requires lots of computing resources. The exact SER of orthogonal signaling with non-coherent detection is presented in <cit.>, which is applicable for LoRa. However, the equation is only derived in an AWGN scenario and the high-order combination involved in the equation will introduce severe numerical problems. The exact SER of LoRa modulation in various fading scenarios, i.e., Rayleigh, Rician, and Nakagami fading channels, are analyzed in <cit.>. Nevertheless, the expressions are hardly computable because of the remaining high-order combination. In <cit.>, simple asymptotic BER expressions of LoRa modulation over Rician and Rayleigh channels are derived at cost of accuracy. The numerical results show poor accuracy at low SNR. In <cit.>, an accurate closed-form approximation of the BER in both AWGN and Rayleigh fading channels is derived. However, the analysis for other fading models such as Rician and Nakagami-m fading channels is not included. In <cit.>, the author proposed a new approach based on the Marcum function to estimate the BER of LoRa in different propagation environments, namely, AWGN, Rayleigh, Rician, and Nakagami channels. In <cit.>, a modification of a union bound on the error probability is proposed to calculate the BER with less complexity. The approximation is applicable for both coherent and non-coherent detection. The theoretical performance analysis is also extended to Hamming-coded LoRa systems. In <cit.>, a novel modulation scheme is introduced that employs both up-chirp and down-chirp simultaneously, and its BER is derived in the AWGN channel. Numerical results demonstrate that this scheme achieves a doubled throughput without a significant increase in BER. Further scenarios have been studied such as channel coding<cit.>, interference channels <cit.>, and imperfect orthogonality<cit.> for LoRa. Although the performance analysis of conventional LoRa modulation over AWGN and various fading channels have been extensively studied in the literature, they do not consider the double fading channel and the limited number of loads that characterize LB. In <cit.>, the error performance of LB is discussed in AWGN channels. However, the essential characteristics of LB for the signal model are not considered. For instance, the fact that LB modulated signals only have limited available phases is not considered. Also, since the performance was only in an AWGN channel, it does not account for the double-fading effects which are crucial for LB. §.§ Contributions Although some prototypes have proved the feasibility of LB, more theoretical insights are needed in order to support the design and deployment of LB systems. Nevertheless, a comprehensive performance analysis of LB communication systems is still lacking in the literature. In fact, none of the aforementioned work has considered the main features that characterize LB such as finite phases and the double fading channel. In this paper, we comprehensively analyze LB modulation from various aspects, i.e., temporal, spectral, and error performance characteristics. More precisely, we provide the first expression for the LB modulated signal that accounts for the reduced complexity of the system compared to LoRa. The reason is that the number of antenna loads is finite, limiting the number of available phases for the signal. Based on the provided signal expression, we analytically derive a closed-form expression for the power spectrum of baseband LB signals. The derivation of the power spectral density involves integration over the non-linear phase quantization function. The spectral analysis provides a better understanding of the adjacent channel interference for LB modulation and its relation to the number of loads (i.e., system complexity) <cit.>. Also, we derive the optimal decoder for LB, i.e., ML, and compare it with the conventional FFT decoder. For ML and FFT decoder, we conduct error performance analysis in AWGN and fading channels. More precisely, we derive closed-form approximations for the SER in both AWGN and double Nakagami-m fading channels for fixed transmit power. Besides, given a constraint on the average transmit power, we provide the optimal power allocation scheme using water-filling and analyze the corresponding average SER. The performance analysis of LB is quite involving compared to normal LoRa. For instance, unlike normal LoRa, the LB waveforms representing different symbols are not orthogonal when sampled at the Nyquist rate. This leads to a SER expression involving a product of Marcum Q functions with different shape parameters. Additionally, the PDF of the cascaded Nakagami-m fading channel entails integration over a modified Bessel function of the second kind, putting further the difficulty in numerical evaluation. The contribution of this paper can be summarized as follows: * We provide a novel signal expression in the time-domain for LB signal with a generalized number of loads. * We derive a closed-form expression for the power spectrum of LB signals and investigate whether LB meets the regulation of ETSI for adjacent channel interference. * We provide a ML-based decoder for LB. * We derive closed-form approximations for the SER of LB in both AWGN and double Nakagami-m fading channels, considering both FFT and ML decoders. * We study the SER performance of LB for various power allocation schemes, i.e., fixed and optimized power allocation techniques. §.§ Notations Throughout this paper, we denote the Tx-tag and tag-Rx links with subscripts 1 and 2, respectively, vectors with bold small letters, r.v. with calligraphic letters, e.g., ℒ, the PDF of ℒ with f_ℒl, the CDF of ℒ with F_ℒl, the expectation and variance of ℒ with 𝔼ℒ and 𝕍ℒ, respectively, complex Gaussian distribution that has mean μ and variance σ^2 with 𝒞𝒩(μ,σ^2), complex conjugate of a complex variable x with x^*, and inner product of vectors α and β with <α,β>. The main symbols considered in this paper are listed in Table <ref>. §.§ Structure of the paper The remainder of this paper is structured as follows. First, the system model is presented in Section <ref>, and the spectral analysis of LB is provided in Section <ref>. In Section <ref>, the error performance in AWGN and fading channels are investigated. Then, the numerical results are presented in Section <ref>, and Section <ref> concludes this paper. § SYSTEM SETUP AND SIGNAL MODEL In this section, we first explain the system setup and channel model for LB communication systems. Then, we provide the analytical expressions for LB signals, and we propose two decoders for LB. §.§ System Setup and Channel Model We consider a LB communication system with a Tx generating a carrier wave, a backscatter tag, and a Rx, as shown in Fig. <ref>. The Tx sends a single-tone excitation wave to the tag with carrier frequency f_c. Then, the tag modulates the excitation signal with its data using CSS modulation and reflects it to the Rx. We consider a scheme where the tag introduces a frequency offset Δ f to the incoming signal, shifting it to a channel centered at f_c+Δ f[Δ f is typically 3 orders of magnitude lower than f_c.]to avoid interference from the direct link between Tx and Rx <cit.>. LB networks can be scaled by considering different SFs and/or carrier frequencies with negligible interference among users. Also, we consider a flat double fading channel, where the complex channel gains of the Tx-tag and tag-Rx links are represented by h_1 and h_2, respectively. The received signal, sampled at the Nyquist rate, can be expressed as r_a[k]=h_1 h_2 √(E_s) x_a[k]+ω[k], ∀ k ∈{0,1,⋯,M-1}, where E_s denotes the baseband symbol energy, x_a[k] denotes k^th sample of the baseband LB signal with data symbol a, and ω[k] represents the thermal noise drawn from 𝒞𝒩(0,2σ^2). For simplicity, the multiplication of two channel gains can be expressed in an overall channel gain h, i.e., h ≜ h_1h_2. This channel model includes the case of monostatic BC by considering h_1=h_2=√(h). In the following, we explain the backscatter process and provide analytical expressions for LB baseband signals. §.§ Backscatter Process and Signal Model The reflected LoRa signals are synthesized by the RF switch that toggles between different antenna loads to create a set of reflection coefficients. The set of coefficients is selected to change the amplitude and phase of the incoming signal S_in. More precisely, let us define Γ_L and Γ_A as the load impedance and antenna impedance, respectively. The reflected signal S_out can be expressed as S_out(t)=Γ_L(t)-Γ_A/Γ_L(t)+Γ_A S_in(t)=|Γ_T(t)|e^jθ_T(t) S_in(t), where the change of Γ_Lt is discrete so that the possible number of phases θ_Tt and amplitudes |Γ_T(t)| are limited. However, to perfectly synthesize LoRa signals, we need an infinite number of phases so that the backscattered signals are only approximations of the ideal sinusoidal waves. For this reason, increasing the number of phases in LB can enhance the system performance and approach traditional LoRa systems. On the other hand, increasing the number of possible phases raises not only the design complexity by adding more loads, but also the switching frequency, reducing the tag service life. We first elaborate on some typical cases, i.e., using two, four, and eight phases to approximate sine and cosine. Then, we generalize the model. As shown in the first, second, and third columns in Fig. <ref>, we compare the IQ components of approximated sinusoidal waves (marked with red) with pure sinusoidal waves (marked with blue). If we consider only two phases for LB, the in-phase component of the backscattered wave is zero while the quadrature component is a square wave. To avoid interference with the direct link, a frequency shift Δ f should be introduced. In normal non-BC systems, the frequency shift is realized by multiplying the original signals centered at f_c by a sinusoid with frequency Δ f. However, due to the limited number of loads in BC tags, it is introduced through multiplication with square waves with frequency Δ f, resulting in additional harmonics <cit.>. More precisely, while moving the LoRa signals into the channel centered at f_c+Δ f, a mirror copy centered at f_c-Δ f is also created. Also, there are harmonics centered at f_c± 3Δ f, f_c± 5Δ f, f_c± 7Δ f, etc. If the system adopts four loads, both in-phase and quadrature components are square waves. This scheme cancels the mirror copies of the spectra. Nevertheless, the harmonics are preserved at f_c-3Δ f, f_c+5Δ f, f_c-7Δ f, etc<cit.>. When considering eight loads, two more voltage levels are added to the approximated sinusoids. The staircase-like waveforms cancels at least the harmonics centered at f_c-3Δ f and f_c+5Δ f<cit.>. Also, higher-order harmonics can be canceled by adding more phases if required. In this regard, unlike conventional LoRa, LB has a finite number of phases in addition to several harmonics. In the following, we provide the signal expressions for the baseband signals accounting for the finite phases. §.§.§ Continuous-Time Description The baseband LB signals are synthesized by switching between different loads so that they only have a limited number of phases, making them an approximation to the normal LoRa signals. To generalize the model, let us consider the number of loads written in a form of 2^N, where N∈1,2,3,.... The phases are evenly distributed and rotationally symmetric in the phase diagram at position (2n-1)π/2^N, ∀ n∈{1,2,⋯, 2^N} with some examples shown in Fig. <ref>. We introduce to the phase of conventional LoRa signal a mid-rise quantizer to perfectly model the discrete phases of LB. The quantizer maps the infinite phases of the LoRa modulation into a finite set of phases in LB. More precisely, the mid-rise quantization function with 2^N levels for input value x∈-π,π is given by Q_N(x)=⌊ 2^N-1x/π⌋ +1/22^N-1/π, where ⌊·⌋ is the floor function<cit.>. Let us also specify the periodicity of the quantization function as Q_Nx+2π≜Q_Nx for any real input value x, making it consistent with the periodic change of the phase. Let T_s≜ M/B be the symbol duration, the instantaneous phase of LoRa modulation with symbol a for interval 0,T_s can given in<cit.> as ϕ̂_at=2π Bta/M-1/2+Bt/2M-ut-τ_a, where B denotes the bandwidth of LoRa, M=2^SF, SF∈7,8,⋯,12 denotes the spreading factor, and τ_a=(M-a)/B denotes the time instant of the sudden frequency change. Thus, the instantaneous phase of LB with symbol a is calculated as ϕ_at = Q_Nϕ̂_at. Thus, the complex envelope for LB waveform of symbol a with unit power is calculated using (<ref>) as x_at =expjϕ_at =expjQ_N2π Bta/M-1/2+Bt/2M-ut-τ_a. §.§.§ Discrete-Time Description Let us consider the simple receiver implementation that sample the received signal at the chip rate, i.e., 1/T_c=M/T_s=B, the M samples in the interval 0,T_s, for k∈0,1,⋯,M-1, are ϕ_akT_c =Q_Nkπ/M2a-M+k-2Mπ uk+a-M/B =Q_Nkπ/M2a-M+k, where the last equality is due to the fact that 2Mπ u· is always an integer multiple of 2π for all k and Q_Nx is periodic of 2π. Normalizing the total energy of the samples, we have the discrete-time expression of the baseband LB signal with 2^N quantization phases x_a[k]=√(1M)exp{j Q_N[kπM(2a-M+k)]}, k∈{0,1,⋯,M-1}. §.§ LB Decoders We study the error performance of LB under two decoders, i.e., the ML and FFT decoders. We consider that the overall channel amplitude |h|=h_1h_2 is known to the Rx, and it performs non-coherent detection.[Non-coherent detectors for LB can provide a good trade-off between the performance and complexity since the performance improvement using coherent detectors is only 0.7 dB for traditional LoRa <cit.>.] §.§.§ Maximum Likelihood Decoder The optimum detector in an AWGN channel is the ML detector if all the symbols are equiprobable<cit.>. The decoder performs cross-correlation on the received signal with all possible waveforms and chooses the symbol that has the largest absolute value of cross-correlation with the received signal, i.e., â =max_0≤ i ≤ M-1| < r_a, x^*_i>| =max_0≤ i ≤ M-1∑_k=0^M-1r_a[k] ·x_i^*[k], where â is the symbol decision made by the Rx, x_i^*[k] is the complex conjugate of x_i[k]. The ML decoder is the optimum decoder for LB as shown in Appendix A. However, LB is a relatively high-order modulation that reaches M=4096 for SF=12, resulting in a more complicated design and implementation of the decoder. §.§.§ FFT Decoder The process of FFT decoder follows three steps. First, The received signal is firstly multiplied by a complex down-chirp signal x_d[k] r̃_a[k]=r_a[k] x_d[k]=r_a[k] √(1/M)e^-j2πk^2/2M+jπ k, where the dechirped signal r̃_a[k] is called the twisted signal. The symbol decision is made by choosing the maximum value among M-bin dft output on r̃_a[k] â =max_0≤ i ≤ M-1|DFT(r̃_a[k])| =max_0≤ i ≤ M-1|∑_k=0^M-1r̃_a[k] e^-j2π ki/M|. It has been proven that the two decoders are equivalent for norma LoRa<cit.> but this is not valid for LB. Also, the waveforms representing different symbols sampled at the chip rate B are non-orthogonal. Fig. <ref> compares the cross-correlation of LB waveforms (4 quantization phases) with that of normal LoRa at SF=7. It shows that the cross-correlation of LB waveforms has numerous non-zero values while the cross-correlation between different symbols are all zero for normal LoRa[LoRa symbols are non-orthogonal, it can be proven orthogonal only when sampled at Nyquist rate.]. However, most of the non-zero correlation values are relatively small, indicating that the symbols are not strongly correlated. In Table. <ref>, we show the maximum cross-correlation value between different LB waveforms for N∈2, 3, 4, 5 and SF∈7,8, ⋯, 12. The cross-correlation generally decreases as N and SF increase. § SPECTRAL ANALYSIS The power spectrum of the LB modulation is analytically determined in closed form in this section. We consider a source that produces a random symbol sequence where all the symbols are independent, identically distributed with r.v.s ℬ_n. Assuming that the symbols are equiprobable, we have Pℬ_n=a=1/M, a∈0,1,⋯,M-1. The time sequence of the LB waveforms can be expressed with a random process ℐt=∑_n x_st-nT_s;ℬ_ng_T_st-nT_s, where g_T_s is a rectangular function with width T_s. The power spectrum density of the random process ℐt can be derived as 𝒢_ ℐf= 𝒢_ ℐ^cf+𝒢_ ℐ^df. where 𝒢_ ℐ^cf and 𝒢_ ℐ^df are the continuous and a discrete parts of the spectrum, respectively. The continuous and discrete spectrum in (<ref>) can be obtained by applying frequency domain analysis of randomly modulated signals for the random process <cit.> 𝒢_ ℐ^cf =1/T_s∑_a=0^M-11/MS_af^2-∑_a=0^M-11/MS_af^2, 𝒢_ ℐ^df =1/M^2T^2_s∑_l=-∞^∞∑_a=0^M-1S_alB/M^2δf-lB/M, where S_af_a=0^M-1 are the Fourier transforms of the waveforms x_at_a=0^M-1 from the modulator. The Fourier transform of the complex envelope can be expressed as S_af =∫_0^T_se^jQ_N2π Bta/M-1/2+Bt/2M-ut-M-a/Be^-j2π ftdt =∫_0^M-a/Be^j Q_N2π Bta/M-1/2+Bt/2Me^-j2π f tdt+ ∫_M-a/B^B/Me^j Q_N2π Bta/M-3/2+Bt/2Me^-j2π f tdt. Leveraging the periodicity of Q_N·, we rewrite the second term in (<ref>) as ∫_M-a/B^B/Me^jπQ_N2π Bta/M-3/2+Bt/2M+2πM-ae^-j2π f tdt. Let us also define f_1t≜ 2π Bta/M-1/2+Bt/2M, and f_2t≜ 2π Bta/M-3/2+Bt/2M+2πM-a as the two terms within Q_N· of two integral parts. Let us define ft as ft≜{ f_1t , 0 ≤ t<M-a/B f_2t , M-a/B≤ t≤ M/B. It is worth noting that, ft is a continuous function with f_1M-a/B=f_2M-a/B. Thus, the sum of two separated integrals in (<ref>) can be replaced with one integral S_af=∫_0^B/Me^j Q_Nfte^-j2π f tdt. It is challenging to directly compute the integral in (<ref>) with the quantization function inside. However, one possible solution is to divide the integral interval into slots, as shown in Fig. <ref>. The value of Q_Nft remains unchanged in each intervals divided by a time set t_m_m=1^ψ. Details about obtaining t_m_m=1^ψ are provided in Appendix B. The integral in each slot can be calculated as I_mf =∫_t_m^t_m+1e^j Q_nfte^-j2π f tdt =j/2 fe^jπQ_nft_m+t_m+1/2e^-j2π f t_m+1-e^-j2π f t_m, where (t_m+t_m+1)/2 can be replaced by any real value between t_m and t_m+1. Since we have f0=fM/B=0, the overall integral interval begins with t_0 and ends with t_ψ. Thus, the integral in (<ref>) is calculated as a sum of I_mf S_af=j/2π f∑_m=1^ψ-1e^jπQ_Nft_m+t_m+1/2e^-j2π f t_m+1-e^-j2π f t_m. By substituting (<ref>) into (<ref>) and (<ref>), we can calculate both the continuous and discrete parts of the spectrum for baseband LB signals. § PERFORMANCE ANALYSIS In this section, we derive the SER of the LB communication system in terms of different decoders, channel models, and power strategies. §.§ LB SER Performance over AWGN Channels The SER in an AWGN channel is a conditional probability conditioned on the overall channel gain h. Thus, we consider h as a constant in this subsection. We define the r.v. ℒ^D_(a,i) as the absolute value of i-th bin of D∈𝔻≜ML,FFT decoder when symbol a is transmitted. For ML decoder, ℒ^ML_(a,i) is calculated as ℒ^ML_(a,i) =∑_k=0^M-1r_a[k] x_i^*[k] = h√(E_s)ξ_a,i^ML +𝒲^ML_(a,i), where ξ_a,i^ML=∑_k=0^M-1x_a[k] x_i^*[k] is the cross-correlation between the transmitted waveform and the i-th reference signal and 𝒲^ML_(a,i)=∑_k=0^M-1ωkx_i^*[k] is the noise projection of the reference signals, following complex Gaussian distribution with variance 2σ^2. For FFT decoder, ℒ^FFT_(a,i) is ℒ^FFT_(a,i) =DFT( r_a[k] x_d[k] ) = h√(E_s)ξ_a,i^FFT +𝒲^FFT_(a,i), where ξ_a,i^FFT=DFT( x_a[k] x_d[k] ) denotes the i-th bin of the dft for the dechirped reference signals and 𝒲^FFT_(a,i)=DFTωkx_d[k] is the i-th bin of the dft for the dechirped complex white Gaussian noise, which also follows complex Gaussian distribution with variance 2σ^2. Since (<ref>) and (<ref>) have the same pattern, i.e., the absolute value of a sum of a complex constant and a complex Gaussian r.v., ℒ^D_(a,i)= h√(E_s)ξ^D_(a,i)+𝒲^D_(a,i), which follows Rician distribution with shape parameter κ^D_(a,i)=|h√(E_s)ξ_a,i^D|^2/2σ^2, as shown in Appendix C. Without losing generality, if ξ_a,i^D=0, the shape parameter is zero so that the Rician distribution becomes Rayleigh distribution. It is worth noting that the r.v. in the set ℒ^ML_(a,i)_i=0^M-1 are not independent because 𝒲^ML_(a,i) are the noise projection of non-orthogonal waveforms, as shown in Fig. <ref>. On the other hand, the r.v. in the set ℒ^FFT_(a,i) are independent since the noise is projected to the directions of orthogonal basis in the dft process. The dependence of ℒ^ML_(a,i)_i=0^M-1 make it challenging to find their joint distribution. However, they are approximately independent for the following three reasons. First, as shown in Fig. <ref>, most of the cross-correlation values are zero, namely, most of the r.v. within ℒ^ML_(a,i)_i=0^M-1 are independent. Second, the non-zero cross-correlation values are relatively small so that the dependent r.v. are not strongly correlated. Also, the non-zero cross-correlation values decrease as the number of quantization phases or the spreading factor increases. Nevertheless, ignoring the correlation makes the derived expression an upper bound to the exact SER<cit.>. In the following, we assuming that ℒ^ML_(a,i)_i=0^M-1 are independent so that the derivation applies to both decoders. To obtain an expression for the SER, let us define the instantaneous SNR as γ=h^2E_s/T_s2σ^2B=h^2E_s2σ^2M. The PDF and CDF of ℒ^D_(a,i) conditioned on the overall channel gain h can be written for D∈𝔻 as f_ℒ^D_(a,i)|h(l) = l/σ^2exp[-(l^2+C^2)2σ^2] I_0(lCσ^2), F_ℒ^D_(a,i)|h(l) =1-Q_1(Cσ, lσ), where C=h√(E_s)ξ_a,i^D, Q_1(·) is the Marcum Q-function of order one <cit.>, I_0(·) is the modified Bessel function of the first kind and order zero<cit.>. The detection error happens when â≠ a, i.e., ℒ^D_(a,a) is not maximum among the set ℒ^D_(a,i)_i=0^M-1. In other words, the SEP can be calculated by comparing the correct bin, namely, ℒ^D_(a,a), with the maximum of the noisy bins. Thus, we define ℒ̂^D_(a)≜max_ i,i≠ aℒ^D_(a,i) whose CDF of ℒ̂^D_(a) can be computed using order statistics as F_ℒ̂^D_(a)|h(l) =∏_i=0,i≠ a^M-1 F_ℒ^D_(a,i)|h(l) =∏_i=0,i≠ a^M-1[1-Q_1(|h√(E_s)ξ^D_(a,i)|σ, lσ)]. The conditional SEP using D∈𝔻 decoder given a transmit symbol a is calculated as P^D_e|a =P(ℒ̂^D_(a)>ℒ^D_(a,a)| h) =∫_0^∞[1-F_ℒ̂^D_(a)|h(l)] f_ℒ^D_(a,a)|h(l)dl, where the shape parameter of ℒ^D_(a,a) is κ^D_(a,a) =|h√(E_s)ξ^D_(a,a)|^2/2σ^2 = ξ^D_(a,a)^2 M γ. The Rician distributed r.v. ℒ^D_(a,a) can be approximated as a Gaussian r.v. if the shape parameter κ^D_(a,a) is larger than 2 <cit.>. In our setting, as shown in (<ref>), ξ^D_(a,a)≈ 1 and M=2^SF ranging from 2^7=128 to 2^12=4096 is a large number so that ℒ^D_(a,a) can be approximated as a Gaussian r.v. in an acceptable SNR range. For example, if we consider SF=8 and ML decoder, we have ξ^ML_(a,a)=1 and M=256. The proper SNR range for the approximation is γ≥ 1/128 ≈ -21.1 dB, which covers most of the SER range of interest. Hence, we have f_ℒ^D_(a,a)|h(l)≈1√(2πσ_a^2)exp[-(l-μ_a)^2/2σ_a^2], where μ_a and σ_a are the mean and variance of ℒ^D_(a,a), respectively, i.e., μ_a =𝔼ℒ^D_(a,a)= σ√(π/2)· L_1/2-κ^D_(a,a), σ_a^2 =𝕍ℒ^D_(a,a)= 2σ^21+κ^D_(a,a)-μ_a^2, where L_q(·) denotes a Laguerre polynomial<cit.>, and for the case q=1/2, we have L_1/2(x)=e^x/2[(1-x)I_0(-x2)-xI_1(-x2)] . In Table. <ref>, we show some typical values of κ^D_a,a, μ_a, and σ_a^2 as a function of N for both ML and FFT decoders. The complexity of evaluating integral (<ref>) mainly comes from the product of M-1 non-identical Rician CDFs. As a possible solution, we can leverage an asymptotic approximation of the order statistics of a sequence of independent and non-identically distributed (i.n.i.d.) Rician r.v. <cit.> using the EVT. Nevertheless, it shall result in an exponential-in-exponential expression, rendering difficulties in the integration. Alternatively, we consider Gauss-Hermite quadrature to evaluate the integral numerically. In this procedure, the integral is converted to a weighted sum of function values using the Gauss-Hermite quadrature <cit.>. Since the Rician PDF (<ref>) is zero for l<0, the lower limit of integral (<ref>) can be substituted with -∞. Thus, we get P^D_e|a ≈∫_-∞^∞[1-F_ℒ̂^D_(a)|h(l)]√(2πσ_a^2) exp[-(l-μ_a)^2/2σ_a^2] dl. Let us define l̂≜l-μ_a√(2)σ_a and substitute l with √(2)σ_al̂+μ_a so that the integral becomes P^D_e|a ≈∫_-∞^∞[1-F_ℒ̂^D_(a)|h(√(2)σ_al̂+μ_a)]exp-l̂^2/√(π)dl̂ ≈1√(π)∑_t=1^N_GHω_t[1-F_ℒ̂^D_(a)|h(√(2)σ_a x_t+μ_a)], where N_GH is the number of function samples used to approximate the integral, x_t are the roots of the physicists' version of the Hermite polynomial H_N_GH(x), and ω_t are the corresponding weights. The integration formula and corresponding weights can be found in <cit.>. We assume that the transmit symbols are equiprobable so that the average SER of LB in the AWGN channel can be expressed as P^D_e|h =1/M∑_a=0^M-1 P^D_e|a =1/M√(π)∑_a=0^M-1∑_t=1^N_GHω_t[1-F_ℒ̂^D_(a)|h(√(2)σ_a x_t+μ_a)]. §.§ LB SER Performance over Double Nakagami-m Fading Channels We consider a double Nakagami-m fading scenario with different shape parameters m_1, m_2 and spread parameters Ω_1, Ω_2. The spread parameters are related to the link distance, i.e., Ω_1∝ 1/d_1^2 and Ω_2∝ 1/d_2^2, where d_1, d_2 are the distances of the two links. Let us define r.v. ℋ_1 and ℋ_2 as the channel amplitudes of Tx-tag and tag-Rx links, respectively. The PDFs of ℋ_1 and ℋ_2 can be expressed as f_ℋ_1(h_1) =2m_1^m_1Γ(m_1)Ω_1^m_1|h_1|^2m_1-1exp-m_1Ω_1|h_1|^2, f_ℋ_2(h_2) =2m_2^m_2Γ(m_2)Ω_2^m_2|h_2|^2m_2-1exp-m_2Ω_1|h_2|^2. Let us also define the r.v. ℋ≜ℋ_1ℋ_2 as the overall channel amplitude, whose PDF is governed by <cit.> f_ℋ(h)=4(r_1r_2)^v/2Γ(m_1)Γ(m_2)|h|^v-1K_n(2√(r_1r_2) |h|), where Γ· denotes the Gamma function<cit.>, K_n(·) denotes the modified Bessel function of the second kind with order n<cit.>, r_1=m_1/Ω_1, r_2=m_2/Ω_2, v=m_1+m_2, and n=|m_1-m_2|. In the following, we analyze two cases with different power allocation schemes, i.e., fixed transmit power and varying transmit power with an average power constraint. §.§.§ Fixed Transmit Power Considering the fixed symbol energy, the average SER in fading channels is calculated as P^D_e=∫_0^∞P^D_e|h f_ℋ(h) d|h|. Substituting (<ref>) and (<ref>) into (<ref>), and substituting h with h̃/2√(r_1r_2), we obtain P^D_e =2^2-v√(1/π)MΓ(m_1)Γ(m_2)∑_a=0^M-1∑_t=1^N_GHω_tI_1, where I_1=∫_0^∞[1-F_ℒ̂^D_(a)|h̃(√(2)σ_a x_t+μ_a)]h̃^v-1K_n(h̃)dh̃. In the following, we compute I_1 depending on the value of n since for certain cases the Bessel function in (<ref>) can be expressed in closed-form while for other cases it is challenging to find a closed-form expression. n is half-integer We consider the order of the Bessel function as a half-integer, i.e., n=u+1/2 for all u∈0,1,2,⋯. Thus, the modified Bessel function of the second kind with order n can be expressed in closed form<cit.> K_u+1/2(z)=√(π2z)∑_k=0^u(u+k)!k!(u-k)!(2z)^k e^-z. Substituting (<ref>) into I_1 yields I_1 =∫_0^∞√(π/2h̃) [1-F_ℒ̂^D_(a)|h̃(√(2)σ_a x_t+μ_a)] ×∑_k=0^⌊ n⌋(⌊ n⌋+k)!k!(⌊ n⌋-k)!(2h̃)^kh̃^v-1 e^-h̃dh̃. where an a power and an exponential terms h̃^v-1 e^-h̃ are included in the integrand. Thus, unlike using Gauss-Hermite quadrature in (<ref>) that integrates on an integrand with an exponential term e^-l̂^2, we consider Gauss-Laguerre quadrature to evaluate the integral numerically I_1≈√(π/2) ∑_e=1^N_GLω_eG_t(x_e)√(x_e)∑_k=0^⌊ n⌋(⌊ n⌋+k)!k!(⌊ n⌋-k)!(2x_e)^k, where G_t(x_e) is given by G_t(x_e)=1-∏_i=0,i≠ a^M-1[1-Q_1(|x_e√(E_s)ξ^D_(a,i)|2√(r_1r_2)σ, √(2)σ_a x_t+μ_aσ)], N_GL is the number of function samples, x_e are the roots of the generalized Laguerre polynomial L^(v-1)_N_GL(x), and ω_e are the corresponding weights<cit.>. Substituting (<ref>) into (<ref>), we obtain the average SER in double Nakagami-m fading channel for the half-integer order n P^D_e=2^2/3-vMΓ(m_1)Γ(m_2)∑_a=0^M-1∑_t=1^N_GH∑_e=1^N_GLω_tω_eG_t(x_e)√(x_e) ×∑_k=0^⌊ n⌋(⌊ n⌋+k)!k!(⌊ n⌋-k)!(2x_e)^k. n is not half-integer If n is not half-integer, it is challenging to obtain a closed-form expression for the modified Bessel function of the second kind with an order n. Thus, we provide an approximation for the Bessel function using the two Bessel functions with adjacent half-integer orders. The approximation is tight according to numerical results. We also assume that n, namely, the difference between the shape parameters of two channels, is larger than 1/2. This is because the tag is typically placed closer to the Tx to send signals to a Rx farther away. The link with a shorter distance, i.e., the Tx-tag link, is more likely to have a LOS path, while the tag-Rx link is more likely to experience a more severe multipath fading effect. For all n>1/2, there exists only one integer u satisfying the condition u-1/2<n< u+1/2. Both K_u-1/2(x) and K_u+1/2(x) can be written in form of elementary functions with an exponential term, as shown in (<ref>). Let us define a closed-form function K_u,nx as K_u,nx=K_u-1/2(x)K_u+1/2(x)^u-n√(K_u-1/2(x)K_u+1/2(x)), The approximation is provided in <cit.> as K_nx ≈ C_u,n K_u,nx, where C_u,n is a constant given by C_u,n=(u-1/2)^u-n+1/2Γ(n)Γ(u+1/2)< 1. Let us also define the summation term in K_u-1/2(x) and K_u+1/2(x) as Ψ_u-1/2(x) =∑_k=0^u-1(u-1+k)!k!(u-1-k)!(2x)^k, Ψ_u+1/2(x) =∑_k=0^u(u+k)!k!(u-k)!(2x)^k, and Ψ_u,nx as Ψ_u,nx=Ψ_u-1/2(x)Ψ_u+1/2(x)^u-n√(Ψ_u+1/2(x)Ψ_u-1/2(x) ). Substituting (<ref>) into (<ref>) results in K_nx ≈√(π/2x)C_u,n Ψ_u,nx e^-x, which has an exponential term so that the generalized Gauss-Laguerre quadrature can be applied to compute I_1. Following similar steps, the approximation for the average SER under double Nakagami-m fading channels is calculated as P^D_e =2^2/3-vC_u,nMΓ(m_1)Γ(m_2)∑_a=0^M-1∑_t=1^N_GH∑_e=1^N_GLω_tω_e G_t(x_e)/√(x_e)/Ψ_u,nx_e, §.§.§ Limited Average Power We consider a scenario where the transmit power is under certain constraints. Also, the symbol energy E_sγ can be adjusted according to the channel condition, or similarly, instantaneous SNR. Let us define the instantaneous SNR as γ=E_s|h|^22σ^2M=γ̃h^2, where E_s denotes the average symbol energy and γ̃≜E_s/2σ^2M. The energy constraint is governed by ∫_0^∞E_sγf_Γγdγ≤E_s, where f_Γγ is the PDF of the instantaneous SNR.[Note that the randomness in the SNR permits analyzing energy harvesting schemes where the harvested energy can be considered as a r.v..] It is determined only by the channel power gain h^2. Thus, we have p_Γ(γ)=2(r_1r_2/γ̃)^v/2Γ(m_1)Γ(m_2)γ^v/2-1K_n(2√(r_1r_2γ/γ̃)). We adopt water-filling scheme, the optimal power allocation scheme, to adjust the transmit power <cit.>. The scheme is governed by E_s(γ)E_s={1/γ_0-1/γ, γ>γ_0 0 , γ≤γ_0 . where the outage SNR γ_0 is found by numerically solving ∫_γ_0^∞1/γ_0-1/γpγdγ=1. Substituting (<ref>) into (<ref>) yields the expression of the received signal with power allocation r_a[k]=√(E_s|h|^2γ_0-2σ^2M) x_a[k]+ω[k]. Following similar derivation, the average SER in double fading channels with water-filling scheme is calculated as P^D_e= 1/∫_h_0^∞f_ℋ(h)d|h|∫_h_0^∞∫_0^∞[1-F_ℒ̂^D_(a)|h(l)] × f_ℒ^D_(a,a)|h(l)f_ℋ(h)dl d|h|, where h_0 is the amplitude of the outage channel amplitude defined as h_0=√(γ_0/γ̃). The integral is divided by ∫_h_0^∞f_ℋ(h)d|h| since it is a conditional probability that the SNR is higher than the outage. § NUMERICAL RESULTS AND DISCUSSION In this section, we show the performance of LB in terms of the power spectrum and error performance in AWGN and fading channels. §.§ Power Spectrum In Fig. <ref>, we present the double-sided power spectrum of the baseband LB signals as a function of normalized frequency f/B with different spreading factors and the number of phases, i.e., SF∈7,9 and N∈2,3. We only show 𝒢_ℐf for f≥0 because 𝒢_ℐf=𝒢_ℐ-f. We show both the normalized power spectral density, 10log𝒢_ℐ^cf*B, and the discrete part of the spectrum. For the discrete spectrum, we show the power ∑_a=0^M-1S_alB/M^2/M^2T^2_s at frequency lB/M. We see from the continuous part of the spectrum that it has a staircase shape where the power spectrum does not always decrease as frequency increases but maintains relatively stable over some frequency ranges. Also, for a higher number of phases, i.e., loads, the spectrum drops faster compared to that with a smaller number of quantization phases. We can also see that as SF increases, the power spectrum gets increasingly condensed so that more power of the complex envelope is contained between -B/2 and B/2. In Fig. <ref>, we compare the derived total power spectrum with both continuous and discrete parts with that of normal LoRa at SF=9, for various numbers of quantization phases, i.e., N∈2,3,4. The frequency range of interest is split into multiple bins of Δ f = B/1024 width. We verified the analytical results using Monte Carlo simulations generated by estimating the power spectral density in (<ref>) using Welch's method. Only the cases for N∈2,4 are included for space limitation. We see from the figure that the power spectrum of LB modulated signals has a strong similarity with that of normal LoRa in the range of f∈0,B/2. However, the power spectrum of LB signals begins to saturate for f>B/2 while that of normal LoRa decreases continuously. For an increasing number of phases, LB matches normal LoRa for a larger frequency range. For example, at f/B=1, the power spectrum of LB with N=5 is nearly equivalent to normal LoRa, while for N=2, the gap is approximately over 25 dB. Also, we investigate the LB spectrum in the ISM band with the ETSI regulations<cit.>. In Fig. <ref>, we report the one-sided power spectrum calculated with SF=7, the resolution bandwidth RBW=1 kHz and transmission power P_s=14 dBm, i.e., the maximum allowed power. The spectrum exceeds the spectral mask for N=3 while it is within the mask for N=4 at the maximum transmission power. Thus, to satisfy the ETSI regulation on the ISM band, one should either increase the number of loads used in the tag or decrease the transmission power. The same approach may be used to investigate the compliance of LB for different ISM bands, spreading factors, number of quantization phases, and bandwidths in accordance with other regional requirements. §.§ Error Performance in AWGN Channels In Fig. <ref>, we compare the SER using ML decoder in the AWGN channel at SF∈7,8,9 for normal LoRa and LB with the different number of quantization phases, where, as a benchmark, the curves for normal LoRa are calculated by computing integrals numerically: ∫_0^∞[1-[1-exp[-l^2/2σ^2]^M-1]] f_ℒ_a,a^ML(l)dl, and the curves for LB are from Monte Carlo simulation. It can be observed that the SER for LB using ML decoder has a strong similarity to normal LoRa for various values of the SFs and number of loads, even though the LB waveforms are non-orthogonal. We verified the results for small lower values of SF and still find the error performance similar to each other. The results suggest that existing works discussing the error performance of orthogonal LoRa can be also applied to LB using ML when considering detectors. Fig. <ref> presents comparisons of derived LB SER approximation in (<ref>) with the theoretical SER calculated by numerically computing the integral in (<ref>) at SF=8,9 in the AWGN channel. The figures also include the results from the Monte Carlo simulation. As shown in the figure, the derived expression in (<ref>) exhibits a tight approximation to the theoretical results. Furthermore, comparing the SER using ML and FFT decoders, the SER gap between them differs hugely depending on the number of quantization phases. Improved SER using FFT decoder is evident when increasing N. While the gap is around 1 dB at SER of 10^-3 for SF=9 when using only 4 quantization phases, there are nearly no differences between the performance of two decoders with 16 quantization phases, indicating that the error performance with N≥4 is similar to orthogonal LoRa for both ML and FFT detectors in AWGN channel. §.§ Error Performance in Fading Channels With Fixed Transmit Power and Limited Average Power In Fig. <ref>, we plot the SER performance of LB over double Nakagami-m fading channels at SF=7 and N=2,4 where we fix the total distance of the reflected path between the Tx and Rx through the tag, i.e., d=d_1+d_2. In addition, we consider a path loss model where the energy is inversely proportional to the squared distance, i.e., 𝕍h_1 1/d_1^2 and 𝕍h_2 1/d_2^2. In this simulation, we consider the FFT decoder. It is shown that a better SER performance is achieved when the tag is placed closer to the Tx, which is similar to the experimental results in <cit.>. Also, the comparison of the derived SER approximation (<ref>) to the curves calculated by numerical integrating (<ref>) and the Monte Carlo simulation presents an accurate approximation. On the other hand, for the model with limited average power, we show the error performance of LB with a water-filling power allocation scheme in Fig. <ref> using FFT decoder at SF∈7,8 and N∈2,4. The theoretical curves from numerical integrating (<ref>) are consistent with the Monte Carlo simulations. § CONCLUSION In this paper, we provide the first mathematical expression of LB signals with a finite number of loads. Based on the expression, we derived the closed-form expressions for the power spectrum of LB, showing the staircase-shaped spectrum. To satisfy the wireless transmission regulations, measures such as increasing the number of phases or decreasing the transmit power may be needed. An analytical approximation of SER for LB in both AWGN and double Nakagami-m fading channels using two different decoders is derived. The results suggest that the SER performance of LB using ML decoder with a small number of phases is similar to LoRa modulation, while the SER performance using FFT decoder is worse. On the other hand, SER performance using FFT decoder will improve with more quantization phases. In the double fading scenarios, a longer communication range can be achieved by setting the backscatter tag closer to Tx. § APPENDIX A PROOF THAT ML DECODER IS OPTIMUM FOR LB The Tx sends the real part of the passband signal with symbol a to the Rx: R(t)=Re[r_a(t)e^j2π f_ct], where r_a(t)=|h|√(E_s)x_a(t)e^jθ_h+ω(t) is the corresponding baseband signal, θ_h is the phase of complex channel gain h, and f_c is the carrier frequency. We assume that the channel amplitude h and the delay τ are known to Rx. The symbol decision is based on choosing the maximum among a set of posteriori probabilities<cit.> â =max_0≤ a ≤ M-1 P_a p( R(t)| x_a(t), h,τ) =max_0≤ a ≤ M-1∫_0^2π p( R(t)| x_a(t), |h|, τ, θ_h)p(θ_h)dθ_h, where P_a is the probability of transmitting symbol a, p(θ_h) is the PDF of θ_h. We consider equiprobable transmit symbols and uniformly distributed phases as is typical for Rayleigh, Rician, and Nakagami fading channels. The probability p( R(t)| x_a(t), |h|,τ, θ_h) is a joint Gaussian PDF p( R(t)| x_a(t), |h|,τ, θ_h)= KexpReh/N_0e^-jθ_hy_aτ-h^2E_a/N_0, where K is an integral constant and y_aτ=∫_0^T_sr_at+τx_a^*td t is the complex correlation of the received signal and the signal waveform of symbol a and E_a=1/2∫_0^T_sx_at^2dt is the energy of the waveform x_at. Thus, we obtain p( R(t)| x_a(t), h,τ)= exp-h^2E_a/N_0K/2π∫_0^2πexpReh/N_0e^-jθ_hy_aτdθ_h. Substitute (<ref>) into (<ref>) and remove the terms that are independent of a, we obtain â =max_0≤ a ≤ M-1∫_0^2πexpRee^-jθ_hy_aτdθ_h =max_0≤ a ≤ M-1∫_0^2πexpy_aτcosθ_h-y_aτdθ_h =max_0≤ a ≤ M-1I_0y_aτ, where the term E_a is independent of a because the waveforms of different symbols have the same energy. Since I_0 is a monotonically increasing function, the optimum decision is made by choosing the symbol that has the maximum cross-correlation with the received signal. § APPENDIX B HOW TO OBTAIN T_M_M=1^Ψ t_m are times when the value of Q_Nft make discrete changes. The set of t_m can be obtained by solving ft=iπ/2^N-1, i∈ℤ, t∈0,M/B, where the two parts of ft are quadratic functions so that the roots can be easily obtained. It is feasible to directly solve f_1t=iπ/2^N-1 and f_2t=iπ/2^N-1 individually in their respective regions. However, it is worth noting that we have the equality f_2t=f_1t-M/B. We see that f_2t is a shifted version of f_1t, which can be leveraged to simplify the solution. More precisely, the roots of f_2t=iπ/2^N-1 in the region M-a/B≤ t≤M/B are corresponding to the roots of f_1t=iπ/2^N-1 in region -a/B≤ t≤ 0. Hence, all the roots can be obtained by calculating the roots of f_1t=iπ/2^N-1 for t∈-a/B,M-a/B using root formula for quadratic equation z_i^±=M-2a±√(M-2a^2+i M2^3-N)/2B, ⌈-M-2a^2/2^3-nM⌉≤ i ≤⌊aM-a/2^1-nM⌋. Next, add M/B to z_i^± if they are negative. Also, t=M/B should be added to the roots set because it overlaps with root t=0 before being shifted. Finally, rearrange all roots from smallest to largest, namely, t_1≤ t_2 ≤⋯≤ t_ψ. § APPENDIX C PROOF THAT ℒ^D_(A,I) FOLLOW RICIAN DISTRIBUTION WITH SHAPE PARAMETER Κ^D_(A,I)=|H√(E_S)Ξ^D_(A,I)|^2/2Σ^2 The Rician distribution is the probability distribution of the magnitude of a circularly-symmetric bivariate normal random variable. A r.v. ℒ=√(𝒳^2+𝒴^2) follows the Rician distribution with shape parameter κ=|v|^2/2σ^2 if 𝒳∼𝒩(vsinθ,σ^2) and 𝒴∼𝒩(vcosθ,σ^2), where 𝒳 and 𝒴 are independently distributed normal r.v. and θ=v/v. ℒ^D_(a,i) is the amplitude of a complex r.v. ℒ^D_(a,i) = h√(E_s)ξ^D_(a,i)+𝒲^D_(a,i) =h√(E_s)ξ_a,i^D e^jβe^-jβ+𝒲^D_(a,i)e^-jβ, where e^jβ is the angle of h√(E_s)ξ^D_(a,i), and the second term is the rotated version of complex Gaussian r.v.. Thus, we define 𝒳 =|h√(E_s)ξ^D_(a,i)|+Re𝒲^D_(a,i)e^-jβ, 𝒴 =Im𝒲^D_(a,i)e^-jβ, where ℒ^D_(a,i)=𝒳+j𝒴. Since 𝒳∼𝒩|h√(E_s)ξ^D_(a,i)|,σ^2 and 𝒴∼𝒩0,σ^2, ℒ^D_(a,i) follows Rician distribution with shape parameter κ^D_(a,i)=|h√(E_s)ξ^D_(a,i)|^2/2σ^2. IEEEtran
http://arxiv.org/abs/2306.10656v1
20230619004235
Virtual Human Generative Model: Masked Modeling Approach for Learning Human Characteristics
[ "Kenta Oono", "Nontawat Charoenphakdee", "Kotatsu Bito", "Zhengyan Gao", "Yoshiaki Ota", "Shoichiro Yamaguchi", "Yohei Sugawara", "Shin-ichi Maeda", "Kunihiko Miyoshi", "Yuki Saito", "Koki Tsuda", "Hiroshi Maruyama", "Kohei Hayashi" ]
cs.LG
[ "cs.LG", "cs.AI", "stat.ML" ]
[ Hùng Việt Chu ================= Identifying the relationship between healthcare attributes, lifestyles, and personality is vital for understanding and improving physical and mental conditions. Machine learning approaches are promising for modeling their relationships and offering actionable suggestions. In this paper, we propose Virtual Human Generative Model (VHGM), a machine learning model for estimating attributes about healthcare, lifestyles, and personalities. VHGM is a deep generative model trained with masked modeling to learn the joint distribution of attributes conditioned on known ones. Using heterogeneous tabular datasets, VHGM learns more than 1,800 attributes efficiently. We numerically evaluate the performance of VHGM and its training techniques. As a proof-of-concept of VHGM, we present several applications demonstrating user scenarios, such as virtual measurements of healthcare attributes and hypothesis verifications of lifestyles. § INTRODUCTION The state of human health at a time can be observed in many different ways, for example, by measuring blood pressure and answering a questionnaire on exercise habits. These observable values, hereafter called attributes in this paper, may have complex interactions but collectively represent the current state of the person's health. This paper aims to build a statistical model among these attributes using the latest machine learning techniques. The model is viewed as a high-dimensional (>1,000) joint probability distribution on the attributes. It is trained for the imputation task, i.e., to estimate the missing values in the input values. It can be used in various healthcare-related applications by combining multiple imputation tasks, for example, comparing multiple hypothetical scenarios in exercise habits. Our technical challenge in building such a model is two-fold. One is the multi-modality of healthcare attributes. For example, an attribute could be numeric or categorical, and the values may have different statistical distributions. The other is the small-n-large-p problem. Healthcare data sets tend to be high-dimensional (i.e., large dimensionality p) but with relatively small sample size n. In this paper, we propose Virtual Human Generative Model (VHGM), a deep generative model trained by masked modeling with various healthcare datasets with different sample sizes and attribute dimensionality. Masked language modeling <cit.> is a training method that artificially masks some tokens and trains language models to reconstruct the masked tokens. Recently this training method has been used to train image recognition models <cit.> and tabular models <cit.>. Therefore, we call this training method masked modeling in this paper. Masked modeling allows the trained models to learn the joint distribution of missing features conditioned on input features. We can use this conditional distribution to impute the missing values and their uncertainty. As a deep generative model, we employed Heterogeneous-Incomplete Variational Autoencoder (HIVAE) <cit.>. HIVAE is an extension of VAE <cit.> that can handle heterogeneous variables. Also, the hierarchical latent structures allow modeling more complex posteriors than ordinary VAEs. For the small-n-large-p problem, we tackle this problem by combining multiple table data with different sample sizes and feature dimensions. Specifically, we use a high-quality dataset with large p and small n and multiple datasets with relatively small p and large n. Our intuition is that the former datasets learn basic feature representations and their global interaction, and the latter datasets tweak features that they can handle. This efficiently learns high dimensional with a relatively low sample complexity. By combining several training techniques, VHGM learns the joint distribution of more than 1,800 attributes conditioned on known attributes. §.§ Notation 𝒳 denotes the set of data points corresponding to each row of the training table, which may have missing attributes. ℝ and ℝ_+ denote the set of real and positive values, respectively. For a vector v∈ℝ^d, (v)∈ℝ^d× d is a diagonal matrix whose diagonal elements are v. 𝒫^c is the set of probability distributions on {1, …, c}. That is, π∈𝒫^c is a c-dimensional vector such that such that ∑_i=1^c π_i = 1 and π_i≥ 0 for all i=1, …, c. We denote by 𝒪^c-1 the increasing sequence r∈ℝ^c-1 of length c-1, that is, r_1 < ⋯ < r_c-1. The softmax function : ℝ^c→𝒫^c is defined by [(a)]_i = exp(a_i)/∑_j=1^cexp(a_j). The softplus function : ℝ→ℝ_+ is defined by (x) = log(1 + exp(x)). For probability distributions q and q' on the same space, _z(q(z) || q'(z)) is the KL divergence from q' to q with respect to the variable z. μΣ is a d-dimenional multivariate Gaussian disitribution with the mean μ∈ℝ^d and the covariance matrix Σ∈ℝ^d× d. λ is the Poisson distribution with the mean parameter λ. μΣ is the log-normal distribution with the mean parameter μ and the covariance parameter Σ (i.e., X∼μσ^2 if and only if log X∼μσ^2 for a positive random variable X). For π∈𝒫^c, π denotes the categorical distribution with parameter π. Similarly, π is the Gumbel-Softmax distribution <cit.> with the parameter π. For r ∈𝒪^c-1, r is the distribution of the ordered categorical variable with the threshold parameter r, whose cumulative distribution q(x≤ k) is defined by the logistic function: q(x≤ k) = 1/1+exp(-r_k) for k=1, …, c-1, and q(x = c) = 1 - q(x < c). With the slight abuse of notation, we interchangeably use the probability law and its distribution. For example, f(x) = xμΣ is the probability distribution of the Gaussian distribution μΣ. § PROBLEM DEFINITION We want to build a service that can estimate the missing healthcare attributes from available health information. To realize this, we consider the problem of imputing missing values with uncertainty to table data. Suppose there are p pre-defined attributes, which can be continuous (either real or positive), categorical, and ordinal variables. These attributes may be related to health care, although we do not assume so from a machine learning task perspective. Users may query any combination of attributes as input. The task is to build an algorithm that estimates the values of attributes other than the input ones and how uncertain the estimated values are. We consider a high-dimensional setting where the number of attributes p is more than 1000. § SOLUTION We solve the task above by modeling conditional distributions with deep generative models. We train HIVAE, an extension of VAE, using masked modeling to learn conditional distributions given input attributes. In order to tackle the high dimensionality of features, we integrate small-p-large-n datasets and large-p-small-n datasets for efficient training. §.§ Model HIVAE consists of a pair of an encoder _ϕ and a decoder _θ, which are learnable functions such as multi-layer perceptrons (MLPs) where ϕ and θ are learnable parameters of the encoder and decoder, respectively. The probability distribution q_ϕzx has a hierarchical structure made by the Gaussian mixture. Specifically, the encoder _ϕ: 𝒳→ℝ^×ℝ^ is a stochastic encoder composed of two models _ϕ, and _ϕ, as follows: π_ = enc_(x) ∈𝒫^ s ∼π_ (μ_, σ^2_) = enc_(x, s) ∈ℝ^×ℝ^_+ z ∼μ_(σ^2_). Here, and are the dimensionality of the latent variable s and z, respectively. We put the softmax function as the final layer of _ to ensure that π_∈𝒫^. The Gumbel-Softmax distribution is the differentiable approximation of the categorical distribution. Also, we use the reparametrization trick <cit.> for sampling z. By doing so, the model is differentiable with respect to model parameters and the input x and can be trained in an end-to-end manner. The decoder _θ(s, z) = (γ_1, …, γ_p) outputs the distribution parameters γ_j of each attribute j. It consists of the common decoder _θ, : ℝ^×ℝ^→ℝ^ and the attribute-specific decoder _θ, j: ℝ^×ℝ^→Γ_j: y = _θ, (s, z), γ_j = _θ, j(s, y). Here, is the dimensionality of the variable y and Γ_j is the parameter space for the j-th variable, differing by the variable type: γ_j = (μ_j, σ^2_j) ∈ℝ×ℝ_+ (real), λ_j ∈ℝ_+ (count), (μ_j, σ^2_j) ∈ℝ×ℝ_+ (positive), π_j∈𝒫^c_j (categorical), (r_j, h_j) ∈ℝ^c_j - 1×ℝ (ordinal), where c_j is the number of categories of the j-th variable. Again, we add the softmax function as the final layer of the decoder when the j-th variable is categorical. For ordinal variables, we convert the parameters r_j = (r_j1, …, r_j(cf-1)) and h_j to an increasing sequence r'_j=(r'_i1, …, r'_j(c-1))∈𝒪^c-1 by r'_jk = ∑_j=1^k (r_j) - h_j for k = 1, …, c-1. We treat date and time variables as real variables. The probability distribution p_θxs, z = ∏_j=1^p p_θ, jx_js, z is modelled using γ_j's as follows: p_θ, jx_js, z = x_jμ_jσ^2_j (real), x_jλ_j (count), x_jμ_jσ^2_j (positive), x_jπ_j (categorical), x_jr'_j (ordinal). We used MLPs as encoders _ϕ,, _ϕ,, and decoders _θ,, _θ, j for each index j. §.§ Training §.§.§ Evidence Lower Bound In the usual HIVAE, the loss function ℓ(x), known as the Evidence Lower Bound (ELBO), for a single data point x is as follows: ℓ(x) = 𝔼_s, z∼ q_ϕ(s, z | x)[ log p_θ(x | s, z)] + _s, z(q_ϕ(s, z | x) || p_θ(s, z)). The first term is the reconstruction loss, and the second is the regularization of the posterior distribution modeled by the encoder. Practically we compute the second term using the following decomposition: _s, z(q_ϕ(s, z | x) || p_θ(s, z)) = _s(qsx||p(s)) + 𝔼_s∼ q_ϕsx[_z (q_ϕzs, x || p_θzs)], and models the prior p_θ(s, z) = p_θ(s)p_θzs as the Gaussian mixture prior: p_θ(s) = s/, p_θzs = z_θ, (s)I_. Here, is -dimensional all-one vector and _θ, : ℝ^→ℝ^ is a learnable function. §.§.§ Masked Modeling We employed masked modeling for training the model in a self-supervised manner. Specifically, we set the mask ratio α∈ (0, 1), selected attributes that were not missing the records in each minibatch, and marked the selected attributes as missing. We changed the mask pattern at every iteration to improve generalization to unknown missing patterns, which we call mask augmentation. This effectively increases missing patterns of input records. Thereby, the model is expected to improve generalization (Section <ref>). §.§.§ -annealing It is empirically known, at least from <cit.>, that VAE-type architectures sometimes suffer from performance degradation caused by posterior collapse. Posterior collapse is a phenomenon in which the decoder is strong enough to ignore the latent representations, thereby the posterior distribution modeled by the encoder is insensitive to the input and is almost equal to the prior (i.e., q_ϕ(s, z | x) ≈ p_θ(s, z) for most x). We employed β-annealing, which is known to be an effective method for mitigating posterior collapse. One way to mitigate the posterior collapse is to introduce the hyperparameter β > 0 to the loss function to adjust the regularization strength <cit.>: ℓ(x) = 𝔼_s, z∼ q_ϕ(s, z | x)[ log p_θ(x | s, z)] + β_s, z(q_ϕ(s, z | x) || p_θ(s, z))], β-annealing is an annealing method that gradually increases the regularization parameter β during training. We expect the posterior to learn the flexible representation at the early stage of training, where the regularization is weak. §.§.§ Training Objective In summary, given the dataset 𝒟 = (x_i)_i=1^n where x_i = (x_ij)_j=1^p is the i-th training instance, the following is the training objective at the t-th epoch: L^(t)(θ, ϕ) = ∑_i=1^n∑_j=1^p m^(t)_ijlog p_θx_ijz_i + β^(t)__s (q_ϕsx_i || p_θ(s) ) + β^(t)__z (q_ϕzs_i, x_i || p_θzs_i). Here, m^(t)_ij equals 1 when the j-th element of the i-th instance is masked at the t-th iteration and 0 otherwise. s_i and z_i are the sampled output of the encoder for the i-th data point x_i. β^(t)_s, β^(t)_z > 0 are coefficients of the regularization of s and z, respectively. We used linearly increasing β-annealing, that is, we set β_∗^(t) (∗=,) as follows: β_∗^(t) = β_∗^maxt/t_max. Here, t_max is the number of training epochs, β_∗^max's are hyperparameters. §.§ Sampling We can draw samples from the model in two ways. We refer to them as the predictive-distribution sampling and latent-variable sampling, respectively. Predictive-distribution sampling draws samples from the distribution parameterized by the output γ_i of the model (e.g., the Gaussian distribution for real variables) in Eq. (<ref>). The variability of the predictive-distribution sampling represents the uncertainty of the generative model q(x| z). Latent-variable sampling is the sampling from the distributions of latent variables, in which we sample s and z in the encoder in Eq. (<ref>). The variability of the latent-variable sampling can be interpreted as the uncertainty of the posterior distribution p(z| x) cast into the input space by the decoder. If we want to compute the encoder deterministically, we should skip the latent-variable sampling of s and z and use the distribution parameters π_ and μ_ to the downstream networks, respectively. We can optionally use these sampling methods simultaneously, although we do not do so, as we explain later (Section <ref>). §.§ Datasets Masked modeling requires datasets with large sample sizes. However, it is often difficult in healthcare to practically obtain datasets whose sample size n and the number of attributes p is large. To solve this problem, we combined several tabular datasets with different properties with respect to n and p for training. Table <ref> shows the summary of the table datasets used in this study. The largest sample-size dataset is the commercially-available anonymized dataset on annual health check-ups and health insurance claim records of employees and their dependents in Japan, which has more than 1.2 million records and 205 attributes (Dataset 1). To support a wide range of attributes, we created the dataset, which collected 1,545 attributes from 994 adults (Dataset 2). This dataset collected biochemical and metabolic profiles, bacterial profiles, proteome and metabolite analyses, lifestyle surveys and questionnaires, body functions (physical, motor, and cognitive functions), alopecia, and body odor components <cit.>. We also used two datasets collected for healthcare research (Datasets 3 and 4). Dataset 3 is a dataset created for a study on metabolic syndrome consisting of 11,646 subjects with 61 attributes such as the amount of visceral fat, blood testing results, and questionnaires about eating habits and lifestyle <cit.>. Dataset 4 is an integrated dataset consisting of 12 intervention studies about the effect of chlorogenic acids and green tea catechins on the metabolic syndrome whose sample size is 1,745 in total <cit.>. Each study has different sets of attributes. The unique number of attributes is 163. Figure <ref> shows the overlap of the datasets' attribute sets. § RELATED WORK Two model families – tree-based models and neural networks – are widely used for tabular analysis. Practitioners use tree-based models such as XGBoost for tabular data in many domains, specifically in data mining competitions <cit.>. Several studies showed that tree-based models outperformed neural networks for small to medium row sizes (less than 10K), while neural networks were superior for large-scale data <cit.>. An advantage of neural networks is their adaptability to incorporate domain knowledge so that we can design a network architecture suitable to the target dataset. Transformers <cit.> are an architecture that has been recently claimed to be promising for tabular modeling <cit.>. However, we employ the VAE architecture because it can handle uncertainty in the output. § EXPERIMENTS In this section, we evaluate the accuracy of the prediction model and the effectiveness of the model for enabling novel healthcare applications. Unless otherwise stated, we down-sample or up-sample the datasets for training depending on their sample sizes, as shown in Table <ref>. In particular, we reduced the sample size of Dataset 1 from 1.2 million to 100,000 because we observed that the prediction performance saturated around this sample size. We split the dataset into train, validation, and test splits. We first evaluate the prediction performance of the model in terms of error, its capability to capture pairwise correlation, and the performance under the out-of-distribution (OOD) setting. Then, we conduct ablation studies to validate the effectiveness of masked modeling, mask augmentation, and β-annealing. §.§ Evaluation Metrics Here, we describe how to compute errors for each variable type. To calculate errors, we are given predictions and the ground truths of test entries for a column of interest: Y^pred = (y^pred_i)_i=1^n_test and Y^test=(y^test_i)_i=1^n_test, respectively. For categorical variables, we used average accuracy: categorical_error(Y^pred, Y^test) = 1/n_test∑_i=1^n_test y^pred_i = y^test_i , where · is the Iverson bracket that takes a logic expression as an argument and gives 1 if the expression is true and 0 otherwise. For ordinal variables, we used mean absolute error normalized by the dimension of the label space: ordinal_error(Y^pred, Y^test) = 1/n_test∑_i=1^n_test| y^pred_i - y^test_i |/c , where c denotes the cardinality of the ordinal label space (ref. Eq. (<ref>)). For count, positive, and real variables, we used the root mean squared error normalized by the difference between the maximum and minimum of the ground truths: continuous_error(Y^pred, Y^test) = √(1/n_test∑_i=1^n_test (y^pred_i - y^test_i)^2)/max(Y^test) - min(Y^test). §.§ Hyperparameters Model architecture: we used one hidden layer with 320 hidden nodes. Dimensions of HIVAE for s, z, and y, which are , , and , were set to 70, 98, and p (the number of attributes), respectively. β-annealing: we used β-annealing that increases β as the training progresses, where we initialized β_s and β_z as zero and linearly increases it to 0.30 and 0.04 at epoch 100, respectively. The values of β_s and β_z remained the same after epoch 100 until the training finished. Mask augmentation: for each epoch, we randomly masked out the input features for 80% of the training data. Therefore, the missing pattern for each epoch is different so that the model can learn from different missing patterns. Optimization: we used Adam with weight, where the learning rate was set to 9 × 10^-4 with weight decay parameter as 2.5 × 10^-4. The batch size was set to 1024, and the number of epochs was set to 200, where we employed early stopping with patience equal to 50. The validation objective is the average error of all columns and datasets by calculating the average error of all columns for each dataset and then calculating the average error among four datasets. §.§ Prediction Performances Here, we validate the effectiveness of using HIVAE as a model choice by comparing it with baselines, visualizing its pairwise correlation performance, and its capability to combat the out-of-distribution (OOD) setting. §.§.§ Baseline Comparisons Since the model needs to predict 1,827 attributes, as a sanity check of the model, we first verified whether a single model could make meaningful inferences. For baselines, we compared the model with the mode imputer, which fills missing values with the mode in the training data set for each attribute. We note that the mode imputation can somehow work for continuous attributes (real and positive variables). This is because most numerical attributes have the smallest unit of measurement. Furthermore, we also compared the model with mode-mean imputer, where we used mean values for continuous attributes and rounded mean values for the count and ordinal attributes. We also used mode-median imputer, where we used median for continuous attributes and rounded median values for the count and ordinal attributes. Table <ref> shows the performance results. It can be observed that our model achieves better performance than the baselines. The output values of the baseline imputers only depend on the training dataset and do not depend on the inputs. Therefore, it can be concluded that our model successfully utilizes the information from the input records to make inferences. §.§.§ Pairwise Correlations We next examine whether our model learns the conditional distribution by comparing the pairwise correlation between attributes. We compute the correlation learned by the model between two real attributes i and j as follows: we input records whose entries are empty but the attribute i whose values are equally spaced discretized values, for example, from 10 to 40 for Body Mass Index (BMI). We apply latent-sampling prediction to obtain n pairs of mean and variance parameters (n=100 in our experiments). Figure <ref> (left) compares the pairwise correlations inferred by the model (red) and empirical pairwise correlations (blue). We see that both correlations are close to each other for both highly correlated pairs and pairs with little correlations in most cases. §.§.§ OOD Performance Evaluation This section examines the effect of integrating tables with different characteristics for training. Since user queries can come from a distribution different from the training dataset, the performance in the OOD setting is highly useful for our application. In this experiment, we trained the model using Datasets 2–4 and evaluated the model on Dataset 1. As comparison methods, we use the models trained only on a single dataset (Dataset 2–4). Because each dataset has a different set of columns, for a fair comparison, we used only 18 attributes that are common to all datasets to train and test each model. More specifically, we used only 14 real variables, 2 positive variables, 1 ordinal variable, and 1 categorical variable. In this case, combining datasets can have performance considerably close to training with the in-domain dataset (Dataset 1). Tables <ref>–<ref> show the result across the different train and test missing rates. The results illustrate that combining datasets improve the prediction accuracy in the OOD setting in most cases. Only the case where Dataset 3 is the OOD dataset in Table <ref>, where training the model using only Dataset 1 is preferable to combining Datasets 1, 2, and 4 to train the model. §.§ Ablation Studies Here, we validate the usefulness of the techniques we used for training the model. §.§.§ Masked Modeling Loss vs. Reconstruction Loss In masked modeling, the model is trained to predict the masked entries. We can instead train the model to reconstruct the unmasked entries similar to the denoising autoencoder. Table <ref> shows the performance comparisons between our model trained with and without masked modeling loss. It can be observed that using masked modeling can achieve better performance. Next, the pairwise correlation performance is investigated. Figure <ref> shows the pairwise correlations inferred by the model learned by the reconstruction of unmasked entries (Recon). Unlike the model learned by masked modeling (MM), the y-axis values tend to remain unchanged even if we change the x-axis values for highly correlated pairs. This result suggests that the model learned by minimizing the reconstruction loss cannot effectively capture the pairwise correlation of the attributes. On the other hand, the model learned by masked modeling is observed to be effective for capturing the correlation of attributes. §.§.§ Mask Augmentation In this section, we examine the effect of mask augmentation. In the comparison method, we determine the mask pattern at the beginning of training and fix the pattern during training. Table <ref> shows the difference in prediction accuracy with and without mask augmentation. The result shows that mask augmentation improves the accuracy of the imputation. §.§.§ -annealing Next, we evaluate the effect of β-annealing. We trained the model using the loss function (<ref>), except that we set β^(t)_ = β^(t)_ = 1 for all t. Table <ref> shows the comparison of prediction accuracy with and without β-annealing. The results show that β-annealing improves the prediction accuracy. § SYSTEM §.§ Overview VHGM is to be provided as an integral part of a lifecare application platform. Figure <ref> shows the overview of the platform. Any application program on the platform calls VHGM via APIs. The platform also provides other lifecare-related services, such as measurement services and intervention services, so that innovative lifecare applications can be built by combining these services. With its broad coverage of lifecare attributes, VHGM is expected to play a vital role in interrelating a diverse set of measurements and intervention services. For example, an application would call a measurement service to get blood test results of the end user, estimate the probabilities of various diseases of people who have similar blood test results, and suggest the user take supplements that are provided by an intervention service. §.§ APIs VHDM provides two types of prediction as APIs: latent-deterministic prediction and latent-sampling prediction. The latent-deterministic prediction API takes a record that possibly has missing values and returns the distribution parameters γ = (γ_1, …, γ_p). The inference is deterministic in the sense that it does not use both the predictive-distribution sampling and the latent-variable sampling (Section <ref>). This API also provides the point estimate of each attribute by the mode value of the distribution with the parameter γ. On the other hand, the latent-sampling prediction API takes the sampling size N along with an input record. It computes the set of distribution parameters γ^(1), …, γ^(N) by applying the latent-variable sampling N times. We can evaluate the uncertainty of the posterior distribution by the variability of the γ parameters. For both APIs, we can optionally compute the point estimate of missing values by applying the predictive-distribution sampling using the returned parameters. §.§ Dataset Management and Model Deployment One of the challenges of VHDM is the heterogeneity of datasets for training. Datasets have different sets of attributes, and some attributes, such as basic demographic information, are semantically the same but are different in notations and scales among datasets. In addition, we sometimes need to update datasets, for example, by adding and deleting columns and fixing bugs in metadata and data themselves. We adopt dataset schema to manage multiple datasets and their updates systematically. The dataset schema is a list of metadata of available attributes such as ID, name, variable type, and possible values (for categorical values). We regularly update the schema to define the set of attributes the model employ. Accordingly, datasets are updated and processed to comply with the schema. That is, the attribute set of each dataset is a subset of the schema. Following updating the dataset schema and datasets, we re-train the model and deploy it to the system. § APPLICATIONS §.§ Virtual Measurement The system's most straightforward usage is to estimate an individual's unknown attributes from known ones. This is useful when some attributes are hard to measure while others are not. For example, we can estimate, with some uncertainty, the value of an attribute that is expensive to measure from relatively inexpensive ones, such as systolic blood pressure and demographic attributes (e.g., age and sex). §.§ Relative Positioning in a Group For any group of people characterized by a set of attribute values (e.g., males in their 50s with smoking and drinking habits), VHGM can give a probability distribution of this group for any other attribute. This is useful when one needs to understand their relative positioning in that particular group. For example, we may find a person has a low blood glucose level considering their demography and lifestyle. §.§ Hypothetical Scenarios By changing some attributes to hypothetical values, one can compare different scenarios. For example, a person in his thirties would ask suppose there would be a population of people with the same health information as me except that their age is 50, how would their glucose values differ from mine? We can obtain the answer to this question by entering our health information, changing only the person's age from thirty to fifty, and seeing how the estimated glucose level differs from their actual value (or its estimate). §.§ Exploration of Best Scenario By exploring multiple hypothetical scenarios, we can use this system to search for a population whose specific attributes have desired values. For example, suppose we want to know what groups would have the lowest blood pressure with the same healthcare information as mine? To answer this question, we first define a set of groups and then, using VHGM, search for the group with the lowest blood pressure estimate. The search could be done by a simple grid search algorithm or a more sophisticated black-box optimization tool. §.§ Demo Application We developed a web application to provide inferences based on given values using the VHGM APIs. Figure <ref> shows a browser window of the application. Biological gender, age, height, weight, and BMI were given as input data (Figure <ref>, top left), and all attributes except for the given input were estimated (Figure <ref>, top right). This demonstration also provides hypothetical scenarios. For example, Figure <ref> (bottom left) shows an example output of the scenario: if my age were 60 years old, my weight were 80 kg, and my BMI were 27.7. In this case, visceral fat area, LDL-cholesterol, and fasting blood glucose are highlighted in red, which indicates that they are higher than the original ones. In addition, one can compare two lifestyle scenarios. In Figure <ref> (bottom right), we compare two scenarios of differing walking habits, whether one has walking or an equivalent amount of physical activity more than one hour a day in their daily activity. This application enables us to explore which attributes could change if a hypothetical scenario occurred. § CHALLENGES AND LIMITATIONS §.§ Masking Strategy In this study, we employed a relatively simple masking strategy: we fixed a probability α and chose the cells to be masked uniformly randomly with the ratio of α. It is room for discussion about whether this masking strategy is optimal. In fact, in language models, the performance of masked modeling can be improved by combining several masking strategies <cit.>. Better masking strategies for masked modeling on tabular data are future work. §.§ Causality The model learns the joint distribution of attributes and does not use the information on their causality. Therefore, when we modify some of the input attributes, we should not interpret that the input change causes the output change. It should be noted that VHGM by itself only shows the statistical interactions among attributes and does not represent any causal relations. Thus, the what-if use cases in Section <ref> should not be interpreted as causal inferences. In case causal interpretations are necessary, the output of VHGM must be combined with a priori knowledge about causal relations. §.§ Time-series Analysis The current training data set has no time-series information on the same subject. Therefore, performing a time series analysis with this model is inadequate. Although one application in Section <ref> compares the distributions of attributes between groups of different ages, it does not mean that they are the prediction of the future values; Instead, they are the estimates under the hypothetical assumption on their ages. Therefore, this analysis does not imply the future values of the person. § CONCLUSION In this paper, we proposed Virtual Human Generative Model (VHGM), a statistical model for joint distribution on observable healthcare data, lifestyles, and personalities of people. The core of VHGM is a Heterogeneous-Imcomplete Variational Autoencoder (HIVAE), a deep generative model for tabular data that estimates unknown attributes from known attributes of healthcare with the uncertainty of the estimations. In training, we employed various techniques, such as masked modeling and the integration of tabular datasets of different characteristics. These techniques enabled efficient modeling of simultaneous probability distributions conditioned on inputs for over 1,800 healthcare attributes. The inference service using VHGM is provided as APIs on a platform on which third-party application vendors can develop healthcare applications. We presented the use cases of the VHGM API, virtual measurement of healthcare attributes, and comparing and recommending hypothetical lifestyles. We believe the versatility of VHDM leads to a wide range of healthcare applications, thereby contributing to the social good of enhancing people's quality of life. § ACKNOWLEDGEMENT We are grateful to MinaCare Co., Ltd. and its CEO, Dr. Yuji Yamamoto, for providing the commercial healthcare dataset with flexible terms and conditions. Without their belief in the positive impact of widespread data dissemination on healthcare, this project could not have been materialized. abbrv
http://arxiv.org/abs/2306.01480v1
20230602120733
How strings can explain regular black holes
[ "Piero Nicolini" ]
gr-qc
[ "gr-qc", "hep-th" ]
How strings can explain regular black holes Piero Nicolini Dipartimento di Fisica, Università degli Studi di Trieste, Trieste, Italy Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Germany Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität Frankfurt am Main, Frankfurt am Main, Germany This paper reviews the role of black holes in the context of fundamental physics. After recalling some basic results stemming from Planckian string calculations, I present three examples of how stringy effects can improve the curvature singularity of classical black hole geometries. empty § INTRODUCTION Nowadays black holes are the focus of the attention of researchers working on a variety of topics in Physics and Mathematics. Astrophysicists have recently observed the shadow of black holes that presumably are harbored in the center of galaxies <cit.>. Gravitational waves due to black hole mergers have recently been detected at LISA/Virgo facilities <cit.>. Mathematical physicists and mathematical relativists are interested in the properties of exact black hole solutions <cit.>. This activity intersects the work of those gravitational physicists that aim to circumvent the problem of dark sectors by means of theories alternative to general relativity <cit.>. The importance of black hole research, however, goes beyond the above research fields. It seems very likely that black holes are fated to be the cornerstone of our understanding of fundamental physics. §.§ Three facts about evaporating black holes To fully appreciate the significance of black holes, it is instructive to go back to the 1970's. At that time, theoretical physicists were interested in understanding nuclei and their phenomenology. Strings and dual models were formulated just few years earlier and they were already expected to die young due to the advent of QCD. Black holes and general relativity were topics of limited interest, because they were disconnected from the quantum realm. Astrophysicists, on the other hand, did not take seriously the existence of black holes, despite the growing evidence accumulated after the initial observation during the suborbital flight of the Aerobee rocket in 1964 <cit.>. Even the curvature singularity was considered just a mathematical problem, whose solution would never lead to physical consequences. Hawking, however, radically changed this perspective. He actually set new goals for theoretical physics, by initiating the study of the Universe from a quantum mechanical view point. Along such a line of reasoning, Hawking showed that, in the vicinity of a black hole, quantum field theory is strongly disturbed by gravity. Particles become an ill-defined, coordinate dependent concept <cit.>. To an asymptotic observer black holes appear like black bodies emitting particles at a temperature T∝ 1/M, i.e. inversely proportional to their mass <cit.>. The existence of a thermal radiation offered the physical support for the thermodynamic interpretation of the laws governing black holes mechanics <cit.>. It, however, left behind many open questions, such as the fate of an evaporating black hole[By black hole evaporation one indicate the process of particle emission during the full life cycle. The 1/M dependence implies an increased emission rate as the hole loses mass. Such a nasty behavior is connected to the negative heat capacity of the black hole C≡ dM/dT<0.] and the information loss paradox.[Microstates of a collapsing star are hidden behind the event horizon. The information is not lost but virtually not accessible. If the hole thermally radiates, it emits particles in a democratic way, de facto destroying the informational content of the initial star. This is the reason why the Hawking radiation is considered an effect that worsens the problem of the information in the presence of an event horizon. ] I list below some additional issues that are too often downplayed: * If an horizon forms, Minkowski space cannot result from the Schwarzschild metric in the limit M→ 0, since it is forbidden by thermodynamics <cit.>; * Quantum back reaction effects can tame a runaway temperature <cit.>, but they can lead to mass inflation effects <cit.>; * Quantum stress tensors imply violation of energy conditions <cit.>. In general, issues of this kind are mostly attributable to a breakdown of Hawking's semiclassical formalism. The last item of the above list is, however, intriguing. Without energy condition violation, standard matter would inevitably collapse into a curvature singularity <cit.>. As a result, already in the mid 1960's there were proposals, e.g. by Gliner <cit.> and Sakharov <cit.>, to improve black hole spacetimes with energy violating source terms. Such proposals culminated with the work of Bardeen, who obtained the first regular black hole solution <cit.>. The related line elements reads: ds^2=-( 1-2M^2 r^2/ (r^2 +P^2)^3/2 )dt^2 +( 1-2M^2 r^2/ (r^2 +P^2)^3/2 )^-1dr^2+r^2dΩ^2. Here the gravitational constant is written in terms of the Planck length G=^2. At short scale, the singularity is replaced by a regular quantum vacuum region controlled by a magnetic monopole P <cit.>.[Additional regular black hole metrics were proposed in the following years, e.g. <cit.>. For a review see <cit.>.] Against this background, the point <ref>) in the above list represented a novelty. The violation was the direct consequence of a major principle, namely the combination of quantum and gravitational effects at short scale. Conversely, for the Bardeen metric, the energy condition violation is the result of an ad hoc choice e.g. the presence of a magnetic monopole. For this reason, already in the 1980's semiclassical gravity seemed to pave the way to a possible short scale completion of the spacetime <cit.>. § CAN ONE PROBE LENGTH SCALES SMALLER THAN √(Α^')? As of today, Superstring Theory can be considered the major contender of the “quantum gravity war”, namely the current debate about the formulation of a consistent quantum theory of gravity. The success of string theory is probably due to its wide spectrum, that covers a vast number of topics and paradigms, from particle physics to cosmology <cit.>. For what concerns black holes, string theory has been applied in a variety of situations, including thermodynamics <cit.> and derivation of new metrics <cit.>. The theory has also interesting spin offs where black holes have a major role, e.g. large extra dimension paradigms <cit.> and the gauge/gravity duality <cit.>. There exist also proposals alternative to black holes like the fuzz ball <cit.>. String theory is notoriously not free from problems. One of the major limitations is the identification of genuine effective theories, namely the string landscape <cit.>. For the present discussion, it is important to recall just one specif character of string theory: its intrinsic non-locality. Such a property should come as no surprise, because strings were introduced to replace quantum field theory and guarantee ultraviolet finiteness in calculations. To understand the nature of such a short scale convergence, string collisions at Planckian energies were extensively studied at the end of the 1980's <cit.>. The net result was simple and, at the same time, surprising. The particle Compton wavelength turned to be modified by an additional term, namely: Δ x ≃1/Δ p + α^'Δ p. Due to the approximations for its derivation, the above uncertainty relation, known as generalized uncertainty principle (GUP), offers just the leading term of stringy corrections to quantum mechanics. Nevertheless, the GUP can capture several important new features. One can start by saying that the GUP depends on the combination of the conventional Compton wavelength λ and the gravitational radius r_g of the particle, being α^'∼ G. In practice, (<ref>) is a genuine quantum gravity result. The GUP inherits the non-local character of string theory, being Δ x ≥√(α^'). For Planckian string tension ∼ 1/^2, this is a equivalent to saying that the Planck length is actually the smallest meaningful length scale in nature. The GUP also shows that quantum gravity has a peculiar characteristic: Quantum gravity effects shows up only in the vicinity of the Planck scale. At energies lower than , there is the conventional particle physics. At energies higher than , there are conventional (classical) black holes with mass M∼Δ p. For this reason, one speaks of “classicalization” in the trans-Planckian regime <cit.>. Particles (strings) and black holes are, therefore, two possible phases of matter. The relation between them is evident by the fact that black holes have a constant “tension”, M/r_g∼^2, like a (Planckian) string <cit.>. In practice, the GUP suggests that matter compression has to halt due to the gravitational collapse into a Planckian black hole. Such a scenario is often termed gravity “ultraviolet self-completeness” and corresponds to the impossibility of probing length scale below in any kind of experiment <cit.>. The diagram of self-completeness can be seen in Fig. <ref>. Despite the great predictive power of the relation (<ref>), many things remain unclear. For instance, the details of the collapse at the Planck scale is unknown. It is not clear whether the Lorentz symmetry is actually broken or deformed, prior, during and after the collapse <cit.>. Also the nature of the confluence of the two curves λ and r_g is debated. One could speculate that there exist a perfect symmetry between λ and r_g and actually particles and black holes coincide <cit.>. Such a proposal, known as “Black Hole Uncertainty Principle Correspondence”, is currently under investigation and requires additional ingredients for being consistent with observation <cit.>. Nevertheless, there could be some room for sub-Planckian black holes, as far as there exists a lower bound for the black hole mass <cit.>. It is also possible to imagine that the confluence is non-analytic <cit.>. Finally, it has been shown that the number of the dimensions, charge and spin can drastically affect the self-completeness paradigm <cit.>. §.§ How to derive a consistent “particle-black hole” metric There exists at least one thing one knows for sure about gravity self-completeness. The Schwarzschild metric simply does not fit in with the diagram in Fig. <ref>. The problem is connected to the possibility of having a black hole for any arbitrarily small mass, i.e., M<. This implies a potential ambiguity since to a given mass, one could associate both a particle and a black hole. More importantly, Schwarzschild black holes for sub-Planckian masses have radii ∼ M/^2, smaller than , a fact that is in contrast with the very essence of self-completeness. The formation of black holes in such a mass regime is the natural consequence of mass loss during the Hawking emission. Customarily, one circumvents the problem by saying that there is a breakdown of semiclassical gravity. Black holes would explode even before attaining sub-Planckian masses <cit.>. The problem, however, persists when one considers alternative formation mechanism, like early Universe fluctuations <cit.> and quantum decay <cit.>. The most natural way to solve the puzzle is to postulate the existence of an extremal black hole at the confluence of λ and r_g. Degenerate horizons are zero temperature asymptotic states that can guarantee the switching off of black hole evaporation.[The switching off is also known as SCRAM phase, in analogy with the terminology in use for nuclear power plants <cit.>.] For instance, Denardo and Spallucci considered charged black holes and determined the parameters to obtain stable configurations <cit.>. Microscopic black holes can, however, share their charge and angular momentum very rapidly both via Hawking and Schwinger emissions <cit.>. What one actually needs is a SCRAM phase following the Schwarzschild phase, similarly to what predicted by Balbinot and Barletta within the semiclassical approximation <cit.>. In conclusion, the issue can be solved only if one is able to derive a metric admitting a Planckian extremal horizon for M=. Such a metric exists and it is known as holographic screen metric or simply holographic metric <cit.>. Its line element reads: ds^2=-( 1-2M^2 r/ r^2 +^2 )dt^2 +( 1-2M^2 r/ r^2+^2 )^-1dr^2+r^2dΩ^2. Eq. (<ref>) is a prototype of a quantum gravity corrected black hole spacetime. Indeed, the holographic metric offers a sort of “preview” of the characteristics of string corrected black hole metrics, I will present in the next sections. In summary one notice that: * For M≫, (<ref>) becomes the Schwarzschild metric up to some corrections, that are consistent with the predictions of Dvali and Gomez's quantum N-portrait <cit.>; * For M≃ 2.06, the Hawking temperature reaches a maximum and the black hole undergoes a phase transition to a positive heat capacity cooling down (SCRAM phase); * For M=, one has r_g= and T=0, namely the evaporation stops and leaves a Planckian extremal black hole as a remnant; * For M<, (<ref>) describes a horizonless spacetime due to a particle sitting at the origin. In practice (<ref>) perfectly separates the two phases of matter, i.e. particles and black holes, and protects the region below in Fig. <ref> under any circumstances. In addition, (<ref>) does not suffer from quantum back reaction, being T/M≪ 1 during the entire evaporation process. Also the issue of the mass inflation at point <ref>) in Sec. <ref> is circumvented. For M>, there are actually an event horizon r_g=r_+ and a Cauchy horizon r_-, r_± = ^2 ( M ±√(M^2 -^2) ), but the latter falls behind the Planck length and it is actually not accessible. From (<ref>), one notices that the horizon structure is the same of the Reissner-Nordström black hole, provided one substitutes the charge with the Planck mass Q/G⟶. Eq. (<ref>) is another key aspect of self-completeness. Gravity does not need the introduction of a cut off. The completeness is achieved by exploiting the coupling constant G as a short scale regulator. At this point, there is, however, a caveat: The spacetime (<ref>) does have a curvature singularity. The regularity was not the goal of the derivation of such a metric. The basic idea has been the introduction of fundamental surface elements (i.e. holographic screens), as building blocks of the spacetime. Each of such surface elements is a multiple of the extremal configuration, that becomes the basic information capacity or information bit. Indeed for the holographic metric the celebrated area law reads S(𝒜_+)=π/𝒜_0( 𝒜_+-𝒜_0 ) +πln( 𝒜_+/𝒜_0 ) where 𝒜_0=4π^2 is the area of the extremal event horizon, and 𝒜_+=n𝒜_0. If one accepts that surfaces (rather than volumes) are the fundamental objects, the question of the regularity of the gravitational field inside the minimal surface is no longer meaningful. The spacetime simply ceases to exist inside the minimal holographic screen. This interpretation is reminiscent of spacetime dissolution observed in quantum string condensates within Eguchi's areal quantization scheme <cit.>. [The classical spacetime is a condensate of quantum strings. At distances approaching √(α^'), long range correlations of the condensate are progressively destroyed. For α^'=G, the whole spacetime boils over and no trace of the string/p-brane condensate is left over.] § WHAT T-DUALITY CAN TELL US ABOUT BLACK HOLES Suppose one has a physical system living on a compact space, whose radius is R. Suppose there exists another physical system defined on another compact space, whose radius is proportional to 1/R. If the observables of the first system can be identified with that of the second system, one can say that such systems are equivalent or dual with respect to the transformation[The duality is termed T-duality, or target space duality.] R⟶ 1/R. For example, by setting R∼ 1/Δ p in (<ref>) one finds Δ x≃ R + α^'/R. The above relation actually maps length scales shorter than √(α^') to those larger that √(α^'), being Δ x(R) = Δ x (1/R), for suitable values of α^'. From this viewpoint, one can say that the GUP is a T-duality relation. This fact is per se intriguing because it offers an additional argument for a stringy interpretation of the holographic metric. The good part is that T-duality allows for an even more genuine contact between string theory and a short scale corrected metric. To do this, one needs to go back to a basic result due to Padmanabhan <cit.>. Standard path integrals can be thought as the sum of amplitudes over all possible particle trajectories. In the presence of gravity, the scenario is slightly modified. Indeed, there exist paths that cannot contribute to the path integral. If paths are shorter than the particle gravitational radius, they must be discarded in the computation of the amplitude. A simple way to achieve this is to introduce a damping term, e^-σ(x,y)/λ⟶ e^-σ(x,y)/λ e^-r_g/σ(x,y), for each path contribution in the sum over the paths.[We temporarily assume Euclidean signature for the ease of presentation.] The above relation implies that the path length σ(x,y) admits a minimum. Interestingly, Padmanabhan performed the sum over the above path contributions and derived a modified propagator <cit.> G(x,y; m^2)=∫d^D p/(2π)^De^-ip·(x-y) G(p), with G(p)= -l_0/√(p^2+m^2) K_1 (l_0 √(p^2+m^2)), where K_1(x) is a modified Bessel function of the second kind, and l_0 is called “zero point length” <cit.>. Eq. (<ref>) is intrinsically non-local: The Bessel function has a damping term for momenta larger than 1/l_0. Therefore l_0 is the minimal length that can be resolved over the manifold. Conversely, for small arguments one finds the conventional quantum field theory result. The virtue of Padmanabhan's calculation (<ref>) is two fold: The propagator is a robust result that descends from general considerations; The functional form in terms of the Bessel function exactly coincides with the correction of string theory to standard, “low energy” quantum field theory. To better understand such a crucial point we briefly sketch the line reasoning at the basis of series of papers authored by Spallucci and Padmanabhan in collaboration with Smailagic <cit.> and Fontanini <cit.>. Let us start by considering a closed bosonic string in the presence of just one additional dimension, that is compactified on circle of length l_0=2π R. The string mass spectrum can be written as M^2=1/2α^'(n^2α^'/R^2+w^2 R^2/α^')+harmonic excitations, where n labels the Kaluza-Klein excitations and w is the winding number of the string around the compact dimension.[In the process of path integral quantization, harmonic oscillators are irrelevant. Therefore we consider them frozen without unwanted consequences.] As expected the above relation enjoys T-duality. It is invariant under simultaneous exchange R↔α^' /R and n↔ w and leads to the identification of √(α^') as invariant length scale. Strings are intrinsically non perturbative objects. As a result, any perturbative expansion destroys the very essence of the theory. The only way to extrapolate a nonpertubative character that can be “adapted” to the field theoretic concept of particle is the study of the string center of mass (SCM) dynamics. From the propagation kernel of the SCM in five dimensions, one can integrate out the fifth dimension to obtain an effective four dimensional propagator K(x-y, 0-nl_0; T)=∑_n∫ [𝒟z][𝒟p][𝒟x^5][𝒟p_5]exp(...)→ K_reg(x-y; T) where x-y and 0-nl_0 are respectively the four dimensional interval and the separation along the fifth dimension. Already at this point, one can observe the regularity due to l_0, being K_reg(x-y; T)∼∑_n e^(iμ_0/2T)[(x-y)^2+n^2 l_0^2] where μ_0 is a parameter which will not appear in the final result. Additional integrations on T and w lead to Green's function G_reg(x-y)∼∑_w∫ dT e^(iT/2μ_0)m_0^2 e^(...w^2) K_reg(x-y; T), where m_0 is the mass of the particle in the limit l_0→ 0. If one considers the leading term of the above expression, namely n=w=1, one finds (<ref>) upon the condition l_0=2π√(α^'). In other words, the zero point length in four dimensions has a T-duality origin and coincides with the minimum length in string theory. The above result can be easily generalized to the case of more than one compact dimension. The conclusion is unaffected: (<ref>) is both general and fundamental! §.§ How to implement T-duality effects Starting from (<ref>), we expect important deviations from conventional Green's function equation {Differential Operator} G(x,y) = Dirac Delta, when x≈ y. For the specific case of black holes, we recall that, in the absence of spin, there are both spherical symmetry and static conditions. It is, therefore, instructive to consider the interaction potential between two static sources with mass m and M due to (<ref>), V(r) = -1/m W[J]/T = -GM ∫d^3 k/(2π)^3 .G(k)|_k^0=0 exp(i k⃗·r⃗) = -GM/√(r^2 + l_0^2). The fact that V≈ -GM/l_0 for r→ 0, is the first signal of a possible removal of the curvature singularity. To verify this is the case, one has to construct an effective energy momentum tensor for the r.h.s. of Einstein equations. The procedure is equivalent to the derivation of black hole solutions by means of non-local gravity actions S=1/2κ∫𝔣(R, , …)√(-g) d^4 x + ∫𝔏( M , F^2, , …)√(-g) d^4 x with κ=8π G, =∇_μ∇^μ, F is the gauge field and … stand for higher derivative terms. Eq. (<ref>) is a compact notation for a class of actions that have been studied to obtain ghost free, ultraviolet finite gravity field equations <cit.>. For the present discussion, the details of such an action are not relevant, since it is only an effective description of the full string dynamics. Accordingly, also the problem of the pathology of the action (e.g. ghosts, anomalies) is of secondary concerns, if one believes in the consistency of Superstring Theory. In conclusion, one can adopt a truncated version of the full non-local action <cit.> and derive the non-local Einstein equations. For F^2 =0, they read R_μν-1/2g_μν R= κ 𝔗_μν where 𝔗_μν= O^-1() T_μν, while the Einstein tensor and T_μν are the conventional Einstein gravity tensors. The only thing that is important to know is the degree of ultraviolet convergence of the theory, encoded in the operator 𝒪(). At this point, one can observe that (<ref>) is consistent with Green's function equation for (<ref>) <cit.>, namely ∇ ^2 G( z, z^ ') = - l_0√( - ∇ ^2) K_1( l_0√( - ∇ ^2))δ ^( 3 )( z - z^ '). The operator can be simply read off from the above equation, taking into account that 𝒪()=𝒪(∇ ^2), if the source is static. In practice, the r.h.s. of (<ref>) is equivalent to the T_t^t of the 𝔗_μν, namely 𝔗_t^t≡ -ρ(𝐱)= (4π)^-1 M l_0√( - ∇ ^2) K_1( l_0√( - ∇ ^2))δ ^( 3 )( x). The effective energy density can be analytically derived and reads ρ(𝐱)=3l_0M/4π(|𝐱|^2+l_0^2)^5/2. For large distances, the above density quickly dies off as ∼ 1/|𝐱|^5. Conversely, at short scales |𝐱|≲ł_0, one finds the “Sea of Tranquility”, i.e., a regular quantum region characterized by creation and annihilation of virtual particles at constant, finite energy. In such a sea, gravity becomes repulsive and prevents the full collapse of matter into a singularity. With a geometric description in terms of differential line elements, the quantum fluctuations of such a sea are not visible. One can only capture the average effect, namely a local de Sitter ball around the origin, whose cosmological constant is ∼ GM/l_0^3. Local energy condition violations certify the correctness of such a scenario. After the above prelude, one can analytically solve (<ref>) and display the full metric <cit.> ds^2=-( 1-2M^2 r^2/ (r^2 +l_0^2)^3/2 )dt^2 +( 1-2M^2 r^2/ (r^2 +l_0^2)^3/2 )^-1dr^2+r^2dΩ^2. The magic of the above result is that it coincides with the Bardeen solution (<ref>), provided P⟶ l_0. This is reminiscent of the relation between the holographic metric and the Reissner-Nordström geometry (<ref>): this time, however, one can say that the Dirac string has been traded with a closed bosonic string. The general properties of the horizon structure and thermodynamics are similar to what seen in the context of the holographic metric – see <ref>) – <ref>) in Sec. <ref>. Horizon extremization allows for a SCRAM phase at the end of the evaporation, making the hole a stable system from a thermodynamic viewpoint. The Hawking temperature reads T= 1/4π r_+ (1 -3 ł_0^2/r_+^2 +ł_0^2), while the entropy is S = 𝒜_+/4[ (1 -8πł_0^2/𝒜_+) √(1 +4πł_0^2/𝒜_+)+12πł_0^2/𝒜_+(arsinh√(𝒜_+/4πł_0^2) -arsinh√(2))], with 𝒜_+ = 4π r_+^2. The great advantage of the metric (<ref>) is the stability. This is a property in marked contrast to the case of the Bardeen metric, than can be, at the most a transient state. Even by postulating the existence of magnetic monopoles at some point of the history of the Universe <cit.>, their coupling has to be much stronger than the QED coupling <cit.> α_m≫α_e∼ 137^-1. This would imply for the Bardeen metric a sudden decay into the Schwarzschild black hole. Charged and charged rotating regular T-duality black holes have recently been derived. The novelty is the replacement of the ring singularity with a finite tension rotating string – for further details see <cit.>. § A SHORT GUIDE TO BLACK HOLES IN NONCOMMUTATIVE GEOMETRY We are going to present a family of black hole solutions, that represents a sort of coronation of the program of regular black holes in string theory. Indeed, after almost 20 years since their derivation, their good properties are still unmatched. One should start by recalling that noncommutative geometry (NCG) is a field in Mathematics whose goal is the study of noncommutative algebras on certain topological spaces. In Physics, NCG has well known applications. For example, at the heart of quantum mechanics there is a noncommutative geometry, i.e., the algebra of quantum operators. The idea that further physically meaningful results can be obtained from NCG, however, remained dormant at least until the 1990's. At that time, Connes proposed the study of fundamental interactions from the spectral triple principle <cit.>.[The spectral triple is made of three items, a real, associative, noncommutative algebra 𝒜, a Hilbert space ℋ and a self adjoint operator ƪ on it.] As a main goal, Connes aimed to construct a “quantum version” of the spacetime, by establishing a relation similar to that between quantum mechanics and classical phase space <cit.> (see also <cit.> for a pedagogical introduction). The most simple way to construct a noncommutative geometry is based on the replacement of conventional coordinates with noncommutative operators [x^i, x^j]=iθ^ij where θ^ij is a constant, real valued, antisymmetric D× D matrix. The above commutator implies a new kind of uncertainty Δ x^i Δ x^j ≥1/2|θ^ij|, that can be used to improve the bad short distance behavior of fields propagating on the noncommutative geometry. To achieve this goal, one can deform field Lagrangians by introducing a suitable non-local product. For instance, a realization of noncommutive algebra of functions is based on the Moyal-produced (also known as star product or Weyl–Groenewold product) f⋆ g ≡. e^(i/2)θ^ij∂/∂ξ^i∂/∂η^jf(x+ξ)g(x+η)|_ξ=η=0, that can be used as a starting point to obtain a noncommutative field theory – for reviews see e.g. <cit.>. Probably the biggest push to the popularity of noncommutative field theory was given by its connection to string theory. Open strings ending on D-branes display a noncommutative behavior in the presence of a non vanishing, (constant) Kalb-Ramond B-field <cit.> θ^ij∼ (2πα^')^2(1/g+2πα^' BB1/g-2πα^' B)^ij, where g is the metric tensor. Noncommutative gravity follows a similar procedure for the metric field, defined over the underlying noncommutative manifold. The program of noncommutative gravity is, however, still in progress. Apart from some specific examples, one still misses a consistent noncommutative version of general relativity. In addition, the existing attempts to derive noncommutative corrections to classical black hole solutions run into the general difficulty of improving curvature singularities – see <cit.>. In 2003, the possibility of obtaining from noncommutative geometry something meaningful for the physics of black hole physics was still perceived as quite remote. This was a time, that followed the “explosive” predictions about the possibility of a plentiful production of mini black holes in particle detectors <cit.>. Operations at the LHC, however, began only five year later. As a result, there was a huge pressure to predict the experimental signatures of such black holes. If the terascale quantum gravity paradigm was correct, it was expected to have repercussions on mini black hole cross section, evaporation and detection <cit.>. Given this background, there was an unconventional attempt to study noncommutative geometry stripped of all elements, apart from its nonlocal character. From (<ref>) one can guess that NCG introduces Gaussian damping terms. To prove this, Smailagic and Spallucci considered the average of noncommutative operators ⟨ x^i ⟩, on states of minimal uncertainty, namely coherent states similar to those introduced by Glauber in quantum optics <cit.>. Such averages were interpreted as the closest thing to the conventional concept of coordinate. Initial results for path integrals on the noncommutative plane led to the conclusion that Dirac delta distributions are smeared out and become Gaussian functions, whose width is controlled by the noncommutative parameter θ <cit.>.[The matrix θ^ij in (<ref>) can be written as θ^ij=θε^ij. The parameter θ has the dimension of a length squared.] The result was later formalized in terms of a nonlocal field theory formulation <cit.>. Green's function equation (<ref>) was determined by applying a non-local operator to the source term namely[Here the signature is Euclidean.] δ^(D)(x-y)⟶ f_θ(x,y) = e^θδ^(D)(x-y). To derive a spacetime that account for noncommutative effects, one has to recall that the metric field can be seen as a “thermometer” that measures the average fluctuations of the manifold. From (<ref>), one can derive the effective energy density, ρ(𝐱)=M/(4πθ)^3/2 e^-|𝐱|^2/4θ, and follow the procedure presented in Sec. <ref>. There are, however, two caveats: * The resulting spacetime is an effective description that captures just one single character of NCG, i.e. non-locality.[For this reasons, one speaks of “noncommutative geometry inspired” solution. Other authors have termed it as “minimalistic approach” <cit.>.] * The matrix θ^ij is assumed to behave like a field to preserve Lorentz symmetry.[Lorentz violation associated to (<ref>) is a debated issue in the literature, e.g., see <cit.>. ] At this point, one can display the central result <cit.> ds^2=-[ 1-2M^2/rγ(3/2; r^2/4θ)/Γ(3/2) ] dt^2 +[ 1-2M^2/rγ(3/2; r^2/4θ)/Γ(3/2) ]^-1dr^2+r^2dΩ^2. Here γ ( 3/2, x)≡∫_0^xdu/u u^3/2e^-u is the incomplete Gamma function. It guarantees regularity of the manifold and quick convergence to the Schwarzschild metric for r≫√(θ). While the horizon structure and the thermodynamics are similar to those of the other quantum gravity improved metrics (<ref>)(<ref>), the above result has some specific characters. The Gaussian function (<ref>) is a non-polynomial smearing, in agreement to what found by Tseytlin in <cit.>. On the other hand, polynomial functions (like GUP, T-duality) can be seen as the result of a truncation of the expansion over the theta parameter <cit.>. The above metric has been obtained also in the context of non-local gravity actions <cit.> and has been extended to the case of additional spatial dimensions <cit.>, charged <cit.> and rotating <cit.> solutions. From the emission spectra of the higher dimensional extension of (<ref>), one learns that mini black holes tend to radiate soft particles mainly on the brane. This is in marked contrast with results coming from the Schwarzschild-Tangherlini metric <cit.>. § CONCLUSIONS The very essence of the message I want to convey is the relation between particles and black holes in Fig. <ref>. It has already been noticed that strings and black holes share common properties <cit.>. In this work, however, the argument is reinforced and employed to improve classical black hole solutions. From this perspective the regularity of black hole metrics is the natural consequence of non-locality of particles, when described in terms of strings. Another key point concerns the particle-black hole at the intersection of the curves for λ and r_g. The nature of this object is probably one of the most important topics in current research in quantum gravity. Indeed, the particle-black hole is essential to guarantee a self-complete character of gravity. Its mass and radius are related to the fundamental units of quantum gravity and string theory, along the common denominator of non-locality. This is evident also from the correspondence between cut offs, √(α^') (string, GUP), l_0 (T-duality, GUP), self completeness, √(θ) (NCG). In this work, we have also mentioned some of the existing difficulties, e.g., the details of the collapse at the Planck scale, the absence of an actual “quantum manifold”. This means that the program of quantum gravity is far from being complete. It is also not clear if the predictions emerging from string theory will have experimental corroboration in the future. The ideas here presented, however, tend to support a less pessimistic scenario. Black holes could offer a testbed for fundamental physics, that is alternative to conventional experiments in high energy particle physics. §.§.§ Acknowledgments The work of P.N. has partially been supported by GNFM, Italy's National Group for Mathematical Physics. P.N. is grateful to Cosimo Bambi for the invitation to submit the present contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, Singapore. P.N. is grateful to Athanasios Tzikas for the support in drawing the picture. 125 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [EHT(2019)]EHT19PR https://www.eso.org/public/news/eso1907/ title Astronomers Capture First Image of a Black Hole, (year 2019)NoStop [Abbott et al.(2016)Abbott et al.]LIV16 author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), 10.1103/PhysRevLett.116.061102 journal journal Phys. Rev. Lett. volume 116, pages 061102 (year 2016), http://arxiv.org/abs/1602.03837arXiv:1602.03837 [gr-qc]NoStop [Rendall(2000)]Ren00 author author A. D. Rendall, 10.12942/lrr-2000-1 journal journal Living Rev. Rel. volume 3, pages 1 (year 2000), http://arxiv.org/abs/gr-qc/0001008arXiv:gr-qc/0001008NoStop [Sotiriou and Faraoni(2010)]SoF10 author author T. P. Sotiriou and author V. Faraoni, 10.1103/RevModPhys.82.451 journal journal Rev. Mod. Phys. volume 82, pages 451 (year 2010)NoStop [Capozziello and De Laurentis(2011)]CaD11 author author S. Capozziello and author M. De Laurentis, 10.1016/j.physrep.2011.09.003 journal journal Phys. Rept. volume 509, pages 167 (year 2011), http://arxiv.org/abs/1108.6266arXiv:1108.6266 [gr-qc]NoStop [Clifton et al.(2012)Clifton, Ferreira, Padilla, and Skordis]CFP++12 author author T. Clifton, author P. G. Ferreira, author A. Padilla, and author C. Skordis, 10.1016/j.physrep.2012.01.001 journal journal Phys. Rept. volume 513, pages 1 (year 2012), http://arxiv.org/abs/1106.2476arXiv:1106.2476 [astro-ph.CO]NoStop [Bowyer et al.(1965)Bowyer, Byram, Chubb, and Friedman]BBCF65 author author S. Bowyer, author E. T. Byram, author T. A. Chubb, and author H. Friedman, 10.1126/science.147.3656.394 journal journal Science volume 147, pages 394 (year 1965)NoStop [Fulling(1973)]Ful73 author author S. A. Fulling, 10.1103/PhysRevD.7.2850 journal journal Phys. Rev. D volume 7, pages 2850 (year 1973)NoStop [Davies(1975)]Dav75 author author P. C. W. Davies, 10.1088/0305-4470/8/4/022 journal journal J. Phys. A Math. Gen. volume 8, pages 609 (year 1975)NoStop [Unruh(1976)]Unr76 author author W. G. Unruh, 10.1103/PhysRevD.14.870 journal journal Phys. Rev. D volume 14, pages 870 (year 1976)NoStop [Hawking(1975)]Haw75 author author S. W. Hawking, 10.1007/BF02345020 journal journal Comm.Math. Phys. volume 43, pages 199 (year 1975)NoStop [Bekenstein(1973)]Bek73 author author J. D. Bekenstein, 10.1103/PhysRevD.7.2333 journal journal Phys. Rev. volume D7, pages 2333 (year 1973)NoStop [Bardeen et al.(1973)Bardeen, Carter, and Hawking]BCH73 author author J. M. Bardeen, author B. Carter, and author S. W. Hawking, 10.1007/BF01645742 journal journal Commun. Math. Phys. volume 31, pages 161 (year 1973)NoStop [Wald(1984)]Wal84 author author R. M. Wald, 10.7208/chicago/9780226870373.001.0001 title General Relativity (publisher Chicago Univ. Pr., address Chicago, USA, year 1984)NoStop [Balbinot and Barletta(1988)]BaB88 author author R. Balbinot and author A. Barletta, 10.1088/0264-9381/5/1/004 journal journal Class. Quant. Grav. volume 5, pages L11 (year 1988)NoStop [Balbinot and Poisson(1993)]BaP93 author author R. Balbinot and author E. Poisson, 10.1103/PhysRevLett.70.13 journal journal Phys. Rev. Lett. volume 70, pages 13 (year 1993)NoStop [Birrell and Davies(1984)]BiD84 author author N. D. Birrell and author P. C. W. Davies, 10.1017/CBO9780511622632 title Quantum Fields in Curved Space, Cambridge Monographs on Mathematical Physics (publisher Cambridge Univ. Press, address Cambridge, UK, year 1984)NoStop [Penrose(1965)]Pen65 author author R. Penrose, 10.1103/PhysRevLett.14.57 journal journal Phys. Rev. Lett. volume 14, pages 57 (year 1965)NoStop [Gliner(1966)]Gli66 author author E. B. Gliner, @noop journal journal Sov. J. Exp. Th. Phys. volume 22, pages 378 (year 1966)NoStop [Sakharov(1966)]Sak66 author author A. D. Sakharov, @noop journal journal Sov. Phys. JETP volume 22, pages 241 (year 1966)NoStop [Bardeen(1968)]Bar68 author author J. M. Bardeen, in @noop booktitle Proceedings of International Conference GR5 (Tbilisi, USSR) (year 1968) p. pages 174NoStop [Ayon-Beato and Garcia(2000)]AyG00 author author E. Ayon-Beato and author A. Garcia, 10.1016/S0370-2693(00)01125-4 journal journal Phys. Lett. volume B493, pages 149 (year 2000), http://arxiv.org/abs/gr-qc/0009077arXiv:gr-qc/0009077 [gr-qc]NoStop [Dymnikova(1992)]Dym92 author author I. Dymnikova, 10.1007/BF00760226 journal journal Gen. Rel. Grav. volume 24, pages 235 (year 1992)NoStop [Ayon-Beato and Garcia(1999a)]AyG99a author author E. Ayon-Beato and author A. Garcia, 10.1016/S0370-2693(99)01038-2 journal journal Phys. Lett. volume B464, pages 25 (year 1999a), http://arxiv.org/abs/hep-th/9911174arXiv:hep-th/9911174 [hep-th]NoStop [Ayon-Beato and Garcia(1999b)]AyG99b author author E. Ayon-Beato and author A. Garcia, 10.1023/A:1026640911319 journal journal Gen. Rel. Grav. volume 31, pages 629 (year 1999b), http://arxiv.org/abs/gr-qc/9911084arXiv:gr-qc/9911084 [gr-qc]NoStop [Ayon-Beato and Garcia(1998)]AyG99c author author E. Ayon-Beato and author A. Garcia, 10.1103/PhysRevLett.80.5056 journal journal Phys. Rev. Lett. volume 80, pages 5056 (year 1998), http://arxiv.org/abs/gr-qc/9911046arXiv:gr-qc/9911046 [gr-qc]NoStop [Bronnikov(2001)]Bro01 author author K. A. Bronnikov, 10.1103/PhysRevD.63.044005 journal journal Phys. Rev. volume D63, pages 044005 (year 2001), http://arxiv.org/abs/gr-qc/0006014arXiv:gr-qc/0006014 [gr-qc]NoStop [Mbonye and Kazanas(2005)]Mbo05 author author M. R. Mbonye and author D. Kazanas, 10.1103/PhysRevD.72.024016 journal journal Phys. Rev. D volume 72, pages 024016 (year 2005), http://arxiv.org/abs/gr-qc/0506111arXiv:gr-qc/0506111NoStop [Hayward(2006)]Hay06 author author S. A. Hayward, 10.1103/PhysRevLett.96.031103 journal journal Phys. Rev. Lett. volume 96, pages 031103 (year 2006), http://arxiv.org/abs/gr-qc/0506126arXiv:gr-qc/0506126 [gr-qc]NoStop [Dymnikova(2023)]Dym23 author author I. Dymnikova, @noop title Regular rotating black holes and solitons with the de Sitter/phantom interiors, (year 2023), note contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, SingaporeNoStop [Bronnikov(2023)]Bro23 author author K. A. Bronnikov, @noop title Regular black holes sourced by nonlinear electrodynamics, (year 2023), note contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, SingaporeNoStop [Ansoldi(2008)]Ans08 author author S. Ansoldi, in https://inspirehep.net/record/778724/files/arXiv:0802.0330.pdf booktitle Conference on Black Holes and Naked Singularities Milan, Italy, May 10-12, 2007 (year 2008) http://arxiv.org/abs/0802.0330arXiv:0802.0330 [gr-qc]NoStop [Nicolai(2014)]Nicolai13 author author H. Nicolai, title Quantum Gravity: the view from particle physics, in 10.1007/978-3-319-06349-2_18 booktitle General Relativity, Cosmology and Astrophysics: Perspectives 100 years after Einstein's stay in Prague, series Fundam. Theor. Phys., Vol. volume 177, editor edited by editor J. Bičák and editor T. Ledvinka (publisher Springer International Publishing, address Switzerland, year 2014) pp. pages 369–387, http://arxiv.org/abs/1301.5481arXiv:1301.5481 [gr-qc]NoStop [Maldacena(1999)]Mal99 author author J. M. Maldacena, 10.1023/A:1026654312961, 10.4310/ATMP.1998.v2.n2.a1 journal journal Int. J. Theor. Phys. volume 38, pages 1113 (year 1999), http://arxiv.org/abs/hep-th/9711200arXiv:hep-th/9711200 [hep-th]NoStop [Stelle(1998)]Ste98 author author K. S. Stelle, in @noop booktitle ICTP Summer School in High-energy Physics and Cosmology (year 1998) http://arxiv.org/abs/hep-th/9803116arXiv:hep-th/9803116NoStop [Antoniadis et al.(1998)Antoniadis, Arkani-Hamed, Dimopoulos, and Dvali]AAD+98 author author I. Antoniadis, author N. Arkani-Hamed, author S. Dimopoulos, and author G. Dvali, 10.1016/S0370-2693(98)00860-0 journal journal Phys. Lett. B volume 436, pages 257 (year 1998), http://arxiv.org/abs/hep-ph/9804398arXiv:hep-ph/9804398NoStop [Arkani-Hamed et al.(1998)Arkani-Hamed, Dimopoulos, and Dvali]ADD98 author author N. Arkani-Hamed, author S. Dimopoulos, and author G. Dvali, 10.1016/S0370-2693(98)00466-3 journal journal Phys. Lett. B volume 429, pages 263 (year 1998), http://arxiv.org/abs/hep-ph/9803315arXiv:hep-ph/9803315NoStop [Arkani-Hamed et al.(1999)Arkani-Hamed, Dimopoulos, and Dvali]ADD99 author author N. Arkani-Hamed, author S. Dimopoulos, and author G. Dvali, 10.1103/PhysRevD.59.086004 journal journal Phys. Rev. D volume 59, pages 086004 (year 1999), http://arxiv.org/abs/hep-ph/9807344arXiv:hep-ph/9807344NoStop [Randall and Sundrum(1999a)]RaS99a author author L. Randall and author R. Sundrum, 10.1103/PhysRevLett.83.3370 journal journal Phys. Rev. Lett. volume 83, pages 3370 (year 1999a), http://arxiv.org/abs/hep-ph/9905221arXiv:hep-ph/9905221NoStop [Randall and Sundrum(1999b)]RaS99b author author L. Randall and author R. Sundrum, 10.1103/PhysRevLett.83.4690 journal journal Phys. Rev. Lett. volume 83, pages 4690 (year 1999b), http://arxiv.org/abs/hep-th/9906064arXiv:hep-th/9906064NoStop [Gogberashvili(2000)]Gog98a author author M. Gogberashvili, 10.1209/epl/i2000-00162-1 journal journal Europhys. Lett. volume 49, pages 396 (year 2000), http://arxiv.org/abs/hep-ph/9812365arXiv:hep-ph/9812365NoStop [Gogberashvili(2002)]Gog98b author author M. Gogberashvili, 10.1142/S0218271802002992 journal journal Int. J. Mod. Phys. D volume 11, pages 1635 (year 2002), http://arxiv.org/abs/hep-ph/9812296arXiv:hep-ph/9812296NoStop [Gogberashvili(1999)]Gog99 author author M. Gogberashvili, 10.1142/S021773239900208X journal journal Mod. Phys. Lett. A volume 14, pages 2025 (year 1999), http://arxiv.org/abs/hep-ph/9904383arXiv:hep-ph/9904383NoStop [Banks and Fischler(1999)]BaF99 author author T. Banks and author W. Fischler, @noop title A Model for high-energy scattering in quantum gravity, (year 1999), note unpublished paper, http://arxiv.org/abs/hep-th/9906038arXiv:hep-th/9906038NoStop [Mathur(2005)]Mat05 author author S. D. Mathur, 10.1002/prop.200410203 journal journal Fortsch. Phys. volume 53, pages 793 (year 2005), http://arxiv.org/abs/hep-th/0502050arXiv:hep-th/0502050NoStop [Vafa(2005)]Vaf05 author author C. Vafa, @noop title The String landscape and the swampland, (year 2005), note Based on talks given at the Einstein Symposium in Alexandria, at the 2005 Simons Workshop in Mathematics and Physics, and the talk to have been presented at Strings 2005., http://arxiv.org/abs/hep-th/0509212arXiv:hep-th/0509212NoStop [Amati et al.(1987)Amati, Ciafaloni, and Veneziano]ACV87 author author D. Amati, author M. Ciafaloni, and author G. Veneziano, 10.1016/0370-2693(87)90346-7 journal journal Phys. Lett. B volume 197, pages 81 (year 1987)NoStop [Amati et al.(1988)Amati, Ciafaloni, and Veneziano]ACV88 author author D. Amati, author M. Ciafaloni, and author G. Veneziano, 10.1142/S0217751X88000710 journal journal Int. J. Mod. Phys. A volume 3, pages 1615 (year 1988)NoStop [Amati et al.(1989)Amati, Ciafaloni, and Veneziano]ACV89 author author D. Amati, author M. Ciafaloni, and author G. Veneziano, 10.1016/0370-2693(89)91366-X journal journal Phys. Lett. B volume 216, pages 41 (year 1989)NoStop [Dvali et al.(2011a)Dvali, Folkerts, and Germani]DFG11 author author G. Dvali, author S. Folkerts, and author C. Germani, 10.1103/PhysRevD.84.024039 journal journal Phys. Rev. D volume 84, pages 024039 (year 2011a), http://arxiv.org/abs/1006.0984arXiv:1006.0984 [hep-th]NoStop [Dvali et al.(2011b)Dvali, Giudice, Gomez, and Kehagias]DGGK11 author author G. Dvali, author G. F. Giudice, author C. Gomez, and author A. Kehagias, 10.1007/JHEP08(2011)108 journal journal JHEP volume 08, pages 108 (year 2011b), http://arxiv.org/abs/1010.1415arXiv:1010.1415 [hep-ph]NoStop [Aurilia and Spallucci(2013a)]AuS13 author author A. Aurilia and author E. Spallucci, 10.1155/2013/531696 journal journal Adv. High Energy Phys. volume 2013, pages 531696 (year 2013a), http://arxiv.org/abs/1309.7741arXiv:1309.7741 [hep-th]NoStop [Garay(1995)]Gar95 author author L. J. Garay, 10.1142/S0217751X95000085 journal journal Int. J. Mod. Phys. A volume 10, pages 145 (year 1995), http://arxiv.org/abs/gr-qc/9403008arXiv:gr-qc/9403008NoStop [Aurilia and Spallucci(2013b)]AuS02 author author A. Aurilia and author E. Spallucci, @noop title Planck's uncertainty principle and the saturation of Lorentz boosts by Planckian black holes, (year 2013b), note unpublished essay submitted to the Gravity Research Foundation for the 2002-03 competition., http://arxiv.org/abs/1309.7186arXiv:1309.7186NoStop [Dvali and Gomez(2010)]DvG10 author author G. Dvali and author C. Gomez, @noop title Self-Completeness of Einstein Gravity, (year 2010), note unpublished paper, http://arxiv.org/abs/1005.3497arXiv:1005.3497 [hep-th]NoStop [Adler(2010)]Adl10 author author R. J. Adler, 10.1119/1.3439650 journal journal Am. J. Phys. volume 78, pages 925 (year 2010), http://arxiv.org/abs/1001.1205arXiv:1001.1205 [gr-qc]NoStop [Carr(2016)]Car13 author author B. J. Carr, in 10.1007/978-3-319-20046-0_19 booktitle 1st Karl Schwarzschild Meeting on Gravitational Physics, series Springer Proc. Phys., Vol. volume 170, editor edited by editor P. Nicolini, editor M. Kaminski, editor J. Mureika, and editor M. Bleicher (publisher Springer International Publishing, address Switzerland, year 2016) pp. pages 159–167, http://arxiv.org/abs/1402.1427arXiv:1402.1427 [gr-qc]NoStop [Padmanabhan(2020)]Pad20 author author T. Padmanabhan, 10.1016/j.physletb.2020.135774 journal journal Phys. Lett. B volume 809, pages 135774 (year 2020)NoStop [Carr et al.(2023)Carr, Mureika, and Nicolini]CMN23 author author B. Carr, author J. Mureika, and author P. Nicolini, @noop title Elementary Particles as Black Holes: Linking Experimental Tests in the Microscopic and Macroscopic Regimes , (year 2023), note in preparationNoStop [Carr et al.(2015)Carr, Mureika, and Nicolini]CMN15 author author B. J. Carr, author J. Mureika, and author P. Nicolini, 10.1007/JHEP07(2015)052 journal journal JHEP volume 07, pages 052 (year 2015), http://arxiv.org/abs/1504.07637arXiv:1504.07637 [gr-qc]NoStop [Carr et al.(2020)Carr, Mentzer, Mureika, and Nicolini]CMMN20 author author B. Carr, author H. Mentzer, author J. Mureika, and author P. Nicolini, 10.1140/epjc/s10052-020-08706-0 journal journal Eur. Phys. J. C volume 80, pages 1166 (year 2020), http://arxiv.org/abs/2006.04892arXiv:2006.04892 [gr-qc]NoStop [Mureika and Nicolini(2013)]MuN13 author author J. Mureika and author P. Nicolini, 10.1140/epjp/i2013-13078-0 journal journal Eur. Phys. J. Plus volume 128, pages 78 (year 2013), http://arxiv.org/abs/1206.4696arXiv:1206.4696 [hep-th]NoStop [Knipfer et al.(2019)Knipfer, Köppel, Mureika, and Nicolini]KKM+19 author author M. Knipfer, author S. Köppel, author J. Mureika, and author P. Nicolini, 10.1088/1475-7516/2019/08/008 journal journal JCAP volume 08, pages 008 (year 2019), http://arxiv.org/abs/1905.03233arXiv:1905.03233 [gr-qc]NoStop [Nicolini(2018)]Nic18 author author P. Nicolini, 10.1016/j.physletb.2018.01.013 journal journal Phys. Lett. volume B778, pages 88 (year 2018), http://arxiv.org/abs/1712.05062arXiv:1712.05062 [gr-qc]NoStop [Hawking(1974)]Haw74 author author S. W. Hawking, 10.1038/248030a0 journal journal Nature volume 248, pages 30 (year 1974)NoStop [Hawking(1971)]Haw71 author author S. Hawking, @noop journal journal Mon. Not. Roy. Astron. Soc. volume 152, pages 75 (year 1971)NoStop [Carr and Hawking(1974)]CaH74 author author B. J. Carr and author S. W. Hawking, @noop journal journal Mon. Not. Roy. Astron. Soc. volume 168, pages 399 (year 1974)NoStop [Bousso and Hawking(1995)]BoH95 author author R. Bousso and author S. W. Hawking, 10.1103/PhysRevD.52.5659 journal journal Phys. Rev. volume D52, pages 5659 (year 1995), http://arxiv.org/abs/gr-qc/9506047arXiv:gr-qc/9506047 [gr-qc]NoStop [Nicolini(2009)]Nic09 author author P. Nicolini, 10.1142/S0217751X09043353 journal journal Int. J. Mod. Phys. volume A24, pages 1229 (year 2009), http://arxiv.org/abs/0807.1939arXiv:0807.1939 [hep-th]NoStop [Denardo and Spallucci(1978)]DeS78 author author G. Denardo and author E. Spallucci, 10.1007/BF02726800 journal journal Nuovo Cim. B volume 44, pages 381 (year 1978)NoStop [Gibbons(1975)]Gib75 author author G. W. Gibbons, 10.1007/BF01609829 journal journal Commun. Math. Phys. volume 44, pages 245 (year 1975)NoStop [Page(2006)]Pag06 author author D. N. Page, 10.1086/508858 journal journal Astrophys. J. volume 653, pages 1400 (year 2006), http://arxiv.org/abs/astro-ph/0610340arXiv:astro-ph/0610340NoStop [Nicolini and Spallucci(2014)]NiS14 author author P. Nicolini and author E. Spallucci, 10.1155/2014/805684 journal journal Adv. High Energy Phys. volume 2014, pages 805684 (year 2014), http://arxiv.org/abs/1210.0015arXiv:1210.0015 [hep-th]NoStop [Dvali and Gomez(2012)]DvG12 author author G. Dvali and author C. Gomez, @noop title Black Hole Macro-Quantumness, (year 2012), note unpublished paper, http://arxiv.org/abs/1212.0765arXiv:1212.0765 [hep-th]NoStop [Dvali and Gomez(2013a)]DvG13 author author G. Dvali and author C. Gomez, 10.1002/prop.201300001 journal journal Fortsch. Phys. volume 61, pages 742 (year 2013a), http://arxiv.org/abs/1112.3359arXiv:1112.3359 [hep-th]NoStop [Dvali and Gomez(2013b)]DvG13+ author author G. Dvali and author C. Gomez, 10.1016/j.physletb.2013.01.020 journal journal Phys. Lett. volume B719, pages 419 (year 2013b), http://arxiv.org/abs/1203.6575arXiv:1203.6575 [hep-th]NoStop [Ansoldi et al.(1999)Ansoldi, Aurilia, and Spallucci]AAS99a author author S. Ansoldi, author A. Aurilia, and author E. Spallucci, 10.1016/S0960-0779(98)00115-5 journal journal Chaos Solitons Fractals volume 10, pages 197 (year 1999), http://arxiv.org/abs/hep-th/9803229arXiv:hep-th/9803229NoStop [Nicolini(2022)]Nic22 author author P. Nicolini, 10.1007/s10714-022-02995-4 journal journal Gen. Rel. Grav. volume 54, pages 106 (year 2022), http://arxiv.org/abs/2208.05390arXiv:2208.05390 [hep-th]NoStop [Padmanabhan(1997)]Pad97 author author T. Padmanabhan, 10.1103/PhysRevLett.78.1854 journal journal Phys. Rev. Lett. volume 78, pages 1854 (year 1997), http://arxiv.org/abs/hep-th/9608182arXiv:hep-th/9608182NoStop [Padmanabhan(1998)]Pad98 author author T. Padmanabhan, 10.1103/PhysRevD.57.6206 journal journal Phys. Rev. D volume 57, pages 6206 (year 1998)NoStop [Smailagic et al.(2003)Smailagic, Spallucci, and Padmanabhan]SSP03 author author A. Smailagic, author E. Spallucci, and author T. Padmanabhan, @noop title String theory T duality and the zero point length of space-time, (year 2003), note unpublished paper, http://arxiv.org/abs/hep-th/0308122arXiv:hep-th/0308122NoStop [Spallucci and Fontanini(2005)]SpF05 author author E. Spallucci and author M. Fontanini, title Zero-point length, extra-dimensions and string T-duality, in @noop booktitle New Developments in String Theory Research, editor edited by editor S. A. Grece (publisher Nova Publishers, year 2005)NoStop [Fontanini et al.(2006)Fontanini, Spallucci, and Padmanabhan]FSP06 author author M. Fontanini, author E. Spallucci, and author T. Padmanabhan, 10.1016/j.physletb.2005.12.039 journal journal Phys. Lett. B volume 633, pages 627 (year 2006), http://arxiv.org/abs/hep-th/0509090arXiv:hep-th/0509090NoStop [Krasnikov(1987)]Kra87 author author N. V. Krasnikov, 10.1007/BF01017588 journal journal Theor. Math. Phys. volume 73, pages 1184 (year 1987)NoStop [Tomboulis(1997)]Tom97 author author E. T. Tomboulis, @noop title Superrenormalizable gauge and gravitational theories, (year 1997), note unpublished paper, http://arxiv.org/abs/hep-th/9702146arXiv:hep-th/9702146NoStop [Modesto(2012)]Mod12 author author L. Modesto, 10.1103/PhysRevD.86.044005 journal journal Phys. Rev. D volume 86, pages 044005 (year 2012), http://arxiv.org/abs/1107.2403arXiv:1107.2403 [hep-th]NoStop [Biswas et al.(2012)Biswas, Gerwick, Koivisto, and Mazumdar]BGKM12 author author T. Biswas, author E. Gerwick, author T. Koivisto, and author A. Mazumdar, 10.1103/PhysRevLett.108.031101 journal journal Phys. Rev. Lett. volume 108, pages 031101 (year 2012), http://arxiv.org/abs/1110.5249arXiv:1110.5249 [gr-qc]NoStop [Barvinsky(2003)]Bar03 author author A. O. Barvinsky, 10.1016/j.physletb.2003.08.055 journal journal Phys. Lett. B volume 572, pages 109 (year 2003), http://arxiv.org/abs/hep-th/0304229arXiv:hep-th/0304229NoStop [Hamber and Williams(2005)]HaW05 author author H. W. Hamber and author R. M. Williams, 10.1103/PhysRevD.72.044026 journal journal Phys. Rev. D volume 72, pages 044026 (year 2005), http://arxiv.org/abs/hep-th/0507017arXiv:hep-th/0507017NoStop [Moffat(2011)]Mof10 author author J. W. Moffat, 10.1140/epjp/i2011-11043-7 journal journal Eur. Phys. J. Plus volume 126, pages 43 (year 2011), http://arxiv.org/abs/1008.2482arXiv:1008.2482 [gr-qc]NoStop [Modesto et al.(2011)Modesto, Moffat, and Nicolini]MMN11 author author L. Modesto, author J. W. Moffat, and author P. Nicolini, 10.1016/j.physletb.2010.11.046 journal journal Phys. Lett. volume B695, pages 397 (year 2011), http://arxiv.org/abs/1010.0680arXiv:1010.0680 [gr-qc]NoStop [Giacchini and de Paula Netto(2023)]GiN23 author author B. L. Giacchini and author T. de Paula Netto, @noop title Regular black holes from higher-derivative effective delta sources, (year 2023), note contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, SingaporeNoStop [Gaete and Nicolini(2022)]GaN22 author author P. Gaete and author P. Nicolini, 10.1016/j.physletb.2022.137100 journal journal Phys. Lett. B volume 829, pages 137100 (year 2022), http://arxiv.org/abs/2202.09311arXiv:2202.09311 [hep-th]NoStop [Nicolini et al.(2019)Nicolini, Spallucci, and Wondrak]NSW19 author author P. Nicolini, author E. Spallucci, and author M. F. Wondrak, 10.1016/j.physletb.2019.134888 journal journal Phys. Lett. B volume 797, pages 134888 (year 2019), http://arxiv.org/abs/1902.11242arXiv:1902.11242 [gr-qc]NoStop [Preskill(1979)]Pre79 author author J. Preskill, 10.1103/PhysRevLett.43.1365 journal journal Phys. Rev. Lett. volume 43, pages 1365 (year 1979)NoStop [Preskill(1984)]Pre84 author author J. Preskill, 10.1146/annurev.ns.34.120184.002333 journal journal Ann. Rev. Nucl. Part. Sci. volume 34, pages 461 (year 1984)NoStop [Gaete et al.(2022)Gaete, Jusufi, and Nicolini]GJN22 author author P. Gaete, author K. Jusufi, and author P. Nicolini, 10.1016/j.physletb.2022.137546 journal journal Phys. Lett. B volume 835, pages 137546 (year 2022), http://arxiv.org/abs/2205.15441arXiv:2205.15441 [hep-th]NoStop [Connes(1995)]Con95 author author A. Connes, 10.1063/1.531241 journal journal J. Math. Phys. volume 36, pages 6194 (year 1995)NoStop [Connes(1996)]Con96 author author A. Connes, 10.1007/BF02506388 journal journal Commun. Math. Phys. volume 182, pages 155 (year 1996), http://arxiv.org/abs/hep-th/9603053arXiv:hep-th/9603053NoStop [Schucker(2005)]Sch05 author author T. Schucker, 10.1007/978-3-540-31532-2_6 journal journal Lect. Notes Phys. volume 659, pages 285 (year 2005), http://arxiv.org/abs/hep-th/0111236arXiv:hep-th/0111236NoStop [Szabo(2003)]Sza01 author author R. J. Szabo, 10.1016/S0370-1573(03)00059-0 journal journal Phys. Rept. volume 378, pages 207 (year 2003), http://arxiv.org/abs/hep-th/0109162arXiv:hep-th/0109162NoStop [Douglas and Nekrasov(2001)]DoN01 author author M. R. Douglas and author N. A. Nekrasov, 10.1103/RevModPhys.73.977 journal journal Rev. Mod. Phys. volume 73, pages 977 (year 2001), http://arxiv.org/abs/hep-th/0106048arXiv:hep-th/0106048NoStop [Seiberg and Witten(1999)]SeW99 author author N. Seiberg and author E. Witten, 10.1088/1126-6708/1999/09/032 journal journal JHEP volume 09, pages 032 (year 1999), http://arxiv.org/abs/hep-th/9908142arXiv:hep-th/9908142NoStop [Dimopoulos and Landsberg(2001)]DiL01 author author S. Dimopoulos and author G. L. Landsberg, 10.1103/PhysRevLett.87.161602 journal journal Phys. Rev. Lett. volume 87, pages 161602 (year 2001), http://arxiv.org/abs/hep-ph/0106295arXiv:hep-ph/0106295NoStop [Giddings and Thomas(2002)]GiT02 author author S. B. Giddings and author S. D. Thomas, 10.1103/PhysRevD.65.056010 journal journal Phys. Rev. D volume 65, pages 056010 (year 2002), http://arxiv.org/abs/hep-ph/0106219arXiv:hep-ph/0106219NoStop [Mureika et al.(2012)Mureika, Nicolini, and Spallucci]MNS12 author author J. Mureika, author P. Nicolini, and author E. Spallucci, 10.1103/PhysRevD.85.106007 journal journal Phys. Rev. D volume 85, pages 106007 (year 2012), http://arxiv.org/abs/1111.5830arXiv:1111.5830 [hep-ph]NoStop [Nicolini et al.(2015)Nicolini, Mureika, Spallucci, Winstanley, and Bleicher]NMSW15 author author P. Nicolini, author J. Mureika, author E. Spallucci, author E. Winstanley, and author M. Bleicher, in 10.1142/9789814623995_0478 booktitle 13th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (year 2015) pp. pages 2495–2497, http://arxiv.org/abs/1302.2640arXiv:1302.2640 [hep-th]NoStop [Glauber(1963)]Gla63 author author R. J. Glauber, 10.1103/PhysRev.131.2766 journal journal Phys. Rev. volume 131, pages 2766 (year 1963)NoStop [Smailagic and Spallucci(2003a)]SmSp03 author author A. Smailagic and author E. Spallucci, 10.1088/0305-4470/36/39/103 journal journal J. Phys. volume A36, pages L517 (year 2003a), http://arxiv.org/abs/hep-th/0308193arXiv:hep-th/0308193 [hep-th]NoStop [Smailagic and Spallucci(2003b)]SmSp03b author author A. Smailagic and author E. Spallucci, 10.1088/0305-4470/36/33/101 journal journal J. Phys. A volume 36, pages L467 (year 2003b), http://arxiv.org/abs/hep-th/0307217arXiv:hep-th/0307217NoStop [Spallucci et al.(2006)Spallucci, Smailagic, and Nicolini]SSN06 author author E. Spallucci, author A. Smailagic, and author P. Nicolini, 10.1103/PhysRevD.73.084004 journal journal Phys. Rev. D volume 73, pages 084004 (year 2006), http://arxiv.org/abs/hep-th/0604094arXiv:hep-th/0604094NoStop [Vassilevich(2010)]Vas09 author author D. V. Vassilevich, in 10.1142/9789814277839_0017 booktitle Fundamental Interactions: A Memorial Volume for Wolfgang Kummer, editor edited by editor D. Grumiller, editor A. Rebhan, and editor D. Vassilevich (publisher World Scientific, address Singapore, year 2010) pp. pages 293–302, http://arxiv.org/abs/0902.07670902.0767NoStop [Carroll et al.(2001)Carroll, Harvey, Kostelecky, Lane, and Okamoto]CHJK01 author author S. M. Carroll, author J. A. Harvey, author V. A. Kostelecky, author C. D. Lane, and author T. Okamoto, 10.1103/PhysRevLett.87.141601 journal journal Phys. Rev. Lett. volume 87, pages 141601 (year 2001), http://arxiv.org/abs/hep-th/0105082arXiv:hep-th/0105082NoStop [Carlson et al.(2002)Carlson, Carone, and Zobin]CCZ02 author author C. E. Carlson, author C. D. Carone, and author N. Zobin, 10.1103/PhysRevD.66.075001 journal journal Phys. Rev. D volume 66, pages 075001 (year 2002), http://arxiv.org/abs/hep-th/0206035arXiv:hep-th/0206035NoStop [Morita(2003)]Mor03 author author K. Morita, 10.1143/PTP.108.1099 journal journal Prog. Theor. Phys. volume 108, pages 1099 (year 2003), http://arxiv.org/abs/hep-th/0209234arXiv:hep-th/0209234NoStop [Nicolini et al.(2006)Nicolini, Smailagic, and Spallucci]NSS06 author author P. Nicolini, author A. Smailagic, and author E. Spallucci, 10.1016/j.physletb.2005.11.004 journal journal Phys. Lett. volume B632, pages 547 (year 2006), http://arxiv.org/abs/gr-qc/0510112arXiv:gr-qc/0510112 [gr-qc]NoStop [Tseytlin(1995)]Tse95 author author A. A. Tseytlin, 10.1016/0370-2693(95)01228-7 journal journal Phys. Lett. B volume 363, pages 223 (year 1995), http://arxiv.org/abs/hep-th/9509050arXiv:hep-th/9509050NoStop [Kober and Nicolini(2010)]KoN10 author author M. Kober and author P. Nicolini, 10.1088/0264-9381/27/24/245024 journal journal Class. Quant. Grav. volume 27, pages 245024 (year 2010), http://arxiv.org/abs/1005.3293arXiv:1005.3293 [hep-th]NoStop [Rizzo(2006)]Riz06 author author T. G. Rizzo, 10.1088/1126-6708/2006/09/021 journal journal JHEP volume 09, pages 021 (year 2006), http://arxiv.org/abs/hep-ph/0606051arXiv:hep-ph/0606051 [hep-ph]NoStop [Spallucci et al.(2009)Spallucci, Smailagic, and Nicolini]SSN09 author author E. Spallucci, author A. Smailagic, and author P. Nicolini, 10.1016/j.physletb.2008.11.030 journal journal Phys. Lett. volume B670, pages 449 (year 2009), http://arxiv.org/abs/0801.3519arXiv:0801.3519 [hep-th]NoStop [Ansoldi et al.(2007)Ansoldi, Nicolini, Smailagic, and Spallucci]ANS++07 author author S. Ansoldi, author P. Nicolini, author A. Smailagic, and author E. Spallucci, 10.1016/j.physletb.2006.12.020 journal journal Phys. Lett. volume B645, pages 261 (year 2007), http://arxiv.org/abs/gr-qc/0612035arXiv:gr-qc/0612035 [gr-qc]NoStop [Smailagic and Spallucci(2010)]SmS10 author author A. Smailagic and author E. Spallucci, 10.1016/j.physletb.2010.03.075 journal journal Phys. Lett. volume B688, pages 82 (year 2010), http://arxiv.org/abs/1003.3918arXiv:1003.3918 [hep-th]NoStop [Modesto and Nicolini(2010)]MoN+10 author author L. Modesto and author P. Nicolini, 10.1103/PhysRevD.82.104035 journal journal Phys. Rev. volume D82, pages 104035 (year 2010), http://arxiv.org/abs/1005.5605arXiv:1005.5605 [gr-qc]NoStop [Nicolini and Winstanley(2011)]NiW11 author author P. Nicolini and author E. Winstanley, 10.1007/JHEP11(2011)075 journal journal JHEP volume 11, pages 075 (year 2011), http://arxiv.org/abs/1108.4419arXiv:1108.4419 [hep-ph]NoStop ['t Hooft(1990)]tHo90 author author G. 't Hooft, 10.1016/0550-3213(90)90174-C journal journal Nucl. Phys. B volume 335, pages 138 (year 1990)NoStop
http://arxiv.org/abs/2306.03816v2
20230606160222
Parametrization, Prior Independence, and Posterior Asymptotic Normality in the Partially Linear Model
[ "Christopher D. Walker" ]
math.ST
[ "math.ST", "econ.EM", "stat.TH" ]
Microscopic Theory of the Magnetic Susceptibility of Insulators Alistair H. Duff^1, Aidan Lau^1, and J.E. Sipe July 31, 2023 =============================================================== We prove a semiparametric Bernstein-von Mises theorem for a partially linear regression model with independent priors for the low-dimensional parameter of interest and the infinite-dimensional nuisance parameters. Our result mitigates a prior invariance condition that arises from a loss of information in not knowing the nuisance parameter. The key device is a reparametrization of the regression function that is in the spirit of profile likelihood, and, as a result, the prior invariance condition is automatically satisfied because there is no loss of information in the transformed model. As these prior stability conditions can impose strong restrictions on the underlying data-generating process, our results provide a more robust posterior asymptotic normality theorem than the original parametrization of the partially linear model. Keywords: Partially linear model; Bernstein-von Mises; Adaptive Parametrization; Prior Invariance. § INTRODUCTION §.§ Overview This paper is concerned with Bayesian inference for the partially linear regression model. We observe (Y_i,X_i,W_i) drawn from some distribution P_0, where Y_i∈ℝ is an outcome variable, and X_i∈ℝ and W_i∈ℝ^d are covariates. The partially linear model states that the outcome Y_i is related to covariates X_i and W_i via the regression model Y_i = X_iβ_0 + η_0(W_i) + U_i, where U_i is a scalar unobservable that satisfies E_P_0[U_i|X_i,W_i] =0, β_0 is a scalar parameter, and η_0 is an unknown function. This semiparametric regression model has a wide variety of applications. For instance, it arises in an important literature in economics that focuses on control function estimation of production functions with endogenous input choices (e.g., <cit.>, <cit.>, <cit.>, and <cit.>). Our analysis views β_0 as the parameter of interest while treating η_0 as an infinite-dimensional nuisance parameter. A key challenge is that estimation of the nuisance parameter may severely bias estimates of the parameter of interest to the extent that desirable properties such as √(n)-consistency and asymptotic normality are ruined. Specific to the Bayesian approach, the natural assumption that β and η are a priori independent can lead to asymptotic normality for the posterior for β being subject to a prior invariance condition. It arises because the ordinary score function for β is not an efficient score, and, as a result, a change of variables needs to be applied to account for information loss associated with not knowing the nuisance parameter η. Emphasized in <cit.>, the condition requires that the prior for the nuisance parameter exhibit a lack of sensitivity to shifts in the direction of the least favorable direction. Its satisfaction usually imposes the restriction that the nuisance prior is adequate for simultaneously estimating η_0 and approximating the least favorable direction. Consequently, this prior invariance condition leads to a general observation that semiparametric Bernstein-von Mises theorems are fragile because they require carefully selected priors for the nuisance parameters and/or potentially strong restrictions on the underlying data-generating process. As Bayesian methods often have excellent numerical performance and credible sets are often easier to compute than asymptotically efficient frequentist confidence sets, the mitigation of these stability conditions has motivated recent literature proposing dependent priors tailored towards the estimation of low-dimensional parameters in semiparametric models (e.g., <cit.>, <cit.>, <cit.>, and <cit.>). We propose a fully Bayesian inference procedure for the partially linear model that automatically satisfies the prior invariance condition and maintains prior independence between the parameter of interest and the nuisance. The starting point is an interest-respecting transformation of the regression function Y_i = m_01(W_i) + [X_i-m_02(W_i)]β_0 + U_i, where m_01(W_i) and m_02(W_i) denote E_P_0[Y_i|W_i] and E_P_0[X_i|W_i], respectively. The transformed regression model (<ref>) is referred to as the (β,m)-parametrization of the partially linear model throughout the paper. First utilized in <cit.> to develop a √(n)-consistent and asymptotically normal least squares estimator of β_0, (<ref>) separates the parameter into a structural component β_0 and a reduced-form component m_0 (i.e., a functional of P_0 that is identified without knowledge of β_0). Combining this reparametrization with a robustness property of the Gaussian likelihood, we develop a model for the joint distribution of (Y_i,X_i) given W_i such that m_0=(m_01,m_02) can be recovered without knowledge of β_0 and the derivative of the log-likelihood function (with respect to β) coincides with the efficient score under homoskedasticity. We are then able to establish that, under independent priors for β and m=(m_1,m_2), the marginal posterior for β satisfies a Bernstein-von Mises theorem that bypasses the prior invariance condition. To our knowledge, we present the first Bernstein-von Mises theorem for the (β,m)-parametrization of the partially linear model. The theorem does not assume that the distribution (Y_i,X_i) given W_i is normal even though the Gaussian likelihood is used to derive the posterior distribution. Consequently, the findings demonstrate that Bayesian credible sets for the (β,m)-parametrization of the partially linear model with Gaussian errors coincide with asymptotically efficient frequentist confidence sets for β_0 even under misspecification. The intuition is that the Gaussian distribution is adequate for estimating parameters that are identified via mean and variance restrictions because the log-likelihood function resembles a (weighted) least squares objective function. Given advancements in Bayesian computation and the relative ease of obtaining credible sets relative to asymptotically efficient frequentist confidence sets, the asymptotic normality of the posterior is useful because it validates the Bayesian approach as a method of performing asymptotically valid frequentist inference. Moreover, our results offer a `proof of concept' for an approach to posterior asymptotic normality that mitigates prior invariance. The key idea behind the (β,m)-parametrization is profile likelihood (e.g., <cit.>, <cit.>). For β fixed, the maximization of the population Gaussian likelihood with respect to η generates the least favorable curve η_β = m_01-β m_02. Consequently, (<ref>) mimics the structure of the Gaussian profile likelihood problem because it can be rearranged as E_P_0[Y_i|X_i,W_i] = X_iβ_0+η_β_0(W_i). Since the profile likelihood corresponds to that of a least favorable parametric submodel, a parametrization that mimics profiling automatically incorporates the least favorable direction, and, as a result, there is no loss of information in not knowing the nuisance parameters. As a result, the (β,m)-parametrization forms the basis for an adaptive semiparametric model in the sense of <cit.>. Adaptive models have the property that the ordinary score for β coincides with the efficient score and the least favorable direction is equal to zero. Consequently, as long as we can consistently estimate m_0 at a suitable rate (i.e., faster than n^-1/4), the adaptivity of the (β,m)-parametrization reduces the proof of the Bernstein-von Mises theorem to verifying a local asymptotic normality condition for m-indexed parametric submodels. There is no concern about prior invariance because such a condition only arises when there is information loss in not knowing the nuisance parameter. This leads to a general conclusion that prior independence can be an inconsequential assumption for information loss models provided there exists a feasible reparametrization that resembles a least favorable submodel. §.§ Related Literature This paper is related to several active lines of research in statistics and econometrics. One strand is the literature on semiparametric Bernstein-von Mises theorems. Contributions to this literature include <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. The partially linear regression model in the (β,η)-parametrization with independent priors for β and η is considered in <cit.>, <cit.>, <cit.>, and <cit.>. Among these papers, the direct point of comparison is <cit.>. <cit.> emphasized the importance of the prior invariance condition for semiparametric Bernstein-von Mises theorems with Gaussian process priors for nuisance parameters. Our findings are also related to a research agenda focused on semiparametric Bayesian inference procedures designed for the estimation of low-dimensional target parameters. <cit.> mitigate the prior invariance condition in the (β,η)-parametrization of the partially linear model by shifting the center of the prior for η by -βm̂_n2, where m̂_n2 denotes a suitably consistent frequentist estimator of m_02. <cit.>, <cit.>, and <cit.> focus on estimating the average treatment effect (ATE) in a binary treatment potential outcomes model with a selection-on-observables assumption. These papers debias the ATE posterior using priors for the treatment and control group conditional means that depend on plug-in estimators for the propensity score. Like m̂_n2 for the partially linear model, a frequentist estimator of the propensity score shifts the center of the prior for the treatment/control group conditional means by a scaled estimate of propensity score so as to mimic shifts in the direction of the least favorable direction. Our approach is to utilize an adaptive parametrization of the partially linear model in which information about the least favorable direction is encoded directly into the likelihood function, and, as a result, the prior invariance condition is satisfied with independent priors for the parameter of interest and the nuisance. Moreover, we do not use frequentist estimators to specify priors, and, for that reason, our approach more closely adheres to traditional Bayesian analysis. Our findings complement the empirical analysis of <cit.>, where the reparametrization is applied to the case where η_0, m_01, and m_02 are affine functions of W_i in order to eliminate regularization bias. <cit.> combines a different type of reparametrization and a prior correction to derive a Bernstein-von Mises theorem for a single coordinate in a high-dimensional linear regression model under sparsity. A practical benefit of Bernstein-von Mises theorems is that they justify the use of Bayesian procedures for frequentist uncertainty quantification. For that reason, our theoretical results are also relevant to the literature on frequentist semiparametric estimation. The transformed regression model (<ref>) mimics the profile maximum likelihood problem for (<ref>) under Gaussian errors, and, as a result, the asymptotic expansions of the posterior are similar to seminal papers on profile maximum likelihood estimation (e.g., <cit.>, <cit.>). Moreover, the (β,m)-parametrization was first utilized in <cit.> to derive a √(n)-consistent and asymptotically normal least squares estimator of β_0 based on kernel estimators of m_01 and m_02. The connection to the `Robinson transformation' means that our findings dovetail with the literature on two-step semiparametric estimation based on Neyman orthogonal moment conditions (e.g., <cit.>) because the score in the (β,m)-parametrization is an orthogonal moment condition. Finally, the theoretical results in this paper are closely related to the literature on quasi-Bayesian inference (e.g., <cit.>, <cit.>, and <cit.>), an approach to frequentist inference where an extremum estimation objective function is used in place of the likelihood function and Bayesian computation is utilized rather than minimizing the objective function to obtain parameter estimates. §.§ Outline of Paper The paper is organized as follows. Section <ref> formalizes the data-generating process and proposes a Bayesian inference procedure targeted toward the estimation of the low-dimensional parameter of interest. Section <ref> derives conditions under which the marginal posterior for β satisfies a Bernstein-von Mises theorem and presents a set of lower-level conditions for a high-level nuisance posterior consistency assumption. Section <ref> provides an extensive discussion of the findings from Section <ref>. Section <ref> concludes. All proofs are contained in the Appendix. § DATA GENERATING PROCESS AND BAYESIAN INFERENCE §.§ Data Generating Process The observed data is an i.i.d sample {(Y_i,X_i,W_i)}_i=1^n from some distribution P_0, where Y_i∈𝒴⊆ℝ, X_i∈𝒳⊆ℝ, and W_i∈𝒲⊆ℝ^d for i=1,...,n. It is maintained that the conditional distribution of (Y_i,X_i) given W_i has a density p_0(·|W_i) with respect to the Lebesgue measure. Throughout the paper, we use L_2(𝒲) to denote the space of measurable functions f: 𝒲→ℝ with E_P_0|f(W_i)|^2<∞. [Partially Linear Model] The distribution of the data P_0 satisfies the following restrictions: * E_P_0[Y_i|X_i,W_i] = X_iβ_0+η_0(W_i) for some (β_0,η_0) ∈ℬ×ℋ with ℬ⊆ℝ and ℋ⊆ L_2(𝒲). * E_P_0[(X_i-E_P_0[X_i|W_i])^2]∈ [c,c] for some 0<c< c<∞. * Var_P_0[Y_i|X_i,W_i] = σ_01^2 for some σ_01^2∈ (0,∞). * E_P_0[Y_i^2] ∈ (0,∞), E_P_0[X_i^2] ∈ (0,∞), E_P_0[(X_i-E_P_0[X_i|W_i])^4] ∈ (0,∞), and the density p_0 satisfies E_P_0|log p_0(Y_i,X_i|W_i)| ∈ (0,∞). Assumption <ref> describes the restrictions imposed on P_0. The first part of the assumption states that P_0 is compatible with the partially linear model. The assumption that there exists 0<c < c<∞ such that E_P_0(X_i-m_02(W_i))^2∈ [c,c] ensures that β_0 is strongly point identified <cit.>. The third condition is a homoskedasticity restriction. For expositional reasons, we assume that σ_01^2 is known and discuss the possibility of unknown σ_01^2 in Section <ref>. The final restriction is comprised of regularity conditions that are sufficient for the existence of population objective functions and the application of stochastic limit theorems (e.g., Law of Large Numbers and Central Limit Theorems). §.§ Bayesian Inference §.§.§ Sampling Model The sampling model is based on a robustness property of the Gaussian distribution. The starting point is a transformation of the regression function E_P_0[Y_i|X_i,W_i] = m_01(W_i) + [X_i-m_02(W_i)]β_0, where m_01(W_i) and m_02(W_i) denote E_P_0[Y_i|W_i] and E_P_0[X_i|W_i], respectively. We assume that m_0 = (m_01,m_02) ∈ℳ⊆ L_2(𝒲)× L_2(𝒲) and use notation m = (m_1,m_2) for arbitrary elements of ℳ. The transformed regression function (<ref>) characterizes an interest-respecting reparametrization of the partially linear model. It was first applied in <cit.> to derive a √(n)-consistent and asymptotically normal least squares estimator of β_0. As (<ref>) treats m_01 and m_02 as free parameters, a model for the joint distribution of (Y_i,X_i) given W_i is required because m_01 and m_02 are not separately identified from the conditional distribution of Y_i given (X_i,W_i). To that end, the following conditional statistical model serves as the basis for inference: (Y_i,X_i)|{W_i}_i=1^n,β,m ind∼𝒩(m(W_i),V(β)) for i=1,...,n, where 𝒩(m(W_i),V(β)) denotes a bivariate normal distribution with mean m(W_i) = (m_1(W_i),m_2(W_i)) and covariance matrix V(β) = [ σ_01^2 + β^2σ_02^2 βσ_02^2; βσ_02^2 σ_02^2 ] with σ_01^2∈ (0,∞) defined in Assumption <ref> and σ_02^2∈ (0,∞) being a known scalar. Under i.i.d sampling, the likelihood function is L_n(β,m) = ∏_i=1^np_β,m(Y_i,X_i|W_i), where p_β,m(Y_i,X_i|W_i) is the density of 𝒩(m(W_i),V(β)). There are several features of the sampling model that are worth highlighting. First, the parameter (β_0,m_0) is separated into a structural component β_0 that reflects some theory and a reduced-form component m_0 that is simply a functional of the distribution of the observed data (i.e., a conditional expecation). As a result, the ability to recover m_0 should not depend on knowledge of β_0. The first part of Theorem <ref> validates this because it states that, for any β∈ℬ, the conditional mean m_0 uniquely minimizes (in an almost-sure sense) the average Kullback-Leibler (KL) divergence between p_0(·|W_i) and the β-indexed submodel {𝒩(m(W_i),V(β)):m ∈ℳ}. Second, the Gaussian model is adequate for recovering β_0 even though we do not assume that p_0(·|W_i) is contained within the family {𝒩(m(W_i),V(β)): (β,m) ∈ℬ×ℳ}. This reflects a well-known property that if the conditional means and variances of a Gaussian distribution match those under P_0, then the data-generating conditional means and variances minimize the average KL divergence between the true conditional distribution of (Y_i,X_i) given W_i and the possibly misspecified Gaussian model. The second part of Theorem <ref> formalizes this point, and, as a result, confirms that the sampling model is adequate for recovering β_0 even though the sampling model forms a small subset of distributions implied by Assumption <ref>. Finally, the sampling model is tailored towards efficient estimation of β_0 because the log-likelihood function ℓ_n(β,m) = log L_n(β,m) has derivative ℓ̃_n(β,m) with respect to β that is equal to ℓ̃_n(β,m) = ∑_i=1^nℓ̃(Y_i,X_i,W_i,β,m) with ℓ̃(Y_i,X_i,W_i,β,m)= (Y_i-m_1(W_i)-(X_i-m_2(W_i))β)(X_i-m_2(W_i))/σ_01^2. Under homoskedasticity, the function ℓ̃(Y_i,X_i,W_i,β,m) is an efficient score for the partially linear model because semiparametrically efficient estimators β̂_n for β_0 admit the representation √(n)(β̂_n - β_0) = Ĩ_0(m_0)^-11/√(n)ℓ̃_n(β_0,m_0) + o_P_0(1), where Ĩ_0(m_0) = E_P_0[(X_i-m_02(W_i))^2/σ_01^2] is the efficient Fisher information (see, for example, <cit.>). The efficiency guarantees in Section <ref> are largely due to this feature of the sampling model. Suppose that Assumption <ref> holds. Then, for any β∈ℬ fixed, m_0=(m_01,m_02) is the unique solution to the minimization problem min_m ∈ℳ𝔼_P_0[KL(p_0(·|W_i),𝒩(m(W_i),V(β)))], where KL(p_0(·|W_i),𝒩(m(W_i),V(β))) is the KL divergence between p_0(·|W_i) and 𝒩(m(W_i),V(β)). Moreover, (β_0,m_0) is the unique solution to the minimization problem min_(β,m) ∈ℬ×ℳ𝔼_P_0[KL(p_0(·|W_i),𝒩(m(W_i),V(β)))]. §.§.§ Prior and Posterior Our approach to inference is Bayesian. Consequently, we need to specify a prior distribution Π over the parameter space ℬ×ℳ. We assume that Π= Π_ℬ×Π_ℳ, where Π_ℬ×Π_ℳ is a product probability measure defined on a σ-algebra that contains the Borel σ-algebra generated by the product topology. In other words, we adhere to common practice in Bayesian analysis of strict semiparametric models and maintain that β∼Π_ℬ, m ∼Π_ℳ, and β m. Inferences about β (and m) are then obtained via the posterior distribution Π((β,m) ∈ A|{(Y_i,X_i,W_i)}_i=1^n) = ∫_A L_n(β,m) d Π(β,m)/∫_ℬ×ℳ L_n(β,m)d Π(β,m), where A is a measurable set. § BERNSTEIN-VON MISES THEOREM This section shows that the marginal posterior for √(n)(β-β_0) is asymptotically normal. It is comprised of two subsections. The first provides a statement of the theorem as well as providing a discussion of the assumptions that underlie the result. The second offers some primitive conditions for a high-level nuisance posterior consistency requirement. §.§ Main Result We state some assumptions that are sufficient for the Bernstein-von Mises theorem. A discussion follows. [Prior] The following conditions hold: * ∫_ℬ×ℳexp(ℓ_n(β,m))dΠ(β,m)< ∞ with P_0-probability equal to 1. * Π_ℬ has a Lebesgue density π_ℬ that is continuous and positive over a neighborhood ℬ_0 of β_0. Let ||m||_ℳ,2=max{||m_1||_L_2(𝒲),||m_2||_L_2(𝒲)} for each m ∈ℳ, where ||m_j||_L_2(𝒲)^2 = E_P_0|m_j(W_i)|^2 for j =1,2. The notation B_ℳ(m_0,δ) refers to a ball in the pseudometric space (ℳ,||·||_ℳ,2) centered at m_0 with radius δ>0, namely, B_ℳ(m_0,δ)= {m ∈ℳ: ||m-m_0||_ℳ,2<δ}. [Nuisance Posterior Consistency] The following conditions hold: * There exists sets ℳ_n⊆ℳ such that Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n) P_0→ 0 as n→∞. * There exists sequence δ_n→ 0 such that Π(ℳ_n∩ B_ℳ(m_0,Kδ_n)^c|{(Y_i,X_i,W_i)}_i=1^n) P_0→ 0 for every constant K>0 sufficiently large. [Nuisance Posterior Contraction Rate] The sequence {δ_n}_n=1^∞ satisfies δ_n = o(n^-1/4). Let {G̃_n}_n=1^∞ be a sequence of empirical processes G̃_n: ℳ→ℝ with G̃_n(m) = 1/√(n)∑_i=1^n(ℓ̃(Y_i,X_i,W_i,β_0,m)- E_P_0[ℓ̃(Y_i,X_i,W_i,β_0,m)]) for each m ∈ℳ. Let {Ĩ_n}_n=1^∞ be a sequence of random functions Ĩ_n: ℳ→ℝ with Ĩ_n(m) = 1/n∑_i=1^n[(X_i-m_2(W_i))^2/σ_01^2] for each m ∈ℳ. Finally, let Ĩ_0: ℳ→ℝ with Ĩ_0(m) = E_P_0[(X_i-m_2(W_i))^2/σ_01^2] for each m ∈ℳ. [Complexity of Nuisance Functions] The following conditions hold: * {G̃_n}_n=1^∞ satisfies sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|G̃_n(m)-G̃_n(m_0)| P_0→ 0 as n→∞. * {Ĩ_n}_n=1^∞ satisfies sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m)-Ĩ_0(m)| P_0→ 0 as n→∞. Provided that P_0 is compatible with Assumption <ref>, Assumptions <ref>–<ref> are sufficient for asymptotic normality of the marginal posterior for β. Before elaborating on this statement, we briefly discuss the content of the assumptions. Assumption <ref> imposes some regularity conditions on the prior. The first part ensures that the posterior distribution is well-defined while the second states that the prior for β has a density satisfying some continuity and positivity restrictions. Restrictions like Part 2 of Assumption <ref> are routinely applied for proving semiparametric Bernstein-von Mises theorems (see Chapter 12 of <cit.>). Assumption <ref> states that the nonparametric part of the posterior concentrates its mass on sets of the form ℳ_n∩ B_ℳ(m_0,Kδ_n). The restriction to a sequence of approximating spaces ℳ_n is a common device used to establish posterior consistency in large parameter spaces (e.g., <cit.>, <cit.>). Assumption <ref> restricts the nuisance posterior contraction rate δ_n. Finally, Assumption <ref> concerns the complexity of the sets ℳ_n∩ B_ℳ(m_0,Kδ_n). Part 1 is a stochastic equicontinuity restriction on the sequence of empirical processes {G̃_n}_n=1^∞. Conditions like this are typically required for large sample analysis of semiparametric likelihood estimators. For instance, <cit.> impose a stochastic equicontinuity restriction to prove large sample properties of profile maximum likelihood estimators, and, more recently, <cit.> make such an assumption to derive a Bernstein-von-Mises theorem for the ATE under selection-on-observables. The second part states that {Ĩ_n(m): m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)} obeys a uniform law of large numbers. We highlight the role of the assumptions in establishing the semiparametric Bernstein-von Mises theorem. Assumption <ref> guarantees that Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n) is equal to ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ A |{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) + o_P_0(1) for all events A, where Π(β∈ A |{(Y_i,X_i,W_i)}_i=1^n,m) = ∫_A L_n(β,m)d Π_ℬ(β)/∫_ℬL_n(β,m)d Π_ℬ(β) is the posterior in the m-indexed parametric submodel {⊗_i=1^n𝒩(m(W_i),V(β)): β∈ℬ} with prior Π_ℬ. The implication of (<ref>) is that it suffices to focus on the restricted parameter space ℳ_n∩ B_ℳ(m_0,Kδ_n) for the purpose of deriving the large sample properties of the posterior for β. Once (<ref>) is established, we impose Assumptions <ref> and <ref> to derive asymptotic properties for the integrand Π(β∈· |{(Y_i,X_i,W_i)}_i=1^n,m) that hold uniformly in m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). There are two points worth highlighting. First, the restrictions allow us to show that for any sequence M_n→∞, sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) P_0→ 0 as n→∞, where B_ℬ(β_0,M_n/√(n)) = {β∈ℬ: |β - β_0|<M_n/√(n)}. Combined with (<ref>), the convergence in probability (<ref>) implies that Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n) is equal to ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ A |{(Y_i,X_i,W_i)}_i=1^n,β∈ B_ℬ(β_0,M_n/√(n)),m)d Π(m|{(Y_i,X_i,W_i)}_i=1^n)+ o_P_0(1) for all events A. Although both assumptions are utilized in the derivation, Assumption <ref> has a particularly important role in establishing Condition (<ref>) because it ensures that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|√(n)E_P_0[ℓ̃(Y_i,X_i,W_i,β_0,m)]| = o(1). Often referred to as a `no-bias' condition (e.g., Section 25.8 in <cit.>), this property guarantees that perturbations of the nuisance functions (e.g., due to estimation error) do not transmit to the low-dimensional parameter of interest in a way that ruins √(n)-consistency and asymptotic normality. The specific choice δ_n = o(n^-1/4) is because |E_P_0[ℓ̃(Y_i,X_i,W_i,β_0,m)]| is bounded from above by a term that behaves like ||m-m_0||_ℳ,2^2, and, as a result, we require that √(n)δ_n^2→ 0 to control the bias. The reason that we emphasize Assumption <ref> is because theoretical results from <cit.> suggest parametric rate consistency of the conditional posterior Π(β∈·|{(Y_i,X_i,W_i)}_i=1^n,m) for ℬ^*(m), the set of parameters that minimize the average Kullback-Leibler divergence between p_0(·|W_i) and the parametric submodel {𝒩(m(W_i),V(β)): β∈ℬ}. For that reason, Assumption <ref> can be viewed as a technical condition that ensures the appropriate degree of uniformity to deal with the integral whereas Assumption <ref> is of substantive importance because it guarantees the Hausdorff distance between {β_0} and ℬ^*(m) converges to 0 at a rate faster than 1/√(n) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n), and, as a result, consistency for the conditional posterior can be stated in terms of β_0. Said differently, Assumption <ref> implies that the conditional posterior for β is locally insensitive to the value of the nuisance parameter. The second reason that Assumptions <ref> and <ref> are important is that they guarantee a local asymptotic normality condition. Specifically, the restrictions allow us to show that ℓ_n(β,m)-ℓ_n(β_0,m) = 1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0) - 1/2Ĩ_n(m_0)n(β-β_0)^2 + o_P_0(1) for all (β,m) ∈ B_ℬ(β_0,M_n/√(n))× (ℳ_n∩ B_ℳ(m_0,Kδ_n)) with M_n→∞ arbitrarily slowly. Combined with (<ref>), (<ref>), and Part 2 of Assumption <ref>, the local asymptotic normality condition (<ref>) guarantees that Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n) is asymptotically equivalent to ∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0) -1/2Ĩ_n(m_0)n(β-β_0)^2) d β/∫_ B_ℬ(β_0,M_n/√(n))exp(1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0) -1/2Ĩ_n(m_0)n(β-β_0)^2)d β for all events A. Some algebra then reveals that (<ref>) is asymptotically equivalent to 𝒩(Δ̃_n,0,Ĩ_n(m_0)^-1) in total variation with Δ̃_n,0 = Ĩ_n^-11/√(n)ℓ̃_n(β_0,m_0). Putting this all together, we are able to conclude that the total variation distance between 𝒩(Δ̃_n,0,Ĩ_n(m_0)^-1) and the induced posterior for √(n)(β-β_0) converges to 0 in probability. Theorem <ref> formalizes this discussion. As Ĩ_0(m_0)^-1 is the semiparametric efficiency bound for the partially linear model under homoskedasticity (e.g., <cit.>, <cit.>) and ℓ̃(Y_i,X_i,W_i,β_0,m_0) is the corresponding efficient score, the result guarantees that Bayesian credible sets are asymptotically efficient frequentist confidence sets, and, in some cases, that Bayesian point estimators of location (e.g., posterior mean, posterior median, posterior mode) are efficient.[The `in some cases' qualification reflects that the location functional needs to be suitably continuous in total variation for the statement to hold.] Suppose Assumptions <ref>, <ref>, <ref>, <ref>, and <ref> hold and that β_0∈int(ℬ). Then, | |Π(√(n)(β-β_0) ∈· |{(Y_i,X_i,W_i)}_i=1^n) - 𝒩(Δ̃_n,0,Ĩ_n(m_0)^-1) | |_TVP_0⟶ 0, as n→∞, where ||· ||_TV denotes the total variation distance. §.§ Sufficient Conditions for Assumption <ref> Assumption <ref> states that the nonparametric posterior concentrates around sets ℳ_n∩ B_ℳ(m_0,Kδ_n). Theorem <ref> states that m_0 minimizes the average Kullback-Leibler divergence between the true conditional distribution p_0(·|W_i) and 𝒩(m(W_i),V(β)). The assumption is reasonable because the posterior is generally consistent for the parameter value that minimizes the Kullback-Leibler divergence between the true distribution and a misspecified model <cit.>. Nevertheless, it is useful to provide some lower-level conditions that guarantee Assumption <ref>. We start with an example that invokes theorems directly from <cit.> and then provide a general set of sufficient conditions for Assumption <ref> based on the curvature of the log-likelihood ratio. Suppose that 𝒲 = [0,1], ℬ is compact in (ℝ,|·|) with Π_ℬ equal to the uniform distribution on ℬ, and that ℳ = C_r_1^α_1([0,1]) × C_r_2^α_2([0,1]) with C_r^α([0,1]) denoting a Hölder ball with known smoothness α>1/2 and radius r>0. That is, f ∈ C_r^α([0,1]) if and only if ||f||_C^α≤ r, where ||f||_C^α = max_k ∈ℤ_+:|k| ≤αsup_w ∈ [0,1]|D^kf(w)| + max_k ∈ℤ_+: |k|=αsup_w,w' ∈ [0,1]: w ≠ w'|D^kf(w)-D^kf(w')|/|w-w'|^α-α with α denoting the largest integer strictly smaller than α, and D^k is the differential operator (i.e., D^kf= ∂^kf(w)/∂ w^k). In words, C^α_r([0,1]) includes functions that are at most α-times continuously differentiable with the highest derivative satisfying a Lipschitz condition. Further, suppose that, for every M>0, there exists a constant C_M∈ (0,∞) such that sup_w ∈ [0,1]E_P_0[exp(M·||Z_i-m_0(W_i)||_e)|W_i=w]≤ C_M with Z_i = (Y_i,X_i) and ||·||_e denoting the Euclidean norm. As ℳ is uniformly bounded, the tail restriction on the projection error (<ref>) permits a modification of Theorem 4.1 in <cit.> to multivariate regression.[Theorem 4.1 in <cit.> assumes that Z_i = m_0(W_i)+ν_i with ν_i W_i and E[ν_i] =0, and, as a result, the tail condition in their paper is weaker than (<ref>). As Assumption <ref> does not impose these independence conditions, a condition like (<ref>) is required to extend their theorem to our setting.] Consequently, the task is to find a prior Π_ℳ and a sequence {δ_n}_n=1^∞ such that δ_n→ 0, n δ_n^2→∞, and, for a constant C>0, Π_ℳ(m ∈ B_ℳ(m_0,Kδ_n) ) ≥exp(-C n δ_n^2) log N(δ_n,ℳ,||·||_ℳ,2) ≤ n δ_n^2, where log N(δ,ℳ,||·||_ℳ,2) is the metric entropy of ℳ with respect to ||·||_ℳ,2 for δ>0. Indeed, conditions (<ref>) and (<ref>) allow us to invoke Theorem 4.1 of <cit.> and conclude that Π(m ∈ℳ: ||m-m_0||_ℳ,2≥ K δ_n^2|{(Y_i,W_i,X_i)}_i=1^n,β_0) P_0→ 0 for every sufficiently large constant K>0. It should be noted that condition (<ref>) fixes β = β_0. Accounting for uncertainty about β is unlikely to change the nuisance posterior contraction rate in a meaningful way because compactness of ℬ and the fact that Π_ℬ has a continuous Lebesgue density bounded away from zero means that estimation of the finite-dimensional component β_0 plays a minor role in the rate of contraction. We provide prior for which (<ref>), (<ref>), and the `no-bias' condition holds. Let J ≥ 1 be an integer. We assume that Π_ℳ = Π_J×Π_J with Π_J denoting the law of J-fold integrated Brownian motion released at zero with J+1/2≥max{α_1,α_2}. Draws from Π_J are continuous stochastic processes {S(w): w ∈ [0,1]} that satisfy S(w) = ∑_l=0^JZ_lw^l/w!+ (I_0+^JB)(w), where {B(t): t ∈ [0,1]} is standard Brownian motion, Z_1,...,Z_Jiid∼𝒩(0,1), {B(t):t ∈ [0,1} (Z_1,...Z_J), (I_0+^1f)(w) = ∫_0^wf(t)dt, and I(^J_0+f)(w) = ∫_0^w(I^J-1_0+f)(t)dt for J ≥ 2. Integrated Brownian motion is the prior is suitable for estimating Hölder functions because realizations {S(w):w ∈ [0,1]} possess Hölder smoothness of degree J+1/2. Moreover, <cit.> assume integrated Brownian motion for the nuisance parameter in the (β,η)-parametrization so the example is helpful as a point of comparison. Appendix <ref> shows that if max{α_1,α_2}-1/2≤ J ≤ 2 min{α_1,α_2}-1/2, then there exists a sequence {δ_n}_n=1^∞ such that (<ref>) holds and that √(n)δ_n^2→ 0 as n→∞. Specifically, δ_n = max{n^-α_1/(2J+2),n^-α_2/(2J+2)}. The lower bound on J implies (<ref>) while the satisfaction of the `no-bias' condition 1/√(n)E_P_0[ℓ̃_n(β_0,m)] = o(1) for m ∈ℳ∩ B_ℳ(m_0,Kδ_n) is the reason that there is an upper bound for J. Indeed, a prior on the nuisance function that places a high probability on functions that are much smoother than m_0 may result in a bias for the lower-dimensional parameter of interest β_0, and, for that reason, the upper bound on J can be interpreted as ruling out certain forms of oversmoothing. We conclude the section with a general set of sufficient conditions for Assumption <ref>. However, before doing so, it is useful to define some more notation. For any (β,m),(β',m') ∈ℬ×ℳ, define a product semimetric as follows: ρ((β,m),(β',m')) = √(|β-β'|^2+ ||m_1-m_1'||_L_2(𝒲)+||m_2-m_2'||_L_2(𝒲)). Let B_ℬ×ℳ((β',m'),δ) = {(β,m) ∈ℬ×ℳ: ρ((β,m),(β',m')) <δ} for (β',m') ∈ℬ×ℳ and δ>0 be an open ball in (ℬ×ℳ,ρ). Let W_n(β,m) = 1/nℓ_n(β,m) - E_P_0[ℓ_n(β,m)] for any (β,m) ∈ℬ×ℳ. Finally, for any (β',m')∈ℬ×ℳ and δ>0, we define neighborhoods V_2((β',m'),δ) = {(β,m) ∈ℬ×ℳ: E_P_0[logp_β',m'(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤δ^2, E_P_0[(logp_β',m'(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i))^2] ≤δ^2}. Under Assumption <ref> and Part 1 of Assumption <ref>, the following conditions are sufficient for Part 1 of Assumption <ref>: * For any constant C>0, Π(E_P_0[log(p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i))]≤ C) > 0 * For any constant C>0, Π_ℳ(ℳ_n^c)≤exp(-nC). Suppose Assumption <ref>, Part 1 of Assumption <ref>, and the following conditions hold: * ℬ is a compact subset of (ℝ,|· | ). * There exists a sequence δ_n such that n δ_n≥ 1, nδ_n^2→∞, and Π((β,m) ∈ V_2((β_0,m_0),δ_n)≥exp(-n δ_n^2C̃) for n sufficiently large and some constant C̃>0. * There exists a sequence {r_n}_n=1^∞ such that r_n→ 0, n r_n^2→∞, and, for any constant C>0, P_0(sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kr_n/√(2))^c|W_n(β,m)-W_n(β_0,m_0)| ≥ CK^2r_n^2) → 0 as n→∞. Then Part 2 of Assumption <ref> holds for δ_n = max{r_n,δ_n}. If the following condition also holds E_P_0[p_β,m(Y_i,X_i|W_i)/p_β_0,m_0(Y_i,X_i|W_i)]^nΠ_ℳ(ℳ_n^c)/Π((β,m) ∈ V_2((β_0,m_0),δ_n)) = o(exp(-2n δ_n^2)), then Part 1 of Assumption <ref> holds. Propositions <ref> and <ref> are sufficient for Assumption <ref>. Proposition <ref> states that the posterior concentrates around sieves ℳ_n if the marginal prior for m places exponentially small mass on functions contained in ℳ_n^c and the joint prior for (β,m) places positive probability on parameter values that are `close' to (β_0,m_0).[The terminology `sieves' follows <cit.> and refers to a general sequence of approximating spaces ℳ_n that are dense in ℳ but are less complex. It nests finite-dimensional linear sieve spaces that are used for series estimation, however, our theory is far more general than the linear case because it holds for any sieve ℳ_n for which the Assumptions <ref> and <ref> hold.] Closeness is measured in terms of the expected log-likelihood ratio between (β_0,m_0) and (β,m) and is a natural modification of the Kullback-Leibler support condition that is regularly used in posterior consistency proofs for correctly specified models (e.g., <cit.>). Indeed, E_P_0[log(p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] is equal to E_P_0[KL(p_0(·|W_i), 𝒩(m(W_i),V(β))] -E_P_0[KL(p_0(·|W_i), 𝒩(m_0(W_i),V(β_0))] meaning that it reduces to the Kullback-Leibler support condition in the special case where p_0(·|W_i) = 𝒩(m_0(W_i),V(β_0)). The exponentially small prior mass is a commonly encountered sufficient condition for posterior consistency/contraction rates (e.g., <cit.>, <cit.>, and <cit.>) with applications that include linear regression with nonparametric heteroskedasticity (e.g., <cit.>) and linear inverse problems (e.g., <cit.>). Proposition <ref> provides sufficient conditions for the nonparametric posterior to concentrate around ℳ_n∩ B_ℳ(m_0,Kδ_n). Specifically, the compactness assumption on ℬ (i.e., Condition 1) and the identification condition (i.e., Part 2 of Assumption <ref>) allow us to show that ρ^2((β,m),(β_0,m_0)) ≍ E_P_0[log(p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i))] for any (β,m)∈ℬ×ℳ. In other words, the expected log-likelihood is suitably continuous in (β,m), and, as a result, there exists a constant C>0 such that ℓ_n(β,m)-ℓ_n(β_0,m_0) ≤ n[W_n(β,m)-W_n(β_0,m_0)] -C/2 K^2n δ^2_n for all (β,m) ∈ℬ×ℳ_n such that ρ((β,m),(β_0,m_0))≥ K δ_n/√(2). Condition 3 then allows us to conclude that n[W_n(β,m)-W_n(β_0,m_0]≤C/4K^2nδ_n^2 with P_0-probability approaching one so that the numerator of Π(m ∈ B_ℳ(m_0,Kδ_n)^c|{(Y_i,X_i,W_i)}_i=1^n) is bounded from above by exp(-CK^2nδ_n^2/4). Condition 2 combined with a lower bound for the marginal likelihood allows us to show that Π(m ∈ B_ℳ(m_0,Kδ_n)^c|{(Y_i,X_i,W_i)}_i=1^n)≤exp((2+C̃)n δ_n^2-CK^2nδ_n^2/4) and therefore obtain Part 2 of Assumption <ref> for every K>0 sufficiently large. Condition 2 is a prior mass condition that refines the modified Kullback-Leibler support condition in a way that restricts the variance of the log-likelihood ratio. Similar restrictions are regularly employed for posterior contraction rates under misspecification (see equation (2.3) in <cit.>). The overall takeaway is that the nuisance posterior contraction rate depends on the speed at which (β,m) ↦ W_n(β,m) concentrates around 0 (i.e, r_n) and the amount of mass that the prior places around (β_0,m_0) (i.e., δ_n). § DISCUSSION OF THE RESULTS There are several points related to the theoretical results that are worth discussing. §.§ Comparison of Parametrizations We compare the conditions for the semiparametric Bernstein-von Mises theorem under (β,η)- and (β,m)-parametrizations with all parameters independent a priori. The starting point is the (β,η)-parametrization Y_i|{(X_i,W_i)}_i=1^n,β,ηind∼𝒩(X_iβ+ η(W_i),σ_01^2) for i=1,...,n.[It suffices to focus on the conditional distribution of Y_i given W_i and X_i because a factorization of the likelihood and prior independence implies that (β,η) is a posteriori independent to the parameters that index the distribution of X_i given W_i.] Let q_β,η(Y_i|X_i,W_i) denote the probability density function of 𝒩(X_iβ+η(W_i),σ_01^2). As the derivative of log q_β,η(Y_i|X_i,W_i) with respect to β does not constitute an efficient score, the semiparametric Bernstein-von Mises theorem requires a reparametrization to account for the information loss associated with not knowing η. Let η_β(W_i) = η_0(W_i)-m_02(W_i)(β-β_0) be the least favorable curve (à la <cit.>), ζ(W_i) = η(W_i)-η_β(W_i), and let 𝒵 be the induced parameter space for ζ. We can show that q_β,η(Y_i|X_i,W_i) is equal to q_β,ζ^*(Y_i|X_i,W_i) with P_0-probability equal to one, where q_β,ζ^*(Y_i|X_i,W_i) is the probability density function of the infeasible normal distribution 𝒩(η_0(W_i)+X_iβ_0+(X_i-m_02(W_i))(β-β_0)+ζ(W_i),σ_01^2). The solution to max_ζ∈𝒵E_P_0[log q_β,ζ^*(Y_i|X_i,W_i)] does not depend on β (i.e., it is the zero function) meaning that the (β,ζ)-parametrization is an adaptive reparametrization in the sense of the <cit.>. A general version of this infeasible adaptive reparametrization is one of the key theoretical devices exploited in <cit.> to prove a semiparametric Bernstein-von Mises theorem that applies to strict semiparametric models. For the purpose of comparing the (β,η)- and (β,m)-parametrization, we consider the hypothetical thought experiment where q_β,ζ^*(Y_i|X_i,W_i) is feasible and βζ a priori. The exercise reveals that the conditions for a Bernstein-von Mises theorem are substantively similar to those behind Theorem <ref> (see Appendix <ref>). In particular, assumptions analogous to Assumption <ref> and Part 1 of Assumption <ref> are sufficient for asymptotic normality of the posterior in the (β,ζ)-parametrization. Such a conclusion is perhaps not surprising because the first part of Theorem <ref> indicates that the (β,m)-parametrization is adaptive, and, as a result, our procedure can be thought of as a feasible version of the (β,ζ)-parametrization. Nonetheless, it highlights that the essential difference between the (β,m)- and (β,η)-parametrizations must arise from the infeasibility of using q_β,ζ^*(Y_i|X_i,W_i) directly for inference. The infeasibility of the (β,ζ)-parametrization means that its utilization for the semiparametric Bernstein-von Mises theorem in the (β,η)-parametrization is subject to a prior invariance condition while the (β,m)-parametrization is not. Exploiting the (β,ζ)-parametrization to derive a semiparametric Bernstein-von Mises theorem requires a change of variables (β,η)↦ (β,ζ) and, as a result, the induced prior for ζ must be analyzed. If β∼Π_ℬ, η∼Π_ℋ, and βη a priori, then the prior distribution for ζ, denoted Π_β, is necessarily dependent on β because it is equal to the distribution of η-(β-β_0)m_02 under Π_ℋ with β fixed. For a general class of semiparametric models, <cit.> showed that this change of variables leads to a requirement that the prior for the nuisance parameter exhibits an invariance under a shift in the direction of the least-favorable direction. Specifically, | log ((d Π_β/d Π_β_0)(η))/1+n(β-β_0)^2| = o(1) uniformly over shrinking neighborhoods of the true parameter (β_0,η_0). Intuitively, (<ref>) permits a change of measure from dΠ_β(η) to d Π_β_0(η) without consequence because the rate condition means that the likelihood ratio (d Π_β/dΠ_β_0)(η) is absorbed into the remainder of an expansion of the posterior. As Π_β_0 = Π_ℋ, the device effectively reduces analysis to the case where β and ζ are a priori independent. There are no prior invariance concerns in the (β,m)-parametrization. As mentioned earlier, Part 1 of Theorem <ref> implies that our approach employs a feasible adaptive reparametrization of the partially linear model. Combining this with β m a priori, the semiparametric Bernstein-von Mises theorem effectively reduces to the parametric case along submodels {𝒩(m(W_i),V(β)): β∈ℬ} with a requirement that result holds uniformly over ℳ_n∩ B_ℳ(m_0,Kδ_n). Along these submodels, the derivative of the log-likelihood function (with respect to β) coincides with an efficient score, and, as a result, it is enough to apply a standard second-order Taylor expansion of β↦ℓ_n(β,m) around β_0 to arrive at a local asymptotic normality condition that guarantees semiparametric efficiency. In other words, the semiparametric Bernstein-von Mises theorem for (β,m)-parametrization bypasses a prior invariance condition entirely because the (β,m)-parametrization has the property that there is no information loss in not knowing the nuisance parameter. We interpret this as a strength of our procedure because prior invariance conditions like (<ref>) are fragile. A specialization of Example 12.11 from <cit.> illustrates this point. Consider the (β,η)-parametrization with βη, β∼Π_ℬ, and η∼𝒢𝒫(0,K) with 𝒢𝒫(0,K). Mutual absolute continuity of Π_β and Π_β_0 requires that m_02∈ℍ with ℍ denoting the reproducing kernel Hilbert space (RKHS) associated with 𝒢𝒫(0,K) (see Lemma 3.3 of <cit.>). As ℍ is generally a much smaller space than ℋ, this is quite a strong requirement. Fortunately, it is possible to find a sequence {m_n2}_n=1^∞ in ℍ that converges to m_02 meaning that condition (<ref>) can be stated in terms of Π_n,β, the distribution of η-(β-β_0)m_n2 under Π_ℋ with β fixed. However, this only offers partial mitigation of the problem because it still imposes a requirement that the least favorable direction -(β-β_0)m_02 is sufficiently well approximated by the RKHS of 𝒢𝒫(0,K), a condition that may severely restrict the class of DGPs for which the semiparametric Bernstein-von Mises theorem applies. In other words, a prior invariance condition requires a carefully selected nonparametric prior that is suitable for both estimation of the nuisance parameter and approximation of the least favorable direction. Although we are able to bypass the prior invariance condition, there is still a nontrivial `no-bias' condition that our approach must contend with due to the estimation of m_02. The infeasible adaptive parametrization q^*_β,ζ has a score function ∂/∂βlog q^*_β,ζ(Y_i|X_i,W_i) that satisfies E_P_0[∂/∂βlog q^*_β_0,ζ(Y_i|X_i,Wi)] = 0 for all ζ. For this reason, the mitigation of the prior invariance condition is key to the semiparametric Bernstein-von Mises theorem for (β,η)-parametrization because the `no-bias' condition is trivially satisfied when using the infeasible adaptive reparametrization. Assuming the (β,η)-parametrization with 𝒲 = [0,1], η_0,m_02∈ C_r^α([0,1]) with r>0 and α> 1/2, and Π_ℋ = Π_J for some J ∈ℤ_+ that satisfies J + 1/2 ≥α, <cit.> are able to employ a `rate-free' Bernstein-von Mises theorem where satisfaction of prior invariance is internal to the proof. In contrast, the example from Section <ref> required an upper bound for the smoothness hyperparameter in order to mitigate the `no-bias' condition. Based on this observation, one might question the value of the (β,m)-parametrization for Bayesian inference, however, Bickel and Kleijn's `rate-free' Bernstein-von Mises theorem crucially depends on total boundedness for the full parameter space. Without such a restriction on the parameter space, these prior stability issues arise (see Lemma 4.1 in their paper) meaning that the bias from estimating η may ruin the asymptotic normality of the marginal posterior for β without careful prior selection. Our findings state that the posterior is asymptotically normal as long as Π_ℳ is chosen in such a way that the nuisance posterior concentrates around sets that satisfy Assumptions <ref> and <ref>. These empirical process assumptions and rate conditions are substantively similar to those that arise in frequentist two-step semiparametric estimation (e.g., <cit.>). Consequently, the `no-bias' condition that the (β,m)-parametrization must contend is ultimately a reflection of the semiparametric model whereas the prior invariance condition that generally arises in the (β,η)-parametrization is a bias problem that is specific to the Bayesian mode of inference. §.§ Relevance to Literature on Prior Corrections The idea of bypassing prior invariance via a feasible adaptive parametrization means that our findings are relevant to recent literature on prior corrections that are catered toward the estimation of low-dimensional parameters in nonparametric models. A dominant theme in this literature is to use a data-driven adjustment to the prior for the nuisance parameter that incorporates information about the least favorable direction. The correction guarantees that the nuisance prior is stable under a shift in the direction of the least favorable direction, and, as a result, the aforementioned prior invariance condition is satisfied. <cit.> consider the partially linear model and propose a dependent prior for η given β that incorporates information about m_02. Specifically, η|β∼η̃-m̂_n2β with η̃∼Π_ℋ, β∼Π_ℬ, η̃β, and m̂_n2 is a plug-in estimator for m_02 (e.g., Nadaraya-Watson estimator). The sequence of estimators {m̂_n2}_n=1^∞ effectively mimics the approximating sequence {m_n2}_n=1^∞, and, as a result, the prior invariance conditions hold as long as the estimator m̂_n2 is consistent for m_02 at an appropriate rate (see Lemma 4.2 of <cit.>). More recently, <cit.>, <cit.>, and <cit.> consider a binary treatment potential outcome model under selection-on-observables and propose a correction to the priors for the treatment and control group conditional means that incorporates a plug-in estimator of the propensity score. The estimator of the propensity score has a similar role to m̂_n2 in the partially linear model in that it mitigates a prior invariance condition required for a semiparametric Bernstein-von Mises theorems (see the comparison of Theorem 1 and 2 in <cit.>). Our findings complement these important papers. Theorem <ref> proves that a feasible adaptive parametrization of the partially linear model circumvents a potentially fragile prior invariance condition even if the parameter of interest and the nuisance functions are a priori independent. Connecting to the findings of <cit.>, we can show that our approach is equivalent to the Bayesian statistical model Y_i|{(X_i,W_i)}_i=1^n,β,η,m_2ind∼𝒩(X_iβ+η(W_i),σ_01^2) X_i|{W_i}_i=1^n,β,η,m_2ind∼𝒩(m_2(W_i),σ_02^2) for i=1,...,n with η |β,m_2∼ m_1-β m_2, β∼Π_ℬ, m ∼Π_ℳ, and β m. In other words, the (β,m)-parametrization with independent priors for β and m is equivalent to the (β,η)-parametrization with a dependent prior for η that incorporates information about the least favorable direction. For that reason, our approach can be interpreted as offering a fully Bayesian version of the dependent prior in <cit.>. Although equivalent Bayesian models, the (β,m)-parametrization with independent priors for β and m is amenable to computation whereas a dependent prior for η may pose implementation challenges. For instance, the (β,m)-parametrization with an assumption that m is a realization of a Gaussian process that is independent of β yields a simple Gibbs sampling scheme. To update m holding β fixed, we note that Gaussian processes form a class of conjugate priors for a conditional mean in a normal regression model, and, as a result, the conditional posterior for m given β is a Gaussian process. To update β given m, we can use the distribution 𝒩(m_1(W_i)+ (X_i-m_2(W_i))β,σ_01^2) meaning that sampling from the conditional posterior for β given m corresponds to that of a linear regression model with recentered dependent and independent variables Y_i-m_1(W_i) and X_i-m_2(W_i), respectively. Moreover, <cit.> provide extensive simulation evidence that highlights the strong performance of the (β,m)-parametrization in the case where η_0(W_i) = W_i'η_0 for some η_0∈ℝ^d, m_0(W_i)=(W_i'm_01,W_i'm_02) for some m_01,m_02∈ℝ^d, and regularization priors are assumed for the nuisance function coefficients. §.§ Comments on Model Misspecification We conclude this section with some remarks about model misspecification. Theorem <ref> implies that if P_0 satisfies the restrictions of Assumption <ref>, then Bayesian credible sets are asymptotically efficient frequentist confidence sets. It should be emphasized that Theorem <ref> does not assume that the data is normally distributed under P_0. This reflects the model implication Y_i|{(X_i,W_i)}_i=1^n,β,m ind∼𝒩(m_1(W_i)+(X_i-m_2(W_i))β,σ_01^2) for i=1,...,n, and, as a result, the normal model imposes the mean and variance restrictions from Assumption <ref>. For that reason, we can invoke a well-known property that if the conditional means and variances of the Gaussian distribution match those under P_0, then the data-generating conditional means and variances minimize the average KL divergence between the true conditional distribution of (Y_i,X_i) given W_i and the possibly misspecified Gaussian model that is the basis for inference. A similar type of reasoning applies to the modeling choice X_i|{W_i}_i=1^n,β,m ind∼𝒩(m_2(W_i),σ_02^2). However, we do not need to be concerned about whether Var_P_0[X_i|W_i ] equals σ_02^2 because the model for the distribution of X_i and W_i is only included to facilitate consistent estimation of m_02. For that reason, it is enough that the normal model correctly specifies the mean. These robustness properties are regularly invoked for Bayesian analysis of nonparametric regression models (e.g., <cit.>). The efficiency guarantee requires that Var_P_0[Y_i|X_i,W_i] = σ_01^2. An extension to allow σ_01^2 to be unknown should not change the results in a meaningful way provided that the prior for σ_1^2 is supported on a compact interval [a,b] ⊂ (0,∞) with density that is bounded away from 0 because unknown σ_01^2 does not change the reparametrization because the least favorable curve is still η_β(W_i)= m_01(W_i)-β m_02(W_i). A heteroskedastic partially linear model is beyond scope because it fundamentally changes the profile likelihood problem. It should be noted that the insight that the posterior delivers an efficient estimator for β_0 even under misspecification means that our findings complement <cit.>, where the robustness properties of the Gaussian distribution are used to obtain an efficient Bayesian estimator of the coefficients in a linear regression model with nonparametric heteroskedasticity. § CONCLUSION We prove a semiparametric Bernstein-von Mises theorem for the partially linear model that bypasses a potentially fragile prior invariance condition. The key device is a reparametrization of the model that incorporates information about the least favorable direction in a manner analogous to profile likelihood. As a result, prior invariance is automatically satisfied because there is no loss of information in not knowing the nuisance parameter. The findings complement recent literature that uses data-driven dependent priors to overcome these prior stability conditions. Moreover, the results provide another example of a situation where Bayesian credible sets coincide with asymptotically efficient frequentist confidence sets even though the sampling model is misspecified. We are currently exploring the extent to which the profiling approach can be applied to a general class of strict semiparametric models. § PROOF OF THEOREM <REF> Suppose that Assumption <ref> holds. Then, for any β∈ℬ fixed, m_0=(m_01,m_02) is the unique solution to the minimization problem min_m ∈ℳ𝔼_P_0[KL(p_0(·|W_i),𝒩(m(W_i),V(β)))], where KL(p_0(·|W_i),𝒩(m(W_i),V(β))) is the KL divergence between p_0(·|W_i) and 𝒩(m(W_i),V(β)). Moreover, (β_0,m_0) is the unique solution to the minimization problem min_(β,m) ∈ℬ×ℳ𝔼_P_0[KL(p_0(·|W_i),𝒩(m(W_i),V(β)))]. Outline. The proof has three steps. Step 1 relates the objective function to a weighted least squares criterion. Step 2 argues that the least squares criterion is uniquely minimized at m_0. Step 3 uses the results from the first two steps to prove the second statement of the theorem. Step 1. Let p_β,m(· |W_i) denote the probability density function of 𝒩(m(W_i),V(β)). First, we note that E_P_0[KL(p_0(·|W_i),𝒩(m(W_i),V(β)))] = E_P_0[logp_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] =E_P_0[log p_0(Y_i,X_i|W_i)]-E_P_0[log p_β,m(Y_i,X_i|W_i)] Let Z_i = (Y_i,X_i)'. Applying the formula for a bivariate normal probability density function, we have that p_β,m(Y_i,X_i|W_i) = 1/√((2π)^2 V(β))exp(-1/2(Z_i-m(W_i))'V(β)^-1(Z_i-m(W_i))) which implies that E_P_0[log p_β,m(Y_i,X_i|W_i)] = -1/2log 4 π^2 - 1/2log V(β) - 1/2E_P_0[(Z_i-m(W_i))'V(β)^-1(Z_i-m(W_i))]. Assumption <ref> and ℳ⊆ L_2(𝒲)× L_2(𝒲) ensures that E_P_0|log p_0(Y_i,X_i|W_i)| and E_P_0|log p_β,m(Y_i,X_i|W_i)| are finite. The former is a direct application of Part 4 of Assumption <ref> and the latter holds because E_P_0[Y_i^2] ∈ (0,∞), E_P_0[X_i^2] ∈ (0,∞), and V(β) >0 for any β∈ℬ as σ_01^2,σ_02^2∈ (0,∞). Consequently, there exists a constant C ∈ (-∞,∞) that does not depend on m such that E_P_0[KL(p_0(·|W_i),𝒩(m(W_i),V(β)))] = C+1/2E_P_0[(Z_i-m(W_i))'V(β)^-1(Z_i-m(W_i))]. Next, we note that Z_i = m_0(W_i)+ε_i with ε_i = Z_i-m_0(W_i) being orthogonal to any measurable function of W_i. As such, we can write 1/2E_P_0[(Z_i-m(W_i))'V(β)^-1(Z_i-m(W_i))] = 1/2E_P_0[(ε_i+m_0(W_i)-m(W_i))'V(β)^-1(ε_i+ m_0(W_i)-m(W_i))] =1/2E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))] +1/2E_P_0[ε_i'V(β)^-1ε_i] Part 4 of Assumption <ref> and the observation that V(β)^-1 is symmetric and positive-definite guarantees that E_P_0[ε_i'V(β)^-1ε_i] ∈ (0,∞). Positive-definiteness of V(β) follows from Sylvester's criterion because σ_01^2+β^2σ_01^2>0 and V(β) = σ_01^2σ_02^2>0. Consequently, we can combine the above with (<ref>) to conclude that there exists a constant C ∈ (-∞,∞) that does not depend on m such that E_P_0[KL(p_0(·|W_i),𝒩(m(W_i),V(β)))] = C + 1/2E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))]. Consequently, it suffices to solve min_m ∈ℳE_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))]. Step 2. We establish that m_0 uniquely solves (<ref>). First, m_0 minimizes the objective function because E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))]≥ 0 for all m ∈ℳ with equality at m = m_0. For uniqueness, we note that the positive-definiteness of V(β)^-1 implies E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))] ≥λ_min(V(β)^-1) {E_P_0[(m_1(W_i)-m_01(W_i))^2] + E_P_0[(m_2(W_i)-m_02(W_i))^2] } and E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))] ≤λ_max(V(β)^-1) {E_P_0[(m_1(W_i)-m_01(W_i))^2] + E_P_0[(m_2(W_i)-m_02(W_i))^2] }, where λ_min(V(β)^-1)>0 is the smallest eigenvalue of V(β)^-1 and λ_max(V(β)^-1)>0 is the largest eigenvalue of V(β)^-1. Since E_P_0[(m_1(W_i)-m_01(W_i))^2] + E_P_0[(m_2(W_i)-m_02(W_i))^2] >0 if and only if P_0(m_j(W_i) ≠ m_0j(W_i))>0 for some j ∈{1,2}, we conclude m_0 is the unique minimizer (in an almost-sure sense). Indeed, the existence of another minimizer m^*∈ℳ such that P_0(m_0(W_i) ≠ m^*(W_i))>0 leads to a contradiction because the lower bound (<ref>) is positive and, as a result, m^* cannot be a solution to (<ref>). Step 3. We now prove the second assertion of the theorem. Based on the results of Steps 1 and 2, it is sufficient to solve min_β∈ℬE_P_0[KL(p_0(·|W_i),𝒩(m_0(W_i),V(β)))]. Moreover, the factorization of a bivariate normal distribution into its conditional and marginal distributions further simplifies the problem to solving min_β∈ℬE_P_0[1/2σ_01^2(Y_i-m_01(W_i)-(X_i-m_02(W_i))β)^2] Adding and subtracting (X_i-m_02(W_i))β_0 and invoking Part 4 of Assumption <ref>, we can find a constant C ∈ (-∞,∞) such that E_P_0[1/2σ_01^2(Y_i-m_01(W_i)-(X_i-m_02(W_i))β)^2] = C + 1/2σ_01^2(β-β_0)^2E_P_0[(X_i-m_02(W_i))^2], and, as a result, it suffices to solve the least squares problem min_β∈ℬ{1/2σ_01^2(β-β_0)^2E_P_0[(X_i-m_02(W_i))^2]}. As the objective function is nonnegative and equal to 0 when β = β_0, it follows that β_0 is a solution to the minimization problem (<ref>). Uniqueness follows from Part 2 of Assumption <ref> because the restriction E_P_0[(X_i-m_02(W_i))^2] ∈ [c,c] implies that the objective function is equal to 0 if and only if β = β_0. This completes the proof. § PROOF OF THEOREM <REF> Let {ℳ_n∩ B_ℳ(m_0,Kδ_n)}_n=1^∞ be the sets defined in Assumption <ref>. If Assumptions <ref>, <ref>, and <ref> hold, then * The following holds uniformly over ℳ_n∩ B_ℳ(m_0,Kδ_n): 1/√(n)ℓ̃_n(β_0,m)=G̃_n(m_0)+o_P_0(1). * For M_n→∞ arbitrarily slowly, the following expansion of the log-likelihood function holds uniformly over B_ℬ(β_0,M_n/√(n))× (ℳ_n∩ B_ℳ(m_0,Kδ_n)): ℓ_n(β,m) - ℓ_n(β_0,m) = G̃_n(m_0)√(n)(β-β_0) - 1/2Ĩ_n(m_0)n(β-β_0)^2 + o_P_0(1). * Let c_0=c/σ_01^2 and c/σ_01^2, where c and c are the constants Part 2 of Assumption <ref>. The following holds with P_0-probability approaching 1 as n→∞: c_0≤inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m)≤sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m) ≤c_0. Outline. The proof has three steps. Step 1 verifies the first part of the lemma. Step 2 establishes the second part of the lemma. Step 3 proves the third part of the lemma. Step 1. Adding and subtracting E_P_0[ℓ̃_n(β_0,m)] and then G̃_n(m_0), we have that 1/√(n)ℓ̃_n(β_0,m) = G̃_n(m) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)] =G̃_n(m)-G̃_n(m_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)] + G̃_n(m_0). Consequently, the triangle inequality and subadditivity of supremum yields sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) | 1/√(n)ℓ̃_n(β_0,m) - G̃_n(m_0) | ≤sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) |G̃_n(m)-G̃_n(m_0) | + sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|1/√(n)E_P_0[ℓ̃_n(β_0,m)]| Part 1 of Assumption <ref> guarantees that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) |G̃_n(m)-G̃_n(m_0) | P_0→ 0, and, as a result, it remains to show that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|1/√(n)E_P_0[ℓ̃_n(β_0,m)]|→ 0 in order to complete the proof of Part 1 of the lemma. Using the definition of ℓ̃_n(β_0,m) and the assumption of i.i.d sampling, we have that E_P_0[ℓ̃_n(β_0,m)] = n/σ_01^2E_P_0[(Y_i-m_1(W_i)-(X_i-m_2(W_i))β_0)(X_i-m_2(W_i))] =n/σ_01^2E_P_0[(m_01(W_i)-m_1(W_i)-(m_02(W_i)-m_2(W_i))β_0+U_i)(m_02(W_i)-m_2(W_i)+ε_i)], where U_i = Y_i - E_P_0[Y_i|X_i,W_i] and ε_i = X_i-E_P_0[X_i|W_i]. Applying the law of iterated expectations and using orthogonality properties of projection errors, one can show that E_P_0[U_iε_i] = 0, E_P_0[U_i(m_02(W_i)-m_2(W_i))] = 0, and E_P_0[ε_i(m_01(W_i)-m_1(W_i)-(m_02(W_i)-m_2(W_i))β_0)]=0. Consequently, E_P_0[ℓ̃_n(β_0,m)] = n/σ_01^2E_P_0[(m_01(W_i)-m_1(W_i)-(m_02(W_i)-m_2(W_i))β_0)(m_02(W_i)-m_2(W_i))], and, as a result, 1/√(n)|E_P_0[ℓ̃_n(β_0,m)]| ≲√(n)||m_1-m_01||_L_2(𝒲)||m_2-m_02||_L_2(𝒲) + √(n)||m_2-m_02||_L_2(𝒲)^2 ≲√(n)||m-m_0||_L_2(𝒲)× L_2(𝒲)^2 by the Cauchy-Schwarz inequality. Consequently, sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|1/√(n)E_P_0[ℓ̃_n(β_0,m)]|≲sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)√(n)||m-m_0||_L_2(𝒲)× L_2(𝒲)^2≤√(n)δ_n^2 As Assumption <ref> implies √(n)δ_n^2→ 0, it follows that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|1/√(n)E_P_0[ℓ̃_n(β_0,m)]| → 0 as n→∞. We conclude Part 1 of the Lemma. Step 2. Let M_n→∞ arbitrarily slowly. Using the definition of p_β,m and adding/subtracting G̃_n(m_0) and Ĩ_n(m_0), we have that ℓ_n(β,m) - ℓ_n(β_0,m) = 1/√(n)ℓ̃_n(β_0,m)√(n)(β-β_0) - 1/2Ĩ_n(m)n(β-β_0)^2 =(1/√(n)ℓ̃_n(β_0,m)-G̃_n(m_0))√(n)(β-β_0) + G̃_n(m_0)√(n)(β-β_0) - 1/2(Ĩ_n(m)-Ĩ_n(m_0))n(β-β_0)^2 -1/2Ĩ_n(m_0)n(β-β_0)^2 As such, we have that | ℓ_n(β,m) - ℓ_n(β_0,m) - G̃_n(m_0)√(n)(β-β_0) + 1/2Ĩ_n(m_0)n(β-β_0)^2 | ≤ | 1/√(n)ℓ̃_n(β_0,m)-G̃_n(m_0)|·|√(n)(β-β_0)| + 1/2|Ĩ_n(m)-Ĩ_n(m_0)|· n(β-β_0)^2 for all (β,m) ∈ B_ℬ(β_0,M_n/√(n))× (ℳ_n∩ B_ℳ(m_0,Kδ_n)). This implies that sup_(β,m) ∈ B_ℬ(β_0,M_n/√(n))× (ℳ_n∩ B_ℳ(m_0,Kδ_n)) | ℓ_n(β,m) - ℓ_n(β_0,m) - G̃_n(m_0)√(n)(β-β_0) + 1/2Ĩ_n(m_0)n(β-β_0)^2 | ≤sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) | 1/√(n)ℓ̃_n(β_0,m)-G̃_n(m_0)|M_n + 1/2sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m)-Ĩ_n(m_0)|M_n^2. By Part 1 of the Lemma and Part 2 of Assumption <ref>, there exists a sequence of positive constants c_n such that c_n→ 0 and max{sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) | 1/√(n)ℓ̃_n(β_0,m)-G̃_n(m_0)|,sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m)-Ĩ_n(m_0)| }≤ c_n with P_0-probability approaching 1. By selecting M_n→∞ arbitrarily slowly, we have that c_n(M_n+M_n^2)→ 0, and, as a result, we are able to conclude that ℓ_n(β,m) - ℓ_n(β_0,m) = G̃_n(m_0)√(n)(β-β_0) - 1/2Ĩ_n(m_0)n(β-β_0)^2 + o_P_0(1) holds uniformly over B_ℬ(β_0,M_n/√(n))× (ℳ_n∩ B_ℳ(m_0,Kδ_n)). This completes the proof of Part 2 of the lemma. Step 3. We first show that Ĩ_n(m) = Ĩ_n(m_0) + o_P_0(1) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). Adding/subtracting Ĩ_0(m) and Ĩ_0(m_0), we have that |Ĩ_n(m)-Ĩ_n(m_0)| = |Ĩ_n(m)-Ĩ_0(m)+Ĩ_0(m)-Ĩ_0(m_0)+Ĩ_0(m_0)-Ĩ_n(m_0)| ≤ 2 sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m)-Ĩ_0(m)|+ sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_0(m)-Ĩ_0(m_0)| with the inequality applying the triangle inequality and the definition of the supremum. As this upper bound does not depend on m, it follows that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m)-Ĩ_n(m_0)| ≤ 2 sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m)-Ĩ_0(m)| + sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_0(m)-Ĩ_0(m_0)| Applying Part 2 of Assumption <ref>, we have that 2 sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m)-Ĩ_0(m)| P_0→ 0 as n→∞. Moreover, we use the definition of Ĩ_0(m) to obtain sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_0(m)-Ĩ_0(m_0)| = 1/σ_01^2sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)E_P_0[(m_2(W_i)-m_02(W_i))^2] ≲δ_n^2 with the inequality reflecting E_P_0[(m_2(W_i)-m_02(W_i))^2] = ||m_2-m_02||_L_2(𝒲)^2≤ ||m-m_0||_L_2(𝒲)× L_2(𝒲)^2≤δ_n^2 for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). As δ_n→ 0, it follows that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_0(m)-Ĩ_0(m_0)| → 0 as n→∞. Consequently, we have that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|Ĩ_n(m) - Ĩ_n(m_0)|P_0→ 0 as n→∞. This implies the existence of a sequence of nonnegative numbers {c_n}_n=1^∞ such that c_n→ 0, and, with P_0-probability approaching 1, -c_n≤Ĩ_n(m) -Ĩ_n(m_0) ≤ c_n holds uniformly over ℳ_n∩ B_ℳ(m_0,Kδ_n). As a result, we are able to conclude that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m) ≤Ĩ_n(m_0) + c_n and inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m) ≥Ĩ_n(m_0) - c_n with P_0-probability approaching 1. As Ĩ_n(m_0) P_0→Ĩ_0(m_0) by the weak law of large numbers and Ĩ_0(m_0) ∈ [c_0,c_0] by Part 2 of Assumption <ref>, it follows that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m) ≤c_0+ c_n and inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m) ≥c_0-c_n with P_0-probability approaching 1. This completes the proof of the third part of the lemma because c_n→ 0 as n→∞. Let {ℳ_n∩ B_ℳ(m_0,Kδ_n)}_n=1^∞ be the sets defined in Assumption <ref>. If Assumptions <ref>, <ref>, <ref>, and <ref> hold, then for every M_n→∞ sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) P_0→ 0 as n→∞. Outline. The proof is comprised of three steps. Step 1 obtains an expression for the posterior based on a quadratic expansion of the log-likelihood function. Step 2 shows that the numerator is uniformly bounded from above by exp(-(c_0/4) M_n^2) with P_0-probability approaching 1, where c_0>0 is the constant defined in Part 3 of Lemma <ref>. Step 3 shows that the denominator is uniformly bounded from below by exp(-(c_0/8)M_n^2) with P_0-probability approaching 1. The arguments for Step 2 and Step 3 are similar to those encountered on pp. 233-234 and the proof of Lemma 6.3, respectively, in <cit.> with appropriate modifications to account for the different parametrization. Step 1. We start with the definition of the posterior: Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) = ∫_B_ℬ(β_0,M_n/√(n))^cexp(ℓ_n(β,m) - ℓ_n(β_0,m))dΠ_ℬ(β)/∫_ℬexp(ℓ_n(β,m) - ℓ_n(β_0,m))dΠ_ℬ(β). As p_β,m(Y_i,X_i|W_i) is the density of 𝒩(m(W_i),V(β)), we obtain the following expansion of the log-likelihood function around the true parameter β_0 ℓ_n(β,m) - ℓ_n(β_0,m) = 1/√(n)ℓ̃_n(β_0,m)√(n)(β-β_0) - 1/2n(β-β_0)^2Ĩ_n(m) = G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m). with the second equality adding and subtracting 1/√(n)E_P_0[ℓ̃_n(β_0,m)]. Consequently, Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) =∫_B_ℬ(β_0,M_n/√(n))^cexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β)/∫_ℬexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β). Step 2. Our first task is to show that, with P_0-probability approaching 1, ∫_B_ℬ(β_0,M_n/√(n))^cexp((G̃_n(m)+ 1/√(n)E_P_0[ℓ̃_n(β_0,m)])√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β) ≤exp(-c_0/4M_n^2) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). Using the definition of the supremum and that probabilities are bounded from above by 1, we have that ∫_B_ℬ(β_0,M_n/√(n))^cexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β) ≤sup_β∈ B_ℬ(β_0,M_n/√(n))^cexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m)) ≤exp(sup_β∈ B_ℬ(β_0,M_n/√(n))^c[G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m)]) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). From Part 1 of Lemma <ref>, we know that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) |G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]- G̃_n(m_0) | P_0→ 0. Consequently, with P_0-probability approaching 1, we have that G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)] ≤c_0/4M_n uniformly over m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) because G̃_n(m_0) = O_P_0(1) by the Lindeberg–Lévy Central Limit Theorem. Its application is valid because E[ℓ̃(Y_i,X_i,W_i,β_0,m_0)] = 0 and Var_P_0(ℓ̃(Y_i,X_i,W_i,β_0,m_0))= Ĩ_0(m_0) ∈ [c_0,c_0] by Part 2 of Assumption <ref>. Applying Part 3 of Lemma <ref>, we know that inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m) ≥c_0 holds with P_0-probability approaching 1. Consequently, with P_0-probability approaching 1, exp(sup_β∈ B_ℬ(β_0,M_n/√(n))^c[G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m)]) ≤exp(c_0sup_β∈ B_ℬ(β_0,M_n/√(n))^c[M_n/4√(n)(β-β_0)- 1/2n(β-β_0)^2]) Putting this all together, we conclude that with P_0-probability approaching 1: ∫_B_ℬ(β_0,M_n/√(n))^cexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β) ≤exp(-c_0/4M_n^2) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). Step 3. It remains to show that with P_0-probability approaching 1, ∫_ℬexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β) ≥exp(-c_0/8M_n^2) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). Combined with the result from Step 1, it is sufficient to conclude the result because it implies that with P_0-probability approaching 1: sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) ≤exp(-c/8M_n^2)→ 0 as n→∞. Let C = c_0/8c_0 with 0<c_0<c_0<∞ defined as in Part 3 of Lemma <ref>. As B_ℬ(β_0,√(C/n)) is contained in ℬ_0 for n large, it follows that there exists a constant C̃> 0 such that inf_β∈ B_ℬ(β_0,√(C/n))π_ℬ(β) ≥C̃. Consequently, we have that inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)∫_ℬexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β) ≥C̃inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)∫_B_ℬ(β_0,√(C/n))exp((G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0) -1/2n(β-β_0)^2Ĩ_n(m))d β for n large (with dβ symbolizing integration with respect to Lebesgue measure). As Ĩ_n(m) ∈ [c_0,c_0] for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n) holds with P_0-probability approaching 1 (Part 3 of Lemma <ref>), it follows that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Ĩ_n(m) ≤c_0M_n^2 with P_0-probability approaching 1 for any M_n→∞. Consequently, with P_0-probability approaching 1, -1/2n(β-β_0)^2Ĩ_n(m) ≥ -c_0/16M_n^2 for all (β,m) ∈ B_ℬ(β_0,√(C/n))× (ℳ_n∩ B_ℳ(m_0,Kδ_n)). Consequently, for n large, P_0(inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)∫_ℬexp(ℓ_n(β,m)-ℓ_n(β_0,m))d Π_ℬ(β) < exp(-c_0/8M_n^2)) ≤ P_0(inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)∫_B_ℬ(β_0,√(C/n))exp((G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)])√(n)(β-β_0))dβ < C̃^-1exp(-c_0/16M_n^2)) Applying Jensen's inequality, we have that ∫_B_ℬ(β_0,√(C/n))exp((G̃_n(m)+√(n)E_P_0[ℓ̃_n(β_0,m))√(n)(β-β_0))dβ ≥exp((G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)])∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). As the infimum is the greatest lower bound, we then obtain inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)∫_B(β^*(m),√(C/n))exp((G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0))dβ ≥inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)exp((G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)]∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β) ≥exp(inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)(G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)]∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β) Applying Part 1 of Lemma <ref>, we know that inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)(G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)]) = G̃_n(m_0)+o_P_0(1). Consequently, we conclude that, for n large, P_0(inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)∫_B_ℬ(β_0,√(C/n))exp((G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)])√(n)(β-β_0))d β < C̃^-1exp(-c_0/16M_n^2)) ≤ P_0(exp(inf_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)(G̃_n(m)+1/√(n)E_P_0[ℓ̃_n(β_0,m)])∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β) < C̃^-1exp(-c/16M_n^2)) ≤ P_0(exp(G̃_n(m_0)∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β)<C̃^-1exp(-c_0/16M_n^2)) = P_0(G̃_n(m_0)∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β<-logC̃-c_0/16M_n^2) ≤ P_0(G̃_n(m_0)∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β<-c_0/32M_n^2) with the last inequality holding because -logC̃≤ (c/32)M_n^2 for n large. As G̃_n(m_0)∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β<-c_0/32M_n^2 |G̃_n(m_0)∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β | > c_0/32M_n^2, we can apply Chebyshev's inequality to conclude that P_0(G̃_n(m_0)∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β<-c_0/32M_n^2 ) ≲1/M_n^4E_P_0|G̃_n(m_0)|^2 | ∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β |^2 ≲1/M_n^4E_P_0|G̃_n(m_0)|^2 with the second inequality using Jensen's inequality and that sup_β∈ B_ℬ(β_0,√(C/n))|√(n)(β-β_0)|^2≤ C. As E_P_0[G̃_n(m_0)] = 0, it follows that E_P_0|G̃_n(m_0)|^2 = Var_P_0[G̃_n(m_0)] = Var(ℓ̃(Y_i,X_i,W_i,β_0,m_0)) = Ĩ_0(m_0) ≤c_0 with the inequality reflecting Part 2 of Assumption <ref>. Consequently, we have that P_0(G̃_n(m_0)∫_B_ℬ(β_0,√(C/n))√(n)(β-β_0)d β<-c_0/32M_n^2 ) = O(M_n^-4). Putting this all together, we conclude that with P_0-probability approaching 1: ∫_ℬexp(G̃_n(m)√(n)(β-β_0) + 1/√(n)E_P_0[ℓ̃_n(β_0,m)]√(n)(β-β_0)- 1/2n(β-β_0)^2Ĩ_n(m))dΠ_ℬ(β) ≥exp(-c_0/8M_n^2) for all m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n). This completes the proof. Suppose Assumptions <ref>, <ref>, <ref>, <ref>, and <ref> hold, and that β_0∈int(ℬ). Then, | |Π(√(n)(β-β_0) ∈· |{(Y_i,X_i,W_i)}_i=1^n) - 𝒩(Δ̃_n,0,Ĩ_n(m_0)^-1) | |_TVP_0⟶ 0, as n→∞, where ||· ||_TV denotes the total variation distance. Outline. The proof is comprised of four steps. Step 1 and Step 2 utilize Assumption <ref> and Lemma <ref> to show that the total variation distance between the marginal posterior Π(β∈·{(Y_i,X_i,W_i)}_i=1^n) and the finite measure ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈·{(Y_i,X_i,W_i)}_i=1^n,β∈ B_ℬ(β_0,M_n/√(n)),m)d Π(m|{(Y_i,X_i,W_i)}_i=1^n) converges in probability to 0. Step 3 uses the assumption about the density of Π_ℬ and Part 2 of Lemma <ref> to reduce the problem to a parametric problem (where β is the only free parameter). Step 4 concludes by showing asymptotic normality. Step 1. Applying the law of total probability, we have that Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n) = ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) + ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)^cΠ(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) + ∫_ℳ_n^cΠ(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n). for any event A. As 0≤Π(β∈·|{(Y_i,X_i,W_i)}_i=1^n,m) ≤ 1, it follows that sup_A ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)^cΠ(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) ≤Π(m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)^c|{(Y_i,X_i,W_i)}_i=1^n) and sup_A ∫_ℳ_n^cΠ(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) ≤Π(m ∈ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n) Applying Assumption <ref>, it follows that sup_A ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)^cΠ(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) P_0→ 0 and sup_A ∫_ℳ_n^cΠ(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) P_0→ 0, and, as a result, sup_A | Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n)-∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) | P_0→ 0 as n→∞. As such, it suffices to analyze the behavior of the set function A ↦ F_n,1(A) with F_n,1(A) = ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) for A. Step 2. Let M_n→∞ arbitrarily slowly. Applying the law of total probability, we have that sup_A |F_n,1(A)-F_n,1(A ∩ B_ℬ(β_0,M_n/√(n)))| = sup_A F_n,1(A ∩ B_ℬ(β_0,M_n/√(n))^c) ≤sup_A sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(A ∩ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m)Π(ℳ_n∩ B_ℳ(m_0,Kδ_n)|{(Y_i,X_i,W_i)}_i=1^n) ≤sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) with the first inequality applies Hölder's inequality, and the second inequality uses that A ∩ B_ℬ(β_0,M_n/√(n))^c⊆ B_ℬ(β_0,M_n/√(n))^c for all A and that 0 ≤Π(ℳ_n∩ B_ℳ(m_0,Kδ_n)|{(Y_i,X_i,W_i)}_i=1^n) ≤ 1. Applying Lemma <ref>, we have that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) P_0→ 0 as n→∞, and, as a result, sup_A |F_n,1(A)-F_n,1(A ∩ B_ℬ(β_0,M_n/√(n)))| P_0→ 0 as n→∞. Moreover, we know that |∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)(∫_B_ℬ(β_0,M_n/√(n))L_n(β,m)d Π_ℬ(β)/∫_ℬL_n(β,m)d Π_ℬ(β) -1 ) | = ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) d Π(m|{(Y_i,X_i,W_i)}_i=1^n) ≤sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) Π(m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)|{(Y_i,X_i,W_i)}_i=1^n) ≤sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) with the first inequality applying Hölder's inequality, and the second using that probability are bounded by 1. Applying Lemma <ref>, we have that sup_m ∈ℳ_n∩ B_ℳ(m_0,Kδ_n)Π(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,m) P_0→ 0 as n→∞, and, as a result, |∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)(∫_B_ℬ(β_0,M_n/√(n))L_n(β,m)d Π_ℬ(β)/∫_ℬL_n(β,m)d Π_ℬ(β) -1 ) | P_0→ 0 as n→∞. Consequently, we are able to conclude that sup_A |F_n,1(A)-∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)(∫_A ∩ B_ℬ(β_0,M_n/√(n))L_n(β,m) d Π_ℬ(β)/∫_B_ℬ(β_0,M_n/√(n))L_n(β,m) d Π_ℬ(β))d Π(m|{(Y_i,X_i,W_i)}_i=1^n)| P_0→ 0, and that it suffices to analyze the behavior of the mapping A ↦ F_n,2(A) with F_n,2(A) = ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)(∫_A ∩ B_ℬ(β_0,M_n/√(n))L_n(β,m) d Π_ℬ(β)/∫_B_ℬ(β_0,M_n/√(n))L_n(β,m) d Π_ℬ(β))d Π(m|{(Y_i,X_i,W_i)}_i=1^n) for all A. Step 3. Exponentiating and adding/subtracting ℓ_n(β_0,m), we have that F_n,2(A)=∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)(∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(ℓ_n(β,m)-ℓ_n(β_0,m)) d Π_ℬ(β)/∫_B_ℬ(β_0,M_n/√(n))exp(ℓ_n(β,m)-ℓ_n(β_0,m)) d Π_ℬ(β))d Π(m|{(Y_i,X_i,W_i)}_i=1^n) for all A. Let R̅_n = sup_(β,m) ∈ B_ℬ(β_0,M_n/√(n))× B_ℳ(m_0,Kδ_n)exp(ℓ_n(β,m)-ℓ_n(β_0,m) - G̃_n(m_0)√(n)(β-β_0)+1/2Ĩ_n(m_0)n(β-β_0)^2) and R_n = inf_(β,m) ∈ B_ℬ(β_0,M_n/√(n))× B_ℳ(m_0,Kδ_n)exp(ℓ_n(β,m)-ℓ_n(β_0,m) - G̃_n(m_0)√(n)(β-β_0)+1/2Ĩ_n(m_0)n(β-β_0)^2) Part 2 of Lemma <ref> implies that R̅_nP_0→ 1 and R_nP_0→ 1 as n→∞. Consequently, there exists a positive sequence c_n,1 such that c_n,1→ 0 and | R̅_n/R_n - 1 | ≤ c_n,1 with P_0-probability approaching 1. Next, the assumption that Π_ℬ has a density that is continuous and positive in a neighborhood of β_0 guarantees that there exists a positive sequence c_n,2 such that c_n,2→ 0 and | sup_β∈ B_ℬ(β_0,M_n/√(n))π_ℬ(β) /inf_β∈ B_ℬ(β_0,M_n/√(n))π_ℬ(β) - 1 | ≤ c_n,2. As a result, the following inequalities hold with probability approaching 1 as n→∞: ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)(∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(ℓ_n(β,m)-ℓ_n(β_0,m)) d Π_ℬ(β)/∫_B_ℬ(β_0,M_n/√(n))exp(ℓ_n(β,m)-ℓ_n(β_0,m)) d Π_ℬ(β))d Π(m|{(Y_i,X_i,W_i)}_i=1^n) ≤ (1+c_n,1)(1+c_n,2)∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(G̃_n(m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) d β/∫_B_ℬ(β_0,M_n/√(n))exp(G̃_n(m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) d β and ∫_ℳ_n∩ B_ℳ(m_0,Kδ_n)(∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(ℓ_n(β,m)-ℓ_n(β_0,m)) d Π_ℬ(β)/∫_B_ℬ(β_0,M_n/√(n))exp(ℓ_n(β,m)-ℓ_n(β_0,m)) d Π_ℬ(β))d Π(m|{(Y_i,X_i,W_i)}_i=1^n) ≥ [(1+c_n,1)(1+c_n,2)]^-1∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(G̃_n(m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) d β/∫_B_ℬ(β_0,M_n/√(n))exp(G̃_n(m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) d β for all events A. As G̃_n(m_0) = 1/√(n)ℓ̃_n(β_0,m_0), it follows from above that sup_A | F_n,2(A)- ∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) d β/∫_B_ℬ(β_0,M_n/√(n))exp(1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) d β| P_0→ 0 as n→∞. Step 4. Completing the square exp(1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) = exp(-1/2Ĩ_n(m_0)(n(β-β_0)^2 -2Ĩ_n(m_0)^-11/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0))) ∝exp( -1/2Ĩ_n(m_0)(√(n)(β-β_0) -Ĩ_n(m_0)^-11/√(n)ℓ̃_n(β_0,m_0))^2), where `∝' reflects a term that does not depend on β. Letting Δ̃_n,0 = Ĩ_n(m_0)^-11/√(n)ℓ̃_n(β_0,m_0), it follows that ∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) d β/∫_B_ℬ(β_0,M_n/√(n))exp(1/√(n)ℓ̃_n(β_0,m_0)√(n)(β-β_0)-1/2Ĩ_n(m_0)n(β-β_0)^2) =∫_A ∩ B_ℬ(β_0,M_n/√(n))exp(-1/2Ĩ_n(m_0)(√(n)(β-β_0) -Δ̃_n,0)^2) d β/∫_ B_ℬ(β_0,M_n/√(n))exp(-1/2Ĩ_n(m_0)(√(n)(β-β_0) -Δ̃_n,0)^2) d β. for all events A. Letting 𝒩^C(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) denote a Gaussian distribution with mean β_0+Δ̃_n,0/√(n) and variance n^-1Ĩ_n(m_0)^-1 truncated to C, we conclude from (<ref>) and Steps 1,2, and 3 that | | Π(β∈· |{(Y_i,X_i,W_i)}_i=1^n) - 𝒩^B_ℬ(β_0,M_n/√(n))(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) | |_TV = o_P_0(1) Since 𝒩^B_ℬ(β_0,M_n/√(n))(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) and 𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) both admit densities with respect to the Lebesgue measure on ℝ, we have that | | 𝒩^B_ℬ(β_0,M_n/√(n))(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) -𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) | |_TV = 1/2∫ | d 𝒩^B_ℬ(β_0,M_n/√(n))(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(t) - d𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(t) | dt = 1/2 | 1/∫_B_ℬ(β_0,M_n/√(n)) d𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(s)ds - 1 |, where the first equality uses that total variation distance is 1/2 times the L_1-distance between densities, and the second equality uses that d 𝒩^B_ℬ(β_0,M_n/√(n))(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(t) = d 𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(t)/∫_B_ℬ(β_0,M_n/√(n))d 𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(s)ds. Using properties of Gaussian distributions, we have that ∫_B_ℬ(β_0,M_n/√(n)) d𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(s)ds = Φ(M_n-Δ̃_n,0/√(Ĩ_n(m_0)^-1))-Φ(-M_n-Δ̃_n,0/√(Ĩ_n(m_0)^-1)), where Φ(·) is standard normal cumulative distribution function. The central limit theorem, the law of large numbers, and the continuous mapping theorem guarantee that Δ̃_n,0d→𝒩(0,Ĩ(m_0)^-1) as n→∞. Similarly, the law of large numbers and the continuous mapping theorem guarantee that Ĩ_n(m_0)^-1P_0→Ĩ(m_0)^-1 as n→∞. As M_n→∞, Δ̃_n,0 = O_P_0(1), and Ĩ_n(m_0)^-1 = O_P_0(1), it follows from (<ref>) that ∫_B_ℬ(β_0,M_n/√(n)) d𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(s)ds = 1 + o_P_0(1). As such, we have that 1/2 | 1/∫_B_ℬ(β_0,M_n/√(n)) d𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1)(s)ds - 1 | P_0⟶ 0, and, by (<ref>), that | | 𝒩^B_ℬ(β_0,M_n/√(n))(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) -𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) | |_TVP_0⟶ 0 Consequently, we apply the triangle inequality to conclude that | | Π(β∈· |{(Y_i,X_i,W_i)}_i=1^n) - 𝒩(β_0+Δ̃_n,0/√(n),n^-1Ĩ_n(m_0)^-1) | |_TVP_0⟶ 0 As total variation is invariant to shifts in location and change in scale, it follows that (<ref>) is equivalent to | | Π(√(n)(β-β_0) ∈· |{(Y_i,X_i,W_i)}_i=1^n) - 𝒩(Δ̃_n,0,Ĩ_n(m_0)^-1) | |_TVP_0⟶ 0 . This completes the proof. § ELABORATION ON SECTION <REF> AND PROOF OF PROPOSITIONS <REF> AND <REF> §.§ Elaboration on the Example in Section <ref> We start with verification of (<ref>). First, we show that (<ref>) holds. Let ||f||_∞ = sup_w ∈ [0,1]|f(w)|. As ||·||_L_2(𝒲)≤ ||·||_∞, it follows that (Π_J×Π_J)(m ∈ B_ℳ(m_0,Kδ_n)) ≥Π_J(||m_1-m_01||_∞<δ_n)Π_J(||m_2-m_02||_∞<δ_n) Viewing {S(w): w ∈ [0,1]} as a random element of (C([0,1]),||·||_∞), we can apply Theorem 2.1 and 4.1 in <cit.> to conclude that there exists a constant C>0 such that Π(||m_j-m_0j||_∞< δ_n,j) ≥exp(-Cnδ_n,j^2) for δ_n,j = n^-α_j/2J+2 for j ∈{1,2}. Consequently, we have that there exists a constant C>0 such that Π_ℳ(m ∈ B_ℳ(m_0,Kδ_n)) ≥exp(-C nδ_n^2) for δ_n = max{n^-α_1/2J+2, n^-α_2/2J+2}. Next we verify the entropy condition (<ref>). Assume that α_2< α_1. As ||· ||_L_2(𝒲)≤ ||·||_∞, it follows that open balls with respect to L_2-norm contain open balls with respect to supremum norm, and, as a result, log N(δ_n,ℳ,||·||_ℳ,2) ≤log N(δ_n,C_r_1^α_1,||·||_∞) + log N(δ_n,C_r_2^α_2,||·||_∞). Applying Theorem 2.7.1 of <cit.>, we have that log N(δ_n,C_r_j^α_j,||·||_∞) ≲δ_n^-1/α_j for each j ∈{1,2}, and, as a result, log N(δ_n,ℳ,||·||_ℳ,2) ≲ nδ_n^2 if δ_n^-(2α_j+1)/α_j≤ n for each j ∈{1,2}. If α_j = α_2, then the condition simplifies to n^2α_2+1/2J+2≤ n and holds if α_2≤ J+1/2. If α_j = α_1, then δ_n^-(2α_1+1)/α_1 = n^2α_1+1/2J+2·α_2/α_1≤ n^2α_1+1/2J+2 because α_2/α_1∈ (0,1). We then conclude that the condition holds if 2α_1+1 ≤ 2J +2, which is equivalent to α_1≤ J+ 1/2. Consequently, J + 1/2≥max{α_1,α_2} implies (<ref>). A symmetric argument can be applied for the case with α_2≥α_1. We now verify Assumption <ref>. Suppose that α_2<α_1. Then √(n)δ_n^2 = n^1/2-2α_2/2J+2. Consequently, √(n)δ_n^2→ 0 if and only if 1/2<2α_2/2J+2, which is equivalent to J < 2 α_2 - 1/2. Similarly, α_1<α_2 implies δ_n = n^-α_1/2J+2, and, as a result, √(n)δ_n^2→ 0 if and only if 1/2-2α_1/2J+2< 0 if and only if J < 2 α_1-1/2. Assumption <ref> is therefore satisfied if J < 2 min{α_1,α_2}-1/2 for the case where α_2<α_1. In regards to Assumption <ref>, the class ℳ is P_0-Glivenko-Cantelli by Corollary 2.7.2 in <cit.>. Consequently, one use preservation theorems to conclude that {(X_i-m(W_i))^2:m ∈ℳ} is P_0-Glivenko-Cantelli (e.g., <cit.>). Consequently, Part 2 of Assumption <ref> holds. For δ>0, the class {ℓ̃(·β_0,m) - ℓ̃(·,β_0,m_0): m ∈ℳ∩ B_ℳ(m_0,δ)} is P_0-Donsker and the uniform boundedness of ℳ implies m↦ℓ(·,β_0,m) is mean square continuous. Consequently, Lemma 3.3.5 in <cit.> can be applied to verify Part 1 of Assumption <ref>. §.§ Proof of Proposition <ref> Under Assumption <ref> and Part 1 of Assumption <ref>, the following conditions are sufficient for Part 1 of Assumption <ref>: * For any constant C>0, Π(E_P_0[log(p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i))]≤ C) > 0 * For any constant C>0, Π_ℳ(ℳ_n^c)≤exp(-nC). Outline. The proof adapts the argument of Theorem 6.17 in <cit.> and has two steps. Step 1 derives a lower bound for the marginal likelihood function and Step 2 uses this lower bound to derive an upper bound for Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n) that converges in probability to 0. Step 1. Let C= E_P_0[KL(p_0(·|W_i),p_β_0,m_0(·|W_i))] ∈ (0,∞) and let A_n be the event that ∫_ℬ×ℳL_n(β,m)/L_n,0d Π(β,m) ≥Π(E_P_0[logp_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C)exp(-2Cn). with L_n,0 = ∏_i=1^np_0(Y_i,X_i|W_i). We argue that P_0(A_n)→ 1 as n→∞. As the integrand is nonnegative, we have that ∫_ℬ×ℳL_n(β,m)/L_n,0d Π(β,m) ≥∫_{(β,m): E_P_0[log [p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C}L_n(β,m)/L_n,0d Π(β,m) As Condition 1 of the proposition guarantees that Π(E_P_0[logp_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C)>0, the renormalized restriction Π_0,C of Π to {(β,m): E_P_0[log [p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C} is well-defined. Consequently, it follows that log∫_{(β,m): E_P_0[log [p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C}L_n(β,m)/L_n,0d Π(β,m) = logΠ(E_P_0[logp_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C)+ log∫_ℬ×ℳL_n(β,m)/L_n,0 dΠ_0,C(β,m) ≥logΠ(E_P_0[logp_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C) + ∫_ℬ×ℳlog[L_n(β,m)/L_n,0] dΠ_0,C(β,m) with the inequality following from an application of Jensen's inequality. Under i.i.d sampling, we have that ∫_ℬ×ℳlog[L_n(β,m)/L_n,0] dΠ_0,C(β,m) = -n 1/n∑_i=1^n(∫_ℬ×ℳlog[p_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] d Π_0,C(β,m)) By the weak law of large numbers, we have that 1/n∑_i=1^n(∫_ℬ×ℳlog[p_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] d Π_0,C(β,m)) P_0→ E_P_0( ∫_ℬ×ℳlog[p_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] d Π_0,C(β,m)). as n→∞. Applying Fubini's theorem and then dividing/multiplying by p_β_0,m_0(Y_i,X_i|W_i), we have that E_P_0∫_ℬ×ℳlog[p_0(Y_i,X_i|W_i)/p_β_0,m_0(Y_i,X_i|W_i)] d Π_0,C(β,m) = ∫_ℬ×ℳ E_P_0[KL(p_0(·|W_i),p_β,m(Y_i,X_i|W_i)] d Π_0,C(β,m) =E_P_0[KL(p_0(·|W_i),p_β_0,m_0(Y_i,X_i|W_i)] + ∫_ℬ×ℳ E_P_0[log(p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i))] d Π_0,C(β,m) ≤ 2C with the inequality holding because E_P_0[KL(p_0(·|W_i),p_β_0,m_0(·|W_i))] = C and Π_0,C(E_P_0[log(p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i))] ≤ C) =1 by definition of Π_0,C. Combining this with (<ref>), the following holds with P_0-probability approaching 1: ∫_ℬ×ℳlog[L_n(β,m)/L_n,0] dΠ_0,C(β,m) ≥ -2Cn, and, as a result, we are able to conclude from (<ref>) and (<ref>) that ∫_ℬ×ℳL_n(β,m)/L_n,0d Π(β,m) ≥Π(E_P_0[logp_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] ≤ C)exp(-2Cn) with P_0-probability approaching 1 as n→∞. In other words, P_0(A_n) → 1 as n→∞. Step 2. Recognizing that the sample space can be partitioned into A_n and its complement, we have that E_P_0[Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n)] =E_P_0[1{A_n}Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n)] +E_P_0[1{A_n^c}Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n)] ≤ E_P_0[1{A_n}∫_ℳ_n^c∫_ℬL_n(β,m)/L_n,0d Π_ℬ(β)d Π_ℳ(m)/∫_ℬ×ℳL_n(β,m)/L_n,0d Π(β,m)] + E_P_0[1{A_n^c}] ≲exp(2Cn) E_P_0[∫_ℳ_n^c∫_ℬL_n(β,m)/L_n,0d Π_ℬ(β)d Π_ℳ(m)] + P_0(A_n^c) = exp(2Cn) Π_ℳ(ℳ_n^c) + P_0(A_n^c) where the first inequality uses that probabilities are bounded from above by 1, the second uses the bound obtained from Step 1, and the final equality applies Fubini's theorem and the fact that E_P_0[L_n(β,m)/L_n,0] = 1. Applying the result from Step 1, we have that P_0(A_n^c) = o(1), and, as a result, the proof is complete by invoking Condition 2 of the proposition because Π_ℳ(ℳ_n^c) ≤exp(-C'n) for any C'>2C. §.§ Proof of Proposition <ref> Suppose that Assumption <ref> holds and ℬ is compact. Then there exist constants C_1, C_2>0 that do not depend on β or m such that C_1(||m_1-m_01||_L_2(𝒲)^2+||m_2-m_02||_L_2(𝒲)^2+|β-β_0|^2) ≤ E_P_0[log p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] and C_2(||m_1-m_01||_L_2(𝒲)^2+||m_2-m_02||_L_2(𝒲)^2+|β-β_0|^2) ≥ E_P_0[log p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] for all (β,m) ∈ℬ×ℳ. Outline. The proof is comprised of two steps. Step 1 establishes the first inequality and Step 2 establishes the second inequality. Step 1. First, we claim that there exists a constant C_11>0 such that ||m_1-m_01||_L_2(𝒲)^2 + ||m_2-m_02||_L_2(𝒲)^2≤ C_11 E_P_0[log p_β,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] for all (β,m) ∈ℬ×ℳ. To see this, we first observe that compactness of ℬ and continuity of β↦ V(β) implies {V(β): β∈ℬ} forms a compact set of symmetric and positive-definite matrices. As a result, there exists 0<λ < λ< ∞ such that λ≤λ_min(V(β)) ≤λ_max(V(β)) ≤λ for all β∈ℬ with λ_min(V(β)) and λ_max(V(β)) used to denote the minimum and maximum eigenvalues, respectively, of the matrix V(β). As ||m_1-m_01||_L_2(𝒲)^2 + ||m_2-m_02||_L_2(𝒲)^2 = E_P_0[(m(W_i)-m_0(W_i))'I_2(m(W_i)-m_0(W_i))] and 1/λ≤λ_min(V(β)^-1) ≤λ_max(V(β)^-1) ≤ 1/λ for all β∈ℬ, it follows that ||m_1-m_01||_L_2(𝒲)^2 + ||m_2-m_02||_L_2(𝒲)^2 = 2λ/2λ E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))] ≤ 2λ1/2λ_min(V(β)^-1) E_P_0[(m(W_i)-m_0(W_i))'(m(W_i)-m_0(W_i))] ≤ 2λ1/2 E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))] = 2 λ E_P_0[log p_β,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] for all (β,m) ∈ℬ. Let C_11 = 2 λ to conclude the claim. Next, we claim there exists a constant C_12>0 such that |β-β_0|^2≤ C_12E_P_0[log p_β_0,m_0(Y_i,X_i|W_i)/p_β,m_0(Y_i,X_i|W_i)] for all β∈ℬ. Recognizing that 𝒩(m(W_i),V(β)) = 𝒩(m_1(W_i)+ (X_i-m_2(W_i))β,σ_01^2) ×𝒩(m_2(W_i),σ_02^2), it follows that E_P_0[log p_β_0,m_0(Y_i,X_i|W_i)/p_β,m_0(Y_i,X_i|W_i)] = 1/2E_P_0[(β-β_0)^2(X_i-m_02(W_i))^2] = 1/2E_P_0[(X_i-m_02(W_i))^2]|β-β_0|^2. Applying Part 2 of Assumption <ref>, we conclude that for C_12 = 2/c: |β-β_0|^2≤ C_12E_P_0[log p_β_0,m_0(Y_i,X_i|W_i)/p_β,m_0(Y_i,X_i|W_i)] . for all β∈ℬ. Setting C_1 = (max{C_11,C_12})^-1, it follows that C_1(|β-β_0|^2 + ||m_1-m_01||_L_2(𝒲)^2+||m_2-m_02||_L_2(𝒲)^2) ≤ E_P_0[logp_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)], for all (β,m) ∈ℬ×ℳ and, as a result, we conclude the first claim. Step 2. First, we claim there exists a constant C_21>0 such that ||m_1-m_01||_L_2(𝒲)^2+||m_2-m_02||_L_2(𝒲)^2≥ C_21 E_P_0[log p_β,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)]. The argument is symmetric to Step 1. Indeed, {V(β): β∈ℬ} is a compact set of matrices, and, as a result, there exists λ,λ∈ (0,∞) such that λ≤λ_min(V(β)) ≤λ_max(V(β)) ≤λ for all β∈ℬ. Recognizing that ||m_1-m_01||_L_2(𝒲)^2+||m_2-m_02||_L_2(𝒲)^2 = E_P_0[(m(W_i)-m_0(W_i))'I_2(m(W_i)-m_0(W_i))], and that 1/λ≤λ_min(V(β)^-1) ≤λ_max(V(β)^-1) ≤ 1/λ for all β∈ℬ, it follows that ||m_1-m_01||_L_2(𝒲)^2+||m_2-m_02||_L_2(𝒲)^2 = 2λ/2λE_P_0[(m(W_i)-m_0(W_i))'I_2(m(W_i)-m_0(W_i))] ≥ 2λλ_max(V(β)^-1)1/2E_P_0[(m(W_i)-m_0(W_i))'I_2(m(W_i)-m_0(W_i))] ≥ 2λ1/2E_P_0[(m(W_i)-m_0(W_i))'V(β)^-1(m(W_i)-m_0(W_i))] =2λE_P_0[log p_β,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)]. for all (β,m) ∈ℬ×ℳ. Set C_21 = 2λ. Next, a symmetric argument to that encountered in Step 1 invoking Part 2 of Assumption <ref> can be used to conclude the existence of a constant C_22∈ (0,∞) that does not depend on β or m such that |β-β_0|^2≥ C_22E_P_0[log p_β_0,m_0(Y_i,X_i|W_i)/p_β,m_0(Y_i,X_i|W_i)] for all β∈ℬ. Indeed, set C_22 = 2/c with c∈ (0,∞) defined in Part 2 of Assumption <ref>. Defining C_2 = (min{C_21,C_22})^-1, it follows that C_2(||m_1-m_01||_L_2(𝒲)^2+||m_2-m_02||_L_2(𝒲)^2+|β-β_0|^2) ≥ E_P_0[log p_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)] for all (β,m) ∈ℬ×ℳ. Suppose Assumption <ref>, Part 1 of Assumption <ref>, and the following conditions hold: * ℬ is a compact subset of (ℝ,|· | ). * There exists a sequence δ_n such that n δ_n≥ 1, nδ_n^2→∞, and Π((β,m) ∈ V_2((β_0,m_0),δ_n)≥exp(-n δ_n^2C̃) for n sufficiently large and some constant C̃>0. * There exists a sequence {r_n}_n=1^∞ such that r_n→ 0, n r_n^2→∞, and, for any constant C>0, P_0(sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kr_n/√(2))^c|W_n(β,m)-W_n(β_0,m_0)| ≥ CK^2r_n^2) → 0 as n→∞. Then Part 2 of Assumption <ref> holds for δ_n = max{r_n,δ_n}. If the following condition also holds E_P_0[p_β,m(Y_i,X_i|W_i)/p_β_0,m_0(Y_i,X_i|W_i)]^nΠ_ℳ(ℳ_n^c)/Π((β,m) ∈ V_2((β_0,m_0),δ_n)) = o(exp(-2n δ_n^2)), then Part 1 of Assumption <ref> holds. Outline. The proof has three steps. Step 1 relates B_ℳ(m_0,δ_n) to B_ℬ×ℳ((β_0,m_0),δ_n/√(2)). In Step 2, we use Condition 2 to derive a lower bound for the denominator of the posterior. Step 3 derives an upper for the posterior probability and utilizes Condition 3 to verify Part 2 of Assumption <ref>. Step 4 uses the additional condition to verify Part 1 of Assumption <ref>. Step 1. First, we use that the maximum of two nonnegative numbers is less than or equal to their sum and the Cauchy-Schwarz inequality to obtain the following string of inequalities ||m-m_0||_ℳ,2 ≤ ||m_1-m_01||_L_2(𝒲)+ ||m_2-m_02||_L_2(𝒲) ≤√(2)√(||m_1-m_01||_L_2(𝒲)^2+ ||m_2-m_02||_L_2(𝒲)^2). Consequently, we obtain ||m-m_0||_ℳ,2≥δ_n ||m_1-m_01||_L_2(𝒲)^2+ ||m_2-m_02||_L_2(𝒲)^2≥δ_n^2/2 As |β-β_0|^2≥ 0, we can further conclude that ||m-m_0||_ℳ,2≥δ_n |β-β_0|^2+ ||m_1-m_01||_L_2(𝒲)^2+ ||m_2-m_02||_L_2(𝒲)^2≥δ_n^2/2. Consequently, it is sufficient to study the behavior of Π(ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^c |{(Y_i,X_i,W_i)}_i=1^n). Step 2. Let A_n be the event that ∫_ℬ×ℳL_n(β,m)/L_n(β_0,m_0)d Π(β,m) ≥Π((β,m) ∈ V_2((β_0,m_0),δ_n))exp(-2nδ_n^2). for δ_n≤δ_n, and nδ_n≥ 1. By Lemma 8.37 in <cit.>, we have that P_0(A_n) → 1 as n→∞, and, as a result, we apply the following decomposition: Π(ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^c |{(Y_i,X_i,W_i)}_i=1^n) = ∫_ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(ℓ_n(β,m)-ℓ_n(β_0,m_0))d Π(β,m)/∫_ℬ×ℳexp(ℓ_n(β,m)-ℓ_n(β_0,m_0))d Π(β,m) ≤1{A_n^c} + exp(2nδ_n^2)∫_ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(ℓ_n(β,m)-ℓ_n(β_0,m_0))d Π(β,m)/Π((β,m) ∈ V_2((β_0,m_0),δ_n)) Consequently, if Π((β,m) ∈ V_2((β_0,m_0),δ_n))≥exp(-C̃n δ_n^2) (i.e., Condition 2), then Π(ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^c |{(Y_i,X_i,W_i)}_i=1^n) ≤1{A_n^c}+ exp(nδ_n^2(2+C̃))∫_ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(ℓ_n(β,m)-ℓ_n(β_0,m_0))d Π(β,m) As such, it remains to analyze the behavior of the integral. Step 3. Applying Hölder's inequality and the fact that probabilities are bounded from above by 1, we have that ∫_ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(ℓ_n(β,m)-ℓ_n(β_0,m_0))d Π(β,m) ≤sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(ℓ_n(β,m)-ℓ_n(β_0,m_0)) = sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/2√(2))^cexp(n [W_n(β,m)-W_n(β_0,m_0)]-nE_P_0[logp_β_0,m_0(Y_i,X_i|W_i)/p_β,m(Y_i,X_i|W_i)]) ≤exp(-CK^2/2nδ_n^2)sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(n [W_n(β,m)-W_n(β_0,m_0)]) with the last inequality applying Lemma <ref> (valid because of Condition 1). As r_n≤δ_n, we have that sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(n [W_n(β,m)-W_n(β_0,m_0)]) ≤sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kr_n/√(2))^cexp(n [W_n(β,m)-W_n(β_0,m_0)]). because d((β,m),(β_0,m_0)) ≥ Kδ_n/√(2) implies d((β,m),(β_0,m_0)) ≥ Kr_n/√(2). Consequently, we can apply Condition 3 to conclude that sup_(β,m) ∈ℳ_n∩ B_ℬ×ℳ((β_0,m_0),Kδ_n/√(2))^cexp(ℓ_n(β,m)-ℓ_n(β_0,m_0)) ≤exp(-CK^2/4nδ_n^2) with P_0-probability approaching 1. Putting things together, we have that Π(ℳ_n∩ B_ℬ×ℳ((β_0,m_0),δ_n/√(2))^c |{(Y_i,X_i,W_i)}_i=1^n) ≤exp(nδ_n^2(2+C̃)-CK^2/4nδ_n^2) ≤exp(n δ_n^2((2+C̃)-CK^2/4)) Setting K>0 sufficiently large so that 2+C̃< C K^2/4, we conclude that Part 2 of Assumption <ref> holds because n δ_n→∞. Step 4. We verify that Part 1 of Assumption <ref> holds. Defining A_n as before, we have that Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n) ≤1{A_n^c} + exp(2n δ_n^2)∫_ℬ×ℳ_nL_n(β,m)/L_n(β_0,m_0)d Π(β,m)/Π((β,m) ∈ V_2((β_0,m_0),δ_n)). Consequently, we apply Fubini's theorem and i.i.d sampling to conclude that E_P_0[Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n)] ≤ P_0(A_n^c) + exp(2n δ_n^2)∫_ℬ×ℳ_n^cE_P_0[p_β,m(Y_i,X_i|W_i)/p_β_0,m_0(Y_i,X_i|W_i)]^ndΠ(β,m)/Π((β,m) ∈ V_2((β_0,m_0),δ_n)) ≤ P_0(A_n^c)+ exp(2n δ_n^2)E_P_0[p_β,m(Y_i,X_i|W_i)/p_β_0,m_0(Y_i,X_i|W_i)]^nΠ_ℳ(ℳ_n^c)/Π((β,m) ∈ V_2((β_0,m_0),δ_n)) As P_0(A_n^c) → 0 (see Step 2), it follows that E_P_0[p_β,m(Y_i,X_i|W_i)/p_β_0,m_0(Y_i,X_i|W_i)]^nΠ_ℳ(ℳ_n^c)/Π((β,m) ∈ V_2((β_0,m_0),δ_n)) = o(exp(-2n δ_n^2)) guarantees that Π(ℳ_n^c|{(Y_i,X_i,W_i)}_i=1^n) converges to 0 in mean, and, as a result, in probability. Thus, we verify Part 1 of Assumption <ref>. § ELABORATION ON SECTION <REF> Let L^*_n(β,ζ) = ∏_i=1^nq_β,ζ^*(Y_i|X_i,W_i), let ℓ_n^*(β,ζ) = log L_n^*(β,ζ), and assume that (β,ζ)∼Π^* with Π^* = Π_ℬ×Π_𝒵. The posterior based on likelihood L_n^*(β,ζ) and prior Π^* is Π^*(· | {(Y_i,X_i,W_i)}_i=1^n). We argue that posterior asymptotic normality for Π^*(β∈·|{(Y_i,X_i,W_i)}_i=1^n) holds under similar conditions to <ref>. As recovery of ζ does not depend on β, we follow the logic of Theorem <ref> and start with an assumption that there exists sets 𝒵_n in 𝒵 such that Π^*(𝒵_n|{(Y_i,X_i,W_i)}_i=1^n) P_0→ 1 and Π^*(𝒵_n∩ B_𝒵(0,a_n)^c|{(Y_i,X_i,W_i)}_i=1^n) P_0→ 0 as n→∞, where B_𝒵(0,a_n) = {ζ∈𝒵: ||ζ||_𝒵 < a_n} is an open ball in the seminormed space (𝒵,||·||_𝒵) centered at 0 with radii {a_n}_n=1^∞ that satisfy a_n→ 0. These sets are analogous to ℳ_n∩ B_ℳ(m_0,Kδ_n) in the (β,m)-parametrization, and, as a result, it suffices to analyze parametric submodels {q_β,ζ^*: β∈ℬ} with ζ∈𝒵_n∩ B_𝒵(0,a_n) because conditions (<ref>) and (<ref>) guarantee a similar decomposition to (<ref>) in that Π̃(β∈ A|{(Y_i,X_i,W_i)}_i=1^n) is equal to ∫_𝒵_n∩ B_𝒵(0,a_n)Π^*(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,ζ) d Π(ζ|{(Y_i,X_i,W_i)}_i=1^n)+ o_P_0(1) as n→∞, where Π^*(β∈ A|{(Y_i,X_i,W_i)}_i=1^n,ζ)= ∫_A∏_i=1^nq_β,ζ^*(Y_i|X_i,W_i)dΠ_ℬ(β)/∫_ℬ∏_i=1^nq_β,ζ^*(Y_i|X_i,W_i)d Π_ℬ(β) is the posterior for a ζ-indexed parametric model {q_β,ζ^*: β∈ℬ} under prior Π_ℬ. Applying a second-order Taylor expansion of β↦log q_β,ζ^*(Y_i|X_i,W_i) around β_0, we have that ℓ^*_n(β,ζ)-ℓ^*_n(β_0,ζ) = √(n)(β-β_0)√(n)ℓ̃_n^*(β_0,η_0,ζ)-n/2(β-β_0)^2Ĩ_n(m_0), where ℓ̃_n^*(β_0,η_0,ζ) = 1/n∑_i=1^nℓ̃^*(Y_i,X_i,W_i,β_0,η_0,ζ) with ℓ̃^*(Y_i,X_i,W_i,β_0,η_0,ζ) = ((Y_i-X_iβ_0-η_0(W_i)-ζ(W_i))(X_i-m_02(W_i)))/σ_01^2. Provided that the stochastic equicontinuity condition sup_ζ∈𝒵_n∩ B_𝒵(0,a_n)|√(n)(ℓ̃_n^*(β_0,η_0,ζ)-ℓ̃_n^*(β_0,η_0,0))| P_0→ 0 is satisfied, we can show that the conditional posterior Π(β∈·|{(Y_i,X_i,W_i)}_i=1^n,ζ) satisfies sup_ζ∈𝒵_n∩ B_𝒵(0,a_n)Π^*(β∈ B_ℬ(β_0,M_n/√(n))^c|{(Y_i,X_i,W_i)}_i=1^n,ζ) P_0→ 0 as n→∞.[Although we omit proof of (<ref>), the steps are the same as Lemma <ref> in Appendix <ref>. The proof is actually easier because E_P_0[ℓ̃^*(Y_i,X_i,W_i,β_0,η_0,ζ)]=0 for all ζ implies the `no-bias' condition is automatically satisfied and the fact that only Ĩ_n(m_0) shows up in the expansion of β↦ℓ_n^*(β,ζ) means we can avoid asymptotic properties of m ↦Ĩ_n(m) (i.e., Part 2 of Assumption <ref>).] Moreover, for any M_n→∞ arbitrarily slowly, the asymptotic equicontinuity condition implies ℓ_n^*(β,ζ) - ℓ_n^*(β_0,ζ) = √(n)(β-β_0)√(n)ℓ̃_n^*(β_0,η_0,0)-n/2(β-β_0)^2Ĩ_n(m_0) + o_P_0(1) with uniformity over {(β,ζ) ∈ℬ×𝒵_n: ||ζ||_𝒵< a_n, |β-β_0| < M_n/√(n)}. As the true nuisance function satisfies η_0(W_i) = m_01(W_i) - m_02(W_i)β_0, we can write (<ref>) as ℓ_n^*(β,ζ) - ℓ_n^*(β_0,ζ) = √(n)(β-β_0)1/√(n)ℓ̃_n(β_0,m_0)-n/2(β-β_0)^2Ĩ_n(m_0) + o_P_0(1). with the right-hand side being equal to (<ref>). Combining (<ref>), (<ref>), and (<ref>), it is then possible to show that the marginal posterior for √(n)(β-β_0) is asymptotically equivalent to 𝒩(Δ_n,0,Ĩ_n(m_0)^-1) in total variation using similar arguments to Theorem <ref>. Based on the similarity of (<ref>) to Part 1 of Assumption <ref> and 𝒵_n∩ B_𝒵(0,a_n) to ℳ_n∩ B_ℳ(m_0,Kδ_n), the key message is that the semiparametric Bernstein-von Mises theorem in the hypothetical situation where the adaptive parametrization is used with independent priors for β and ζ is valid under conditions similar to those for the (β,m)-parametrization. abbrvnat
http://arxiv.org/abs/2306.02996v1
20230605160639
Over-the-Air Federated Learning in Satellite systems
[ "Edward Akito Carlos", "Raphael Pinard", "Mitra Hassani" ]
cs.LG
[ "cs.LG", "eess.IV" ]
Shell et al.: Bare Demo of IEEEtran.cls for IEEE Journals Over-the-Air Federated Learning in Satellite systems Edward Akito Carlos, Fellow, IEEE, Raphael Pinard, Student Member, IEEE , Mitra Hassani July 31, 2023 =================================================================================================== Federated learning in satellites offers several advantages. Firstly, it ensures data privacy and security, as sensitive data remains on the satellites and is not transmitted to a central location. This is particularly important when dealing with sensitive or classified information. Secondly, federated learning allows satellites to collectively learn from a diverse set of data sources, benefiting from the distributed knowledge across the satellite network. Lastly, the use of federated learning reduces the communication bandwidth requirements between satellites and the central server, as only model updates are exchanged instead of raw data. By leveraging federated learning, satellites can collaborate and continuously improve their machine learning models while preserving data privacy and minimizing communication overhead. This enables the development of more intelligent and efficient satellite systems for various applications, such as Earth observation, weather forecasting, and space exploration. Federated learning, Linear Antenna, Channel Capacity. § INTRODUCTION Federated learning in the context of satellites refers to a distributed machine learning approach where multiple satellites collaboratively train a model without sharing their sensitive data with a central server or each other. In this scenario, each satellite processes and learns from its own locally collected data, using onboard computational resources. The satellites exchange model updates rather than raw data, leveraging inter-satellite communication links or ground stations. The updates are aggregated in a centralized manner or through a decentralized scheme, allowing the collective intelligence of the satellite network to improve the global model. Federated learning in satellites offers several advantages. Firstly, it enables efficient utilization of computational resources on individual satellites, reducing the need for extensive data transmission and storage. This is particularly beneficial in scenarios where bandwidth and storage capacity are limited. Additionally, federated learning ensures data privacy and security by avoiding the need to transmit sensitive information between satellites or to a central server. Moreover, the distributed nature of federated learning in satellites enhances robustness and resilience. If a satellite encounters communication disruptions or malfunctions, other satellites can continue training the model independently. This decentralized approach also supports scalability, allowing new satellites to join the federation without requiring a centralized retraining process. By leveraging federated learning, satellite networks can collectively improve their machine learning models while maintaining data privacy, optimizing resource usage, and enhancing overall performance and adaptability in space-based applications. Federated learning in the context of satellites involves the application of collaborative machine learning techniques within a network of satellites. It allows satellites to collectively train machine learning models while preserving data privacy and minimizing communication overhead. In satellite systems, federated learning can be employed to leverage the data collected by individual satellites without the need to transmit sensitive or large-scale data back to a central server on Earth. Instead, each satellite performs local model training using its onboard data and computational resources. The trained models are then shared among the satellites, either directly or through relay satellites, using inter-satellite communication links. These models can be combined or averaged to create an improved global model that captures knowledge from each participating satellite. Federated learning in satellites offers several advantages. Firstly, it enables collaborative learning across a distributed network of satellites, allowing the utilization of a larger and more diverse dataset. Secondly, it reduces the need for extensive data transmission to Earth, which can be costly and inefficient. Moreover, it addresses privacy concerns by keeping sensitive data onboard the satellites and sharing only aggregated model updates. By employing federated learning techniques, satellite systems can benefit from collective intelligence and improved models while optimizing communication bandwidth and preserving data privacy in a distributed environment. § SATELLITE ARRANGEMENT TO ACHIEVE MAXIMUM CHANNEL CAPACITY In this section, we aim to find the optimum satellites' position which maximizes the channel capacity. For this, we introduce a new variable ν_i=sin(θ_i)sin(ϕ_i). Also, we assume that d≥λ/2, and therefore kd≥π From Arithmetic Mean-Geometric Mean inequality we obtain C= ∑_i=1^n_Tlog_2 ( 1+P/σ^2λ_i )= log_2 ( ∏_i=1^n_T( 1+P/σ^2λ_i ) ) ≤log_2 (∑_i=1^n_T( 1+P/σ^2λ_i ) ) On the other hand, because ∑_i=1^n_Tλ_i= Trace(𝐖)=1, inequality (<ref>) becomes C ≤log_2 ( n_T+P/σ^2) and equality happens when all λ_i's are equal. Also, we know that all diagonal elements of matrix 𝐖 are equal, meaning that if the off-diagonal elements become zero, we reach the maximum capacity. We divide the problem into three cases: * n_T=n_R In this case 𝐖=H^† H. If H^†, or equivalently H, is unitary, then 𝐖 would be a diagonal with them same eigenvalues. If (kd ν_i)'s are evenly distributed on the unit complex circle, H becomes unitary; because in (<ref>) off diagonal elements become zero. Hence: kdν_i=2π× i/n_T+ μ_0    i=1,2,...,n_T where -2π/n_T-π≤μ_0 ≤ -π and it is such that -π≤ kdν_i ≤π. Because kd≥π, -1≤ν_i=sin(θ_i)sin(ϕ_i) ≤ 1, which guarantee that θ_i and ϕ_i have solutions. * n_T<n_R Again 𝐖=H^† H. In this case if H^†, or equivalently H, is semi-unitary then 𝐖 is diagonal. To satisfy this, n_T vectors of length n_R must be mutually orthogonal over Hermitian inner product space. First we define the following set with n_R elements kd ν̃_̃ĩ=2π× i/n_R+ μ_0    i=1,2,...,n_R where -2π/n_R-π≤μ_0 ≤ -π and is constant. Each subset of the above set with cardinality of n_T makes the matrix H' semi unitary. * n_T>n_R In this case, 𝐖=H H^†, and hence the same discussion as case 2 could be held. In the following figure, one realization of the satellite arrangement to achieve maximum capacity when n_T=n_R=4 is depicted. To satisfy the above conditions, for instance, we begin with sin(θ_i)=0.9 for all users which means θ_i=0.35π for i=1,2,3 and 4. This yields sin(ϕ_1)=-5/6, sin(ϕ_2)=-5/18, sin(ϕ_3)=5/18 and sin(ϕ_4)=5/6, or (ϕ_1,ϕ_2,ϕ_3,ϕ_4)=(-0.69π,-0.08 π, 0.69π, 0.08 π) We found values for ν_i's which maximize the channel capacity in the above cases. When linear antenna is along x-axis ν_i=cos(θ_i)sin(ϕ_i) and when it is along the z-axis it is ν_i=cos(θ_i). Thus, the values found for ν_i's could be used correspondingly to maximize the channel capacity based on the antenna alignment. ν_i's are not unique, because μ_0 is an interval. Once the value for μ_0 is chosen, it would be fixed for all ν_i's. Even for a constant μ_0, for the antenna align X or Y axis, satellite configuration is not unique. ;However, the configuration would be unique for an antenna along Z axis. In Figure <ref>, the maximum capacity is compared with average capacity for 2 cases. § ARRAYS WITH OTHER CONFIGURATIONS This section considers two other possibilities for the antenna configuration which are mentioned in the following parts §.§ Linear array not along y axis One can derive the results we obtained so far for any other alignment of linear array antenna. This could be done by finding by rotating the array to make it along y-axis and then finding the corresponding ϕ and θ values. For an array placed on z axis, the array factor in (<ref>) becomes AF(θ, ϕ)=∑_m=0^M-1 I_m e^jkmdcos(θ) Only θ appears in matrix H which makes the calculation simpler, and all the results can be obtained similarly for this array. In appendix D we have found, for instance, 𝐄{C^2 } for this alignment. §.§ Rectangular array The elements are arranged uniformly along a rectangular grid in the xy plane to form an m × n array. This array could be regarded as either m or n linear array with n and m antenna elements, respectively. Henceforth, each sub-array can be analyzed independently, and the result would be added up. § MINIMIZING CHANNEL INTERFERENCE In this section, we assume that we want to put some satellites around the Earth serving terrestrial users such that the amount of interference is minimized. To do so, firstly, we put satellites so that the minimum distance between them (considering all possible pairs) is maximized. This question intuitively reminds us of the “Tammes problem” which has been extensively studied before. Tammes <cit.> problem looks for an answer for the following question: “How must N congruent non-overlapping spherical caps be packed on the surface of a unit sphere so that the angular diameter of spherical caps will be as great as possible” One can easily correspond our problem to the answer of Tammes problem. We can say all satellites are located on a unique sphere when revolving around the Earth. It is worth mentioning that when a satellite is relatively close to Earth, the orbit on which the satellite traverses is roughly a circle, and therefore our assumption is valid. Tammes problem was solved for some specific number of points(for N=1,2,…,12,23,24 and some other values <cit.> <cit.> <cit.>). Let X be a finite subset of S^n-1 in ℝ^n. We define ψ as follows: ψ(x)=min_x,y ∈ Xdist(x,y), x≠ y Then X is a spherical ψ(X)-code. Also, define d_N the largest angular separation ψ(X) with |X|=N that could be obtained in S^2, meaning that: d_N=max_X ⊂ S^2ψ (X), |X|=N. The remaining of this section considers two cases, one when there is no interference and not all parts of Earth is served by satellites, one when interference exists and all Earth surface is served. Afterward, channel capacity is calculated for the setup with minimum interference and it is compared with that of obtained in <ref>. §.§ No interference In this part, we assume that each non-overlapping spherical cap found for each N is the terrestrial coverage area for the corresponding satellite. In this case, each server on Earth is served by a single satellite, and therefore there is no interference. Having this setup, there exist some place on Earth not being served by any satellites. For this matter, we define coverage percentage for each configuration as ratio of the area covered by satellites to surface area of Earth. The following table could be accordingly attained. As seen, the coverage percentage is non-linear as N grows, however, it reaches it maximum value for N=12 among the values considered above. §.§ Interference and Overlapping Coverage Area to cover the whole surface of Earth with existing satellites, the coverage area for each satellite needs to be enlarged. For this purpose, each spherical cap is equally enlarged till all points on the surface of the Earth would be covered by at least on satellite. In this case, there would be some terrestrial servers receiving signals from a couple of satellites. From <cit.>, for N>6, if a server receives signal from more than 1 satellites, the number of satellites seen by the server is at most 5 and at least 3. In <cit.>, the author tries to find conjectured solutions for this problem. Also, it defined density denoted by D_N which has the same meaning as Coverage Percentage defined in this paper. §.§ Channel capacity To compare the channel capacity for the satellite arrangement in<ref> to that of found in <ref>, we only considers the satellites located in one hemisphere (The one that the receiver is located). We used the method elaborated in <cit.> to find the satellites location in the desired hemisphere. § ROTMAN LENS This lens is capable of beam-forming without the need for switches or phase shifters. Its typical geometry and design parameters are shown in Figure <ref>. Since the invention of Rotman lens, the lens equation has been extensively studied <cit.> , <cit.>. Specifically, for a KU-band Rotman lens, the normalized transfer matrix model obtained in <cit.> as follows 𝐒=e^jkW_0/√(n_Tn_R)                                                ×[c] e^-jkη_1sin(θ_1) . . . e^-jkη_1sin(θ_n_T) e^-jkη_2sin(θ_1) . . . e^-jkη_2sin(θ_n_T) . ... . . ... . . ... . e^-jkη_n_Rsin(θ_1) . . . e^-jk η_n_Rsin(θ_n_T)            ×[c] e^-jk(F_1+V_1) . . . 0 0 0 . . . e^-jk(F_m+V_m) 0 . ... . . ... . . ... . 0 . . . 0 e^-jk(F_n_T+V_n_T) where η_n=[n-(N+1)/2]× d and e^-jk(F_m+V_m) is an extra phase shift added to the m^th RF beam port; and (n=1,2,...,n_T), (m=1,2,...,n_R). The transfer matrix has two parts: the common phase-delay shared by the array ports and the progressed phase through array port in which the latter contributes to RF beam forming. phase alignment requires the phase-delay path (i.e., F_m+V_m) to be constant so that the Rotman lens retain the true-time-delay (TTD) characteristic of the device. This method is essentially required in the case of simultaneous multiple RF beam port excitation to obtain reconfigurable far-field microwave radiation properties for the satellite RF lens <cit.>, Meaning that e^-jk(F_m+V_m)=constant and therefore, the second matrix in (<ref>) becomes identity matrix. If the beam angles θ_m's received from n_T satellites are uniformly distributed over (0, π/2) and there are n_R antenna elements over the array plan, the average channel capacity of this setup could be obtained similar to that of obtained in Appendix <ref>, i.e. a linear array along Z axis. § CONCLUSION This paper analyzes the channel capacity of a MIMO system in which the receiver is equipped with a linear array antenna and the transmitters are some satellites whose locations are not known at the receiver. To address the characteristic of this MIMO channel, the average channel capacity is found and also the outage probability is presented. In addition, the optimum position of satellites are found such that the channel capacity is maximized. In contrast, the optimum position of satellites to cause minimum interference at the receiver is presented and the channel capacity in this setup is compared with the maximum possible channel capacity. As an application of this paper, the average capacity for a Rotman lens performing for KU-band satellites is also analyzed. IEEEtran
http://arxiv.org/abs/2306.02363v1
20230604140139
Inviscid Water-Waves and interface modeling
[ "Emmanuel Dormy", "Christophe Lacave" ]
math-ph
[ "math-ph", "cs.NA", "math.AP", "math.MP", "math.NA", "physics.flu-dyn" ]
[ Raphaël Clouâtre July 31, 2023 ==================== We present a rigorous mathematical analysis of the modeling of inviscid water waves. The free-surface is described as a parameterized curve. We present a numerically stable algorithm which accounts for its evolution with time. The method is shown to converge using approximate solutions, such as Stokes waves and Green-Naghdi solitary waves. It is finally tested on a wave breaking problem, for which an odd-even coupling suffise to achieve numerical convergence up to the splash without the need for additional filtering. § INTRODUCTION The study of water waves has a long mathematical history (Airy, Boussinesq, Cauchy, Kelvin, Laplace, Navier, Rayleigh, Saint-Venant, Stokes, to cite only a few). It has been studied in a variety of situations, probably the most complex of these being the wave breaking problem. What happens when a waves overturns raises mathematical difficulties. The water-air interface cannot be described as a graph any longer. A parametric description of the interface and the tracking of its lagrangian evolution are needed. We want to derive here a stable numerical strategy to solve for one-dimensional water waves (i.e. in a 2D domain, or a 3D domain assuming independence in one horizontal coordinate of space). We want to numerically approximate the Euler equations both in the water and in the air without introducing artificial regularizing parameters. This is particularly important is the case of loss of regularity of the interface. In order to study the possible formation of singularity (e.g. <cit.>), it is necessary not to artificially regularize the numerical approximation. We consider a simple periodic domain 𝒟= _L× see Fig. <ref>, and introduce two boundaries Γ_S the free surface water-air, and Γ_B the bottom. The domain is thus decomposed in three subdomains 𝒟_F the fluid domain, 𝒟_A the air domain, 𝒟_B below the bottom. Since we want our mathematical approximation to be able to described an interface which is not a graph (i.e. overtopping of water in the context of a breaking wave) we need to be able to track it as a parametrized curve. Indeed, a description in the form h(x,y) would develop a shock (discontinuity) as soon as the water wants to overtop. The Euler equation needs to be considered both in the water (𝒟_F) and in the air (𝒟_A). At the water-bottom interface (Γ_B) the normal component of velocity needs to vanish (impermeability condition). Whereas at the water-air interface (Γ_S) two quantities need to be continuous across the interface: the normal velocity and the pressure. The later, for an inviscid fluid, being equivalent to the continuity of the normal component of the stress tensor. The velocity tangential to the interface is notably not continuous across Γ_S. This results in a localized distribution of vorticity along Γ_S in the form of a vortex sheet. Interface evolution methods aim at capturing the time evolution of the water-air interface (Γ_S) using solely the knowledge of this vorticity distribution. This results in having to cope with singular integrals along the vortex sheet, but simplifies the problem numerically in that neither the water domain (𝒟_F), nor the air domain (𝒟_A), need to be meshed ; as would be the case for example using a Finite-Element Method. Such approaches a closer in spirit to a Boundary-Element Method used in three-dimensional problems (e.g. <cit.>). A natural approach to follow the interface numerically is then to discretize the vortex sheet using the so called `vortex method'. It introduces a Neumann to Dirichlet operator, and the singular integral resolution is stable. In this approach, a finite but large number of localized vortices is used to approximate a continuous vortex sheet. Such an approach has been shown to be efficient for Euler flows in the full space <cit.> and for exterior domains <cit.>. In the former, the Euler equations conserves vorticity, whereas in the case of an exterior domain, with a fixed boundary, the boundary can also be interpreted as a vortex sheet, for which the vorticity distribution needs to be evaluated at all time to ensure impenetrability. In the case of free surface flows, such as water waves, the vorticity along the Γ_S surface is also not preserved by the system, and its evolution in time needs to be traced in the system describing the evolution of two Euler flows with two continuity conditions. This approach has been pioneered in <cit.> and further developed recently in <cit.>. It is also a useful description for mathematical proofs (e.g. <cit.>). An alternative description of the jump in tangential velocity stems from expressing the velocity as a potential. This requires the additional assumption that both flows are irrotational. The vortex sheet then translate in a discontinuity of the potential across the interface Γ_S yet with a continuous normal gradient. This approach is know as the `dipole method' (it is equivalent to double layer potentials in potential theory). It was also introduced earlier on for this problem, see <cit.>. The former method is lighter to derive and offers the possibility to account for Euler flows with vorticity, whereas the later involves more analytical work and assumes an irrotational flow. We will see however that the later has better convergence properties for strongly non-linear configurations. In both cases, the spatial discretization in terms of singular integrals is known to converge toward the Euler equation <cit.>, see also <cit.> for a study of the vortex method in the deep-water case. The deep-water case is a trivial limit of the above description, in which the bottom vortex sheet (distributed on Γ_B) is sent to infinity. It is formally equivalent to simply suppressing it, or setting its vorticity to zero. A classical formulation of water waves is the celebrated Zakharov-Craig-Sulem description <cit.>. While this description has proven extremely useful from a theoretical point of view (e.g. <cit.>), it raises difficulties from a numerical point of view and his adaptation in the framework of singular integral formulations would be the subject of a future work (see Remark <ref> for a discussion). Our aim is to construct numerical schemes which can be used to guide mathematical constructions on these problems. The goal of this article is thus to derive a formulation of these methods in the most general case (bi-fluid or single fluid, including possibly a non-flat bottom, including vorticity and mean currents). The resulting expressions are thus non-trivial and in a first reading it may be advise to drop all terms associated to the density of air (bi-fluid formulation), uniform or localized vorticity and circulation (mean currents). We therefore want to ensure that our numerical scheme converges in a realistic manner (i.e. for parameters that can be achieved in practice) to the solution of the continuous problem. We introduce in this work a regularization-free approach to solve for the water-wave problem (i.e. without explicit filtering or any other regularization introducing extra parameters to the problem). We verify conserved quantities at the discrete level. We illustrate on simple test cases the numerical convergence to the approximate solutions (e.g. Stokes waves, Green-Naghdi solitary waves). We also demonstrate stability and convergence of our numerical solution for the wave-breaking problem. Finally, we investigate the effects of regularization strategies on the solution and illustrate numerically how they can yield irrelevant solutions. The free surface Γ_S is initially parametrized by arclength from left to right: e∈ [0,L_S] → z_S(e)=z_S,1(e)+ z_S,2(e) . In the same way, the bottom Γ_B is parametrized by e∈ [0,L_B] → z_B(e)=z_B,1(e)+ z_B,2(e) . It is useful to introduce the tangent vector τ_S = τ_S,1 +τ_S,2= |z_S,e(e)|^-1 z_S,e , which is pointing to the right and the normal n_S=n_S,1 + n_S,2= -τ_S,2 +τ_S,1=τ_S is pointing out of the fluid domain. The same is done at the bottom with the normal now pointing in the fluid domain. It should be stressed that the arclength is not be preserved as the fluid surface Γ_S evolves, it is thus important to consider |z_e| . We introduce on any vector field u = ( u_1, u_2) the following three operations u=u_1- u_2 , (u_1,u_2)^⊥ =(-u_2,u_1) , and the curl operator u = ∂_1 u_2-∂_2 u_1 . Finally, we also introduce for any vector x = ( x_1, x_2) the complex notation x=x_1+ x_2 . We will consider two different approach to compute the free-surface evolution. The first one is based on a so-called `vortex method' the discontinuity of the tangential velocity at the air-water interface is modeled at a vortex sheet (with vorticity distribution γ). The vorticity evolution stems from expressing the Euler equation in both fluids (and takes the form given in (<ref>)), whereas the interface is transported by the resulting normal flow (as expressed in (<ref>)). The second approach is referred to as the `dipole layer', where the velocity is now described by the jump in potential between the two fluids (measured by the dipole distribution μ, related to γ via γ = ∂_e μ). The dipole layer evolution stems from the Bernouilli equation (and takes the form given in (<ref>)), whereas the interface is again transported by the normal flow (as expressed in (<ref>)). In the next section, we introduce the singular integral representation, thus rigorously introducing γ and μ by solving the correspond elliptic equations. In Section <ref>, we express the water-waves equations using this formalism. Section <ref> presents our discretization strategy. Numerical results as well as convergence tests are presented in Section <ref>. The comparison with previously used regularization strategies (filtering and offsetting) is performed in Section <ref>. Finally in Section <ref> we discuss potential applications and further development. § SINGULAR INTEGRALS REPRESENTATION AT FIXED TIME §.§ Stream and potential functions To deal with inviscid fluids, we first need to introduce a few mathematical tools. The first of these tools is the reconstruction of the velocity in terms of the vorticity and the circulation. Even if we assume that the fluid is curl-free in D_F, the non-trivial boundary condition on Γ_S will be interpreted as a vortex sheet in D. For this reason, we introduce the Green kernel in D G( x)=1/4πln(cosh2π x_2/L-cos2π x_1/L), G( x, y)=G( x- y) , and we recall the following result proved in <cit.>: For any f∈ L^∞_c(), every solution Ψ of the following elliptic problem ΔΨ = f , lim_x_2→ +∞∂_2Ψ = -lim_x_2→ -∞∂_2Ψ , |Ψ |≤ C (|x_2|+1) can be written as Ψ( x)= Ψ[f]( x)=∫_ G( x, y) f( y)y + C where C is constant. The relation between the Green kernel (<ref>) in _L× and the usual kernel 1/2πln | x- y| in ^2 is formally derived in Appendix <ref>. From the explicit formula, it is easy to observe using Taylor expansions that ∫_ G( x, y) f( y)y = ( x_2/2L-ln 2/4π) ∫_f( y)y -1/2L∫_ y_2f( y)y + 𝒪(e^-|x_2|) at +∞ , ∫_ G( x, y) f( y)y = (- x_2/2L-ln 2/4π) ∫_f( y)y +1/2L∫_ y_2f( y)y + 𝒪(e^-|x_2|) at -∞ . Thus lim_x_2→ +∞∂_2Ψ = -lim_x_2→ -∞∂_2Ψ is a necessary and sufficient condition to use the representation formula (<ref>). Next, let us consider the following elliptic problem on _F : for any functions g with zero mean and ω, we want to analyze a vector field u such that ÷ u=0 in _F , u =ω in _F , u· n=0 on Γ_B , u· n=g on Γ_S , which we want to extend in , in order to be able to use the above proposition. In (<ref>), the divergence free assumption stems from the incompressibility property and the third condition corresponds to the impermeability of the boundary at the bottom. By the Stokes formula, the fact g has a zero mean is a necessary condition coming from these two assumptions. Unfortunately, (<ref>) has infinitely many solutions because of the harmonic vector field, also called the constant background current in <cit.>, H: ÷ H= H=0 in _F , H· n=0 on Γ_B∪Γ_S for instance when Γ_B= _L×{-1} and Γ_S=_L×{0}, we have H= e_11__F . In order to uniquely determine u from ω, we need to prescribe the circulation either on the bottom or below the free surface, knowing that we have the following compatibility condition from the Stokes formula ∫_Γ_B u·τσ̣- ∫_Γ_S u·τσ̣= ∫__Fω where the integral are taken from left to right. We therefore state that, for ω∈ L^∞(_F) , g∈ C^0(Γ_S) and γ∈ given, there exists a unique u∈ H^1(_F) such that ÷ u=0 in _F , u =ω in _F , u· n=0 on Γ_B , u· n=g on Γ_S , ∫_Γ_B u·τσ̣= γ . We should note that the conservation laws for the 2D Euler equations (including the circulation and the total vorticity) imply that ∫_Γ_B u·τ and ∫_Γ_S u·τ are both conserved quantities. In order to establish a vortex formulation, we can always write u=∇^⊥ψ_F, because ÷ u=0 and ∫_Γ_S u· n = ∫_Γ_B u· n =0 imply that ∫_Γ u^⊥·τ =0 for any closed loop Γ, which allows us to construct ψ_F, uniquely up to a constant. For the dipole formulation, we need to write u as a gradient, which is possible only by subtracting the curl and the circulation parts. Of course, we could take advantage of the fact u - ∇^⊥Ψ[ω]-γ/L e_1 is curl free with zero circulation, and can thus be written as a gradient in D_F. Nevertheless this approach introduces additional difficulties. For example, if we take into account the density of air, it will be crucial to properly define the air velocity field. However, for the single-fluid water-waves equations (in which the density of air is neglected), we are left with several possible choices, in particular stationary vector-fields could be used, thus simplifying the computation below. To underline where the properties of the vector fields are important, we stay general for now and we will introduce constraints as they become necessary. Let us consider any u_ω,γ such that ÷ u_ω,γ=0 in _F , u_ω,γ =ω in _F , ∫_Γ_B u_ω,γ· n ṣ= 0 , ∫_Γ_B u_ω,γ·τṣ= γ . Therefore, u_R:= u- u_ω,γ is div and curl free, without circulation and flux, and can thus be written as u_R = ∇ϕ_F = ∇^⊥ψ̃_F where ϕ_F and ψ̃_F are uniquely determined, up to a constant. Even if it may not seem natural to study ψ̃_F instead of ψ_F, we will see below that ψ̃_F is an interesting quantity to consider for the dipole formulation. To apply Proposition <ref>, in order to obtain a representation formula, we first need to extend continuously the potential ϕ_F or the stream functions ψ_F or ψ̃_F. Extending the potential is related to the fluid charge method developed in <cit.>. This method is unfortunately not relevant for a free surface problem, see Remark <ref>. We, therefore, prefer to extend the stream functions continuously. This is equivalent to assuming the continuity of the normal part of the velocity across the boundary. Such an extended vector field is divergence free in the whole domain , hence can be written using a stream function, and the boundaries can be interpreted as vortex sheets, corresponding to the jump in the tangential velocity. At the bottom, of course, we extend u in the simplest possible way, i.e. such that ÷ u = u=0 in _B, u· n= 0 on Γ_B , ∫_Γ_B u ·τṣ= 0 , which implies u|__B=0 . This is equivalent to extending ψ by the constant ψ_F|_∂Γ_B (and indeed, u· n=0 implies that ψ_F is constant on Γ_B). In order to use Proposition <ref>, we have to extend the stream function in the air such that u_2→ 0 as x_2→ +∞. Hence we extend it in the air with the unique solution of ÷ u= u =0 in _A, u · n=g on Γ_A, | u | → 0 when x_2→∞, ∫_Γ_A u·τṣ= 0 . Note that this equation is the physically relevant formulation if we are interested in the bi-fluid water-wave model, for which the continuity of normal velocity simply reflects that the two fluids are not mixing. Note also that it could, in principle, be possible to add some vortices in the air. We should stress however that the circulation has to vanish at infinity in order to use Proposition <ref>, if not, we would have to change the extension below the bottom. This extended vector field can be expressed as u=∇^⊥ψ , where ψ is continuous in and determined up to an arbitrary constant. This extension will be sufficient for the vortex formulation. Regarding the derivation of the dipole formulation, we now have to extend u_R, it will be convenient for the bottom condition to extend u_R by zero in _B (see further down Remark <ref>). In order to achieve this, we must add the following assumption on u_ω,γ: u_ω,γ· n =0 on Γ_B . This assumption allows us to extend u_R in _B by zero and ψ̃ by the constant ψ̃_F|_∂Γ_B. Again, for a compatibility at infinity, and in order to write u_R as a potential, we must extend u_R in the air by a vector field satisfying ÷ u_R= u_R =0 in _A , u_R· n=g - u_ω,γ· n on Γ_A , | u_R | → 0 when x_2→∞ , ∫_Γ_S u_R·τṣ= 0 . From the above equations, we can write u_R=∇^⊥ψ̃=∇ϕ , where ψ̃ is continuous and uniquely defined up to an arbitrary constant. The potential ϕ jumps across Γ_S and Γ_B , and we have complete freedom to choose independently the constants in each of the connected components: _B, _F and _A . These four constants will be determined below in order to be able to write ψ, ψ̃ and ϕ in the form of a singular integral by applying Proposition <ref>. Using the uniqueness of the solution to the elliptic problem (<ref>), it follows that u= u_ω,γ + ∇^⊥ψ̃= u_ω,γ + ∇ϕ in _F . Even if we have already properly defined the extension of ψ̃ in order to be able to use a Biot-Savart representation formula (<ref>), we still need to discuss the expression of u_ω,γ in _A. There are essentially two natural options: * either to have an explicit formula for u_ω,γ, or at least assume it is independent of time; * or to extend by zero. The choice depends on whether we are interested by the bi-fluid water-wave equation, in which the air is assumed to be an incompressible fluid with a non-zero density, or the single-fluid water-waves equations, where we neglect the density of the air in _A. In the later case (single-fluid), we do not need to know the velocity in the air, and we can simply set u_ω,γ = γ/L e_11__F if the bottom if flat and ω=0 . Then we cannot say that u_ω,γ + ∇^⊥ψ̃ defines the velocity in the air, because the normal part of the velocity is not continuous. A natural idea would then be to set u_ω,γ = γ/L e_1χ(x_2) if the bottom if flat and ω=0 . If we choose χ(x_2)= 1 for all x_2, this implies that a non-physical circulation is present in the air, which is equal to the circulation in the water. Alternatively if χ(x_2) is chosen to decay smoothly from 1 near the interface to 0 at infinity, this implies a strange, also non-physical, vorticity in the air u=-γ/Lχ'(x_2) . Both cases do not correspond to the actual air velocity. Hence, in the limiting case of vanishing air density, we can use this simpler expression for the velocity u_ω,γ . The air velocity can, however, not be reconstructed in that case (as it does not influence the interface evolution). In the first case (bi-fluid formulation), if we want to extend u in such a way that the normal component of the velocity is continuous and ÷ u= u=0 in _A, we then have to solve at any time an elliptic problem in _A to extend the flow u_ω,γ in the correct way. Alternatively, we could prefer to extend u_ω,γ by zero. In this case, we would need to add the following condition u_ω,γ· n =0 on Γ_S . and solve at any time the elliptic problem (<ref>) in _F with (<ref>) and (<ref>). We will observe later (see Section <ref>) that this approach is unpractical. In the case of a bi-fluid formulation with circulation, or with internal vorticity, the vortex method will be preferred. The dipole formulation can however be considered in two cases: the case of a bi-fluid formulation in the absence of both internal vorticity and circulation or the case of a single fluid formulation in the absence of internal vorticity. In the later case, we can obtain H solving ÷ H= H=0 in _F∪Γ_S∪_A , H· n=0 on Γ_B , ∫_Γ_B H·τ=γ and setting u_ω,γ = H. In the sequel, we will derive both the vortex and dipole formulations for the single-fluid and bi-fluid water-waves equations in the presence and absence of circulation and vorticity. We can apply the whole analysis of this paper to treat cases involving several submerged solids S_k⋐_S, simply constructing the harmonic vector such that ÷ H= H=0 in _F, H· n=0 on Γ_B∪_k∂ S_k, ∫_Γ_B H·τ=γ_0, ∫_∂ S_k H·τ=γ_k where γ_k is initially given. In the same way, if we are only interested by the single-fluid water-waves equations, we can simply construct H initially in _F∪Γ_S∪_A and this problem can be solved in the dipole formulation. If ω=γ_0=γ_1=γ_k=0 for all k, the dipole formulation is possible for both the single-fluid and bi-fluid water-waves equations. Otherwise, we will need to use the vortex formulation where the inclusion of such solids is a minor modification of the numerical code. In the vortex formulation, we can even include the case where the solids are moving with a prescribed velocity and rotation by setting H· n=(ℓ_i + r_i x^⊥)· n. The case of immerged solids moving under the influence of the flow involves the computation of pressure forces at the boundary of the solid (see e.g. equation (5.4) in <cit.>). The case of a floating (partially immerged) solid would be even more challenging (see the recent developments in <cit.>). §.§ Potential and dipole formula In the previous subsection, we have constructed continuous ψ or ψ̃ on , where the perpendicular gradient is continuous on _B∪_F∪_A, and his normal part is continuous across the interfaces Γ_B and Γ_S. Extended u or u_R in this way ensure that ÷ u =0 in , whereas the jump of the tangential part can be seen as a vortex sheet, namely u=Δψ =ω + |z_S,e|^-1γ_Sδ_Γ_S +|z_B,e|^-1γ_Bδ_Γ_B in where γ_S(e) : =|z_S,e(e)|[lim_ z∈_F→ z_S(e) u - lim_z∈_A→ z_S(e) u]·τ(e) γ_B(e) :=-|z_B,e(e)|[lim_ z∈_F→ z_B(e) u]·τ(e) are such that the mean value is ∫ (ω+ |z_S,e|^-1γ_Sδ_Γ_S + |z_B,e|^-1γ_Bδ_Γ_B)=0 , see (<ref>). These formulas and the following ones also hold replacing u, ω , γ_S , γ_B by u_R , 0 , γ̃_S , γ̃_B. Proposition <ref> implies that, ψ is determined up to a constant, which is fixed when we choose to represent[Even if δ_Γ is not a bounded function, it belongs in H^-1() where the well-posedness of elliptic problem is usually proven, and the formula can be rigorously established for C^1 curve, see <cit.>.] it as follows ψ ( x)= ∫_Γ_S G( x, y) |z_S,e|^-1γ_Sσ̣( y)+ ∫_Γ_B G( x, y) |z_B,e|^-1γ_Bσ̣( y) + ∫__F G( x, y) ω( y) y = ∫_0^L_S G( x, z_S(e)) γ_S(e) ẹ+ ∫_0^L_B G( x, z_B(e)) γ_B(e) ẹ + ∫__F G( x, y) ω( y) y . By the explicit formula of the Green kernel, we deduce from the previous formula the Biot-Savart law which yields the velocity u=∇^⊥ψ for all x in _F∪_A∪_S: u (x) = ∫_0^L_Sγ_S(e) ∇^⊥G( x- z_S(e)) ẹ + ∫_0^L_Bγ_B(e) ∇^⊥G( x- z_B(e)) ẹ + ∫__F∇^⊥G( x, y) ω( y) y = ∫_0^L_Sγ_S(e) 1/2L(x-z_S(e) /L/π) ẹ +∫_0^L_Bγ_B(e) 1/2L(x-z_B(e) /L/π) ẹ, + ∫__F1/2L(x-y /L/π) ω( y) y because ∇^⊥_ x G( x) = - sinhx_2/L/(2π) -sinx_1/L/(2π)/2L( coshx_2/L/(2π)-cosx_1/L/(2π)) = 1/2L(x_1+ x_2/L/π) , where we have used that -sinh b-sin a = -(sin a-sin( b)) = -2sina- b/2cosa+ b/2 and cosh b-cos a = cos b-cos a = 2 sina- b/2sina+ b/2. This formula with cotangent kernel is singular when x goes to the boundary Γ_S∪Γ_B . This is natural because it encodes the jump of the tangential part of the velocity. The limit formula, the so called Plemelj formula, will play a crucial role in the sequel and are recalled in Appendix <ref>. Another key tool presented in this Appendix is the following desingularization rule pv∫(z(e)-z(e')/L/π) f(e')ẹ' = ∫(z(e)-z(e')/L/π) f(e')z_e(e)-f(e)z_e(e') /z_e(e)ẹ' , because it transforms a principal value integral into a classical integral of a smooth function. This exact relation will be systematically used in order to handle regular terms, which can be integrated with greater accuracy, resulting is improved stability. It is worth stressing that this desingularization does not alter the accuracy of the scheme, as opposed to regularization technics. We would like to stress again that this periodic Biot-Savart law is formally related to the usual Biot-Savart law in ^2: see Appendix <ref>. We further consider that the vorticity f is composed on a constant part ω_01__F and a part that we approximate by a sum of Dirac masses ∑_j=1^N_vγ_v,jδ_z_v,j(t), see <cit.>. The velocity generated by the Dirac masses is simply 1/2L∑_j=1^N_vγ_v,j(x-z_v,j/L/π). The velocity associated to the constant part can be simplified thanks to an integration by parts ∫__F∇^⊥ G( x- y) ω_0y = ω_0[ - ∫__F( ∇ G) ( x- y) · e_2( y) y; ∫__F (∇ G) ( x- y) · e_1 ( y) y ] = ω_0[ ∫__F∇_y( G ( x- y)) · e_2( y) y; - ∫__F∇_y( G ( x- y)) · e_1 ( y) y ] = ω_0[ ∫_∂_F G ( x- y) e_2 ( y)·ñ_F ( y)σ̣( y); -∫_∂_F G ( x- y) e_1 ( y)·ñ_F( y) σ̣( y) ] = -ω_0/4π∫_∂_Fln(coshx_2-y_2/L/(2π) -cosx_1-y_1/L/(2π)) ñ_F^⊥( y) σ̣( y) where ñ_F is the unit normal vector outward to _F. This implies that ∫__F1/2L(x-y /L/π) ω_0y = ω_0/4π∫_0^L_Sln(coshx-z_S(e)/L/(2π) -cosx-z_S(e)/L/(2π)) z_S,e(e)ẹ -ω_0/4π∫_0^L_Bln(coshx-z_B(e)/L/(2π) -cosx-z_B(e)/L/(2π)) z_B,e(e)ẹ , which is well defined and continuous in . Let us note that we can compute u_ω,γ for ω= ω_0+∑γ_v,jδ_z_v,j in the same way. Therefore, we have a complete formula (<ref>) which gives u=∇^⊥ψ in terms of ω, γ_S and γ_B, which will be used for the vortex formulation. For the dipole formulation, we have, exactly in the same way, u_R (x) = ∫_0^L_Sγ̃_S(e) 1/2L(x-z_S(e) /L/π) ẹ +∫_0^L_Bγ̃_B(e) 1/2L(x-z_B(e) /L/π) ẹ , where γ̃_S(e) : =|z_S,e(e)|[lim_ z∈_F→ z_S(e) u_R - lim_ z∈_A→ z_S(e) u_R]·τ(e) γ̃_B(e) :=-|z_B,e(e)|[lim_ z∈_F→ z_B(e) u_R]·τ(e) . In the previous subsection, we have defined u_R and the extension such that u_R=∇ϕ in _B∪_F∩_A where we have the choice to fix one constant by connected component. As the mean values of γ̃_S and γ̃_B are zero, we know from the behavior at infinity (<ref>) that ∇ϕ = u_R= ∇^⊥ψ̃ goes to zero exponentially fast when x_2→∞. In order to control the boundary term in the following computation, we thus set the constant in _A such that ϕ goes to zero at infinity. In the same way, we set the constant in _B such that ϕ_B→ 0 when x_2→-∞. As ϕ is not continuous across the interfaces and we need the value from both side, we denote the restriction of ϕ in _F (resp. in _A and in _B) by ϕ_F (resp. by ϕ_A and ϕ_B). For any x∈_F, we compute ϕ( x)= ⟨ϕ_F, Δ G(· - x) ⟩ = -∫__F∇ϕ_F ( y)·∇ G( y- x)y + ∫_Γ_Sϕ_F( y) ∂_n G( y- x) σ̣( y) - ∫_Γ_Bϕ_F( y) ∂_n G( y- x) σ̣( y) = ∫_Γ_S(ϕ_F( y) ∂_n G( y- x) - ∂_nϕ_F ( y) G( y- x)) σ̣( y) - ∫_Γ_B(ϕ_F( y) ∂_n G( y- x) - ∂_nϕ_F ( y) G( y- x) ) σ̣( y) , where we keep in mind that n=τ^⊥ is pointing outward on Γ_S whereas is pointing inward on Γ_B. As u_R · n is continuous, we now use the fact that G( x)=𝒪(x_2) and ∇ G( x)=𝒪(1) at infinity to integrate by parts in the air and in the bottom domain ϕ_F( x) = ∫_Γ_S ( ϕ_F( y)-ϕ_A( y) ) ∂_n G( y- x) σ̣( y) - ∫_Γ_B ( ϕ_F( y)-ϕ_B( y)) ∂_n G( y- x) σ̣( y) because Δ_ y G( y- x)≡ 0 in _A∪_B (for x∈_F). Doing a similar computation for x∈_A: ϕ( x) = ⟨ϕ_A, Δ G(· - x) ⟩ = -∫__A∇ϕ_A ( y)·∇ G( y- x)y - ∫_Γ_Sϕ_A( y) ∂_n G( y- x) σ̣( y) = ∫_Γ_S∂_nϕ_A ( y) G( y- x)σ̣( y) - ∫_Γ_Sϕ_A( y) ∂_n G( y- x) σ̣( y) = ∫_Γ_S∂_nϕ_F ( y) G( y- x)σ̣( y)- ∫_Γ_Sϕ_A( y) ∂_n G( y- x) σ̣( y) = ∫_Γ_Sϕ_F( y) ∂_n G( y- x) d σ( y)- ∫_Γ_Sϕ_A( y) ∂_n G( y- x) σ̣( y) - ∫_Γ_B(ϕ_F( y) ∂_n G( y- x) - ∂_nϕ_F ( y) G( y- x) ) σ̣( y) = ∫_Γ_S(ϕ_F( y) - ϕ_A( y)) ∂_n G( y- x) σ̣( y) - ∫_Γ_B(ϕ_F( y) ∂_n G( y- x) - ∂_nϕ_B ( y) G( y- x) ) σ̣( y) = ∫_Γ_S (ϕ_F ( y)-ϕ_A ( y)) ∂_nG( y- x)σ̣( y) - ∫_Γ_B (ϕ_F ( y)-ϕ_B ( y)) ∂_nG( y- x)σ̣( y) , we notice that this formula holds true in _B∪_F∪_A. So, we are computing now ∂_nG( y- x) ∇ G( y- x) · n( y) σ̣( y) = 1/2L( coshz_2(e)-x_2/L/(2π)-cosz_1(e)-x_1/L/(2π))[ sinz_1(e)-x_1/L/(2π); sinhz_2(e)-x_2/L/(2π) ]·[ -z_2,e(e); z_1,e(e) ]ẹ = -[-sinhz_2(e)-x_2/L/(2π) - sinz_1(e)-x_1/L/(2π)/2L( coshz_2(e)-x_2/L/(2π)-cosz_1(e)-x_1/L/(2π)) z_e(e) ]ẹ = -[1/2L(z(e) - x/L/π) z_e(e) ]ẹ so, setting μ_S(e)=(ϕ_F - ϕ_A)(z_S(e)) , μ_B(e)=(ϕ_B- ϕ_F) (z_B(e)) , we finally get for any x∈_F∪_A∪_B ϕ( x) = - ∫_0^L_Sμ_S(e) [1/2L(z_S(e) - x/L/π) z_S,e(e) ]ẹ -∫_0^L_Bμ_B(e)[1/2L(z_B(e) - x/L/π) z_B,e(e) ]ẹ = ∫_0^L_Sμ_S(e) [1/2L(x-z_S(e) /L/π) z_S,e(e) ]ẹ +∫_0^L_Bμ_B(e)[1/2L(x-z_B(e) /L/π) z_B,e(e) ]ẹ . We note here that we did not provide any restriction on the constant for ϕ_F so the previous formula holds true if we change ϕ_F (so μ_S and μ_B) by a constant. It is therefore possible to fix initially this constant in such a way that ∫_0^L_Sμ_S,0(e)ẹ=0 . This condition is not conserved in time. Let us also note that with our extension and Assumption (<ref>), we have ϕ_B=0 in _B. It is also possible to derive the stream function ψ̃ from μ_S and μ_B. To do this, we first remark that u_R·τ = ∇ϕ·τ = |z_e|^-1∂_e ( ϕ(z)) , hence γ̃_S(e)=∂_e[ ϕ_F(z_S(e)) - ϕ_A(z_S(e')) ] =∂_eμ_S(e) and γ̃_B(e)=-∂_eϕ_F(z_B(e)) = ∂_eμ_B(e) , and then for any constants C_S,C_B∈ ψ̃(x) = ∫_0^L_S∂_e(μ_S(e)+C_S) G( z_S(e)- x) ẹ + ∫_0^L_B∂_e( μ_B(e)+C_B) G( z_B(e)- x) ẹ = - ∫_0^L_S(μ_S(e)+C_S) ∂_e(G( z_S(e)- x)) ẹ - ∫_0^L_B (μ_B(e)+C_B) ∂_e(G( z_B(e)- x)) ẹ . So we need to compute ∇ G( z(e)- x )·[ z_e,1; z_e,2 ] = 1/2L( coshz_2(e)-x_2/L/(2π)-cosz_1(e)-x_1/L/(2π))[ sinz_1(e)-x_1/L/(2π); sinhz_2(e)-x_2/L/(2π) ]·[ z_e,1(e); z_e,2(e) ] = -[-sinhz_2(e)-x_2/L/(2π) - sinz_1(e)-x_1/L/(2π)/2L( coshz_2(e)-x_2/L/(2π)-cosz_1(e)-x_1/L/(2π)) z_e(e) ] = -[1/2L(z(e)-x/L/π) z_e(e) ] and we finally get ψ̃(x) = ∫_0^L_S (μ_S(e)+C_S) [1/2L(z_S(e) - x/L/π) z_S,e(e) ]ẹ +∫_0^L_B (μ_B(e)+C_B)[1/2L(z_B(e) - x/L/π) z_B,e(e) ]ẹ = -∫_0^L_S (μ_S(e)+C_S) [1/2L( x-z_S(e) /L/π) z_S,e(e) ]ẹ -∫_0^L_B (μ_B(e)+C_B)[1/2L( x - z_B(e) /L/π) z_B,e(e) ]ẹ . As this formula is valid for any values of C_B and C_S, it holds true for C_S=C_B=0 and besides ∫_0^L_S[1/2L( x-z_S(e) /L/π) z_S,e(e) ] ẹ = ∫_0^L_B[1/2L( x - z_B(e) /L/π) z_B,e(e) ] ẹ =0 . For Section <ref>, it will be convenient to introduce the quantity Φ_S(e)=(ϕ_F + ϕ_A)(z_S(e)) , which is complementary to μ_S , and which can be expressed thanks to the formula giving ϕ and the limit formula (see Appendix <ref>) Φ_S(e) = ∫_0^L_Sμ_S(e') [1/L(z_S(e)-z_S(e') /L/π) z_S,e(e') ]ẹ' +∫_0^L_Bμ_B(e')[1/L(z_S(e)-z_B(e') /L/π) z_B,e(e') ]ẹ' . Computing the limit for e'→ e, we note that the first integral is a classical integral of a continuous function, where the extension for e'=e is -μ_S(e) [ z_S,ee(e)/π z_S,e(e)]. Even if this integral could be well approximated by Riemann sum for smooth fluid surface, it occurs that the following formula will be convenient to get non singular integrals, taking advantage of the desingularization (<ref>) Φ_S(e) = ∫_0^L_S ( μ_S(e') - μ_S(e)) [1/L(z_S(e)-z_S(e') /L/π) z_S,e(e') ]ẹ' +∫_0^L_Bμ_B(e')[1/L(z_S(e)-z_B(e') /L/π) z_B,e(e') ]ẹ' which is then extended for e=e' by zero. We conclude this section with one last compatibility condition, which is not used in this article but will be used in a forthcoming article. The function z→ϕ-ψ̃ is harmonic in _B∪_F∪_A, then the integrals along two curves going from left to right are the same if both curves are included in the same connected component. With the limit at infinity, it is clear that ∫_0^L_S(ϕ_A(z_S(e))-ψ̃(z_S(e)) )z_S,e(e)ẹ=- Llim_x_2→+∞ψ̃ . whereas ∫_0^L_B(ϕ_B(z_B(e))-ψ̃(z_B(e)) )z_B,e(e)ẹ=- L lim_x_2→-∞ψ̃= L lim_x_2→+∞ψ̃ . Inside the fluid we have ∫_0^L_S(ϕ_F(z_S(e))-ψ̃(z_S(e)) )z_S,e(e)ẹ = ∫_0^L_B(ϕ_F(z_B(e))-ψ̃(z_B(e)) )z_B,e(e)ẹ . By continuity of the stream function, we get ∫_0^L_Sμ_S(e) z_S,e(e) ẹ= ∫_0^L_S (ϕ_F - ϕ_A)(z_S(e))z_S,e(e) ẹ = ∫_0^L_S(ϕ_F(z_S(e))-ψ̃(z_S(e)) )z_S,e(e)ẹ - ∫_0^L_S(ϕ_A(z_S(e))-ψ̃(z_S(e)) )z_S,e(e)ẹ = ∫_0^L_B(ϕ_F(z_B(e))-ψ̃(z_B(e)) )z_B,e(e)ẹ + Llim_x_2→+∞ψ̃ = ∫_0^L_B (ϕ_F-ϕ_B)(z_B(e)) z_B,e(e)ẹ+ 2 Llim_x_2→+∞ψ̃ = - ∫_0^L_Bμ_B(e) z_B,e(e) ẹ+ 2 Llim_x_2→∞ψ̃ we thus have for all time ∫_0^L_Sμ_S(e) [z_S,e(e) ]ẹ=- ∫_0^L_Bμ_B(e) [z_B,e(e) ]ẹ . The celebrated Dirichlet to Neumann operator in the Zakharov-Craig-Sulem formulation <cit.> is very close to the dipole derivation. For φ∈ H^1/2(Γ_S) given, the principle is indeed to find u_R=∇ϕ_F=∇^⊥ψ̃ such that Δϕ_F=0 in _F , ∂_nϕ_F=0 on Γ_B , ϕ_F=φ on Γ_S . Extending as we did ψ̃ by continuity and defining ϕ, we can represent ϕ through the singular representation formulation (<ref>). Therefore, we should first find uniquely μ_S and μ_B such that ϕ_B=0 on Γ_B and ϕ_F=φ on Γ_S thanks to the limit formulas of Appendix <ref> (see (<ref>) for this kind of application). With (μ_S,μ_B) found, we differentiate in order to get (γ̃_S,γ̃_B) which allow us to construct u_R (<ref>), hence ∂_nϕ_F|_Γ_S again with the limit formulas. This ends the definition of the Dirichlet to Neumann operator φ↦∂_nϕ_F|_Γ_S. § EVOLUTION OF WATER-WAVES The bottom z_B and the constant part of the vorticity ω_0 are initially given. At any time, for a given (z_v,j)_j=1,…,N_v and z_S, we have stablished in Section <ref> the existence of (γ_S,γ_B) or (μ_S,μ_B) from g. Conversely, from γ_S or μ_S, we will first show that there is a unique γ_B or μ_B satisfying the boundary conditions at the bottom. We will thus use Section <ref> to get the velocity everywhere, and then deduce the displacements of the point vortex and the free surface: ∂_t z_v,j and ∂_tz_S. The last step is to use the Euler or the Bernoulli equations to determine ∂_tγ_S or ∂_tμ_S. Therefore, if we know g initially, we can construct (γ_S,0,γ_B,0) or (γ̃_S,0, γ̃_B,0) such that the corresponding velocity (<ref>) or (<ref>) verifies the correct boundary conditions. From γ̃_S,0 we will construct μ_S,0 as the primitive of γ̃_S,0 with zero mean. For t>0, the main numerical strategy can be summarized as * for the vortex method: (z_S ,(z_v,j)_j , γ_S) ↦(z_S ,(z_v,j)_j , γ_S , γ_B) ↦(∂_tz_S ,(∂_tz_v,j)_j , ∂_tγ_S) ; * for the dipole method: (z_S ,(z_v,j)_j , μ_S) ↦(z_S ,(z_v,j)_j , μ_S , μ_B) ↦(∂_tz_S ,(∂_tz_v,j)_j , ∂_tμ_S) . §.§ Determination of γ or μ from the boundary condition The quantities z_B , γ , ω_0 , (γ_v,j)_j are given by the initial conditions, and we want to solve * (z_S,0 ,(z_v,j,0)_j , g_0)↦γ_S,0 for the initial setting in the vortex formulation; * (z_S ,(z_v,j)_j , γ_S)↦γ_B for every time step in the vortex formulation; * (z_S,0 , u_ω,γ,0 ,g_0)↦γ̃_S,0↦μ_S,0 for the initial setting in the dipole formulation; * (z_S , u_ω,γ ,μ_S)↦μ_B for every time step in the dipole formulation. §.§.§ Initial γ_S,0 for the vortex formulation In many situation, such as solitary waves, z_B ,z_S , g= u_F· n|_Γ_S , γ , ω_0 and (γ_v,j ,z_v,j)_j=1,…, N_v are known initially. By uniqueness of the elliptic problem (see (<ref>) with our extension (<ref>)), we know that there exists a unique pair (γ_S ,γ_B) such that the normal velocity u· n=- (ûz_S,e/|z_S,e|) verifies the proper boundary condition on the free surface, i.e. pv∫_0^L_Sγ_S(e') [z_S,e(e)/2L(z_S(e)-z_S(e') /L/π)] ẹ' + ∫_0^L_Bγ_B(e') [ z_S,e(e)/2L(z_S(e)-z_B(e') /L/π)] ẹ' = RHS_V0,S(e) , where RHS_V0,S(e) = -g |z_S(e)| - ∫__F[z_S,e(e)/2L(z_S(e)-y /L/π)] ω_0y = -g |z_S(e)| - ∑_j=1^N_vγ_v,j[z_S,e(e)/2L(z_S(e)-z_v,j/L/π) ] -ω_0/4π∫_0^L_Sln(coshz_S(e)-z_S(e')/L/(2π) -cosz_S(e)-z_S(e')/L/(2π)) [z_S,e(e)z_S,e(e')] ẹ' +ω_0/4π∫_0^L_Bln(coshz_S(e)-z_B(e')/L/(2π) -cosz_S(e)-z_B(e')/L/(2π)) [z_S,e(e)z_B,e(e')] ẹ' ; on the bottom: ∫_0^L_Sγ_S(e') [z_B,e(e)/2L(z_B(e)-z_S(e') /L/π)] ẹ' + pv∫_0^L_Bγ_B(e') [ z_B,e(e)/2L(z_B(e)-z_B(e') /L/π)] ẹ' = RHS_V0,B(e) where RHS_V0,B(e) = -∫__F[z_B,e(e)/2L(z_B(e)-y /L/π)] ω_0y = - ∑_j=1^N_vγ_v,j[z_B,e(e)/2L(z_B(e)-z_v,j/L/π) ] -ω_0/4π∫_0^L_Sln(coshz_B(e)-z_S(e')/L/(2π) -cosz_B(e)-z_S(e')/L/(2π)) [z_B,e(e)z_S,e(e')] ẹ' +ω_0/4π∫_0^L_Bln(coshz_B(e)-z_B(e')/L/(2π) -cosz_B(e)-z_B(e')/L/(2π)) [z_B,e(e)z_B,e(e')] ẹ' , together with the circulation assumptions: ∫_0^L_Sγ_S(e') ẹ' =γ-ω_0 | _F| - ∑_jγ_v,j ∫_0^L_Bγ_B(e') ẹ' = - γ . The existence and uniqueness of a solution is related to the operator B in <cit.>, and Section <ref> will detail how these integrals can be discretized, ensuring that the resulting matrices are invertible. §.§.§ Time dependent γ_B for the vortex formulation For any given time, knowing z_B,γ, ω_0, (γ_v,j)_j=1,…, N_v from the initial conditions, we need to construct γ_B from z_S, γ_S, (z_v,j)_j=1,…, N_v such that the normal velocity u· n=- (ûz_S,e/|z_S,e|) satisfies the impermeability boundary condition on the bottom. This problem is then simpler than the previous one: pv∫_0^L_Bγ_B(e') [ z_B,e(e)/2L(z_B(e)-z_B(e') /L/π)] ẹ' = RHS_VB(e) where RHS_VB(e) = -∫_0^L_Sγ_S(e') [z_B,e(e)/2L(z_B(e)-z_S(e') /L/π)] ẹ' - ∫__F[z_B,e(e)/2L(z_B(e)-y /L/π)] ω_0y = - ∫_0^L_Sγ_S(e') [z_B,e(e)/2L(z_B(e)-z_S(e') /L/π)] ẹ' - ∑_j=1^N_vγ_v,j[z_B,e(e)/2L(z_B(e)-z_v,j/L/π) ] -ω_0/4π∫_0^L_Sln(coshz_B(e)-z_S(e')/L/(2π) -cosz_B(e)-z_S(e')/L/(2π)) [z_B,e(e)z_S,e(e')] ẹ' +ω_0/4π∫_0^L_Bln(coshz_B(e)-z_B(e')/L/(2π) -cosz_B(e)-z_B(e')/L/(2π)) [z_B,e(e)z_B,e(e')] ẹ' , together with the circulation assumptions: ∫_0^L_Bγ_B(e') ẹ' = - γ . This problem is related to the vortex method in the case of an impermeable boundary. The invertibility of this problem was studied in details in <cit.>, and the discretization will be described in Section <ref>. §.§.§ Initial μ_S,0 for the dipole formulation Regarding the dipole formulation, we will often have to construct μ_S,0 knowing g= u_F· n|_Γ_S. As usual, z_B ,z_S , γ , ω_0 and (γ_v,j ,z_v,j)_j=1,…, N_v are initially given. The first step is to construct u_ω,γ if (ω ,γ)≠ (0 ,0). As discussed in Section <ref>, we choose * in the case of the single-fluid water-waves equations with a flat bottom Γ_B=_L×{ -h_0}, to set u_ω,γ(x) = γ/L + ∫_1/2L(x-y /L/π) (ω1__F+ω̃1__B)( y) y where ω̃(x_1,x_2):=-ω(x_1, -x_2-2h_0), hence, after two integrations by parts u_ω,γ (x) = γ/L + 1/2L∑_j=1^N_vγ_v,j( (x-z_v,j/L/π)- (x-z_v,j+2 h_0/L/π) ) +ω_0/4π∫_0^L_Sln(coshx-z_S(e)/L/(2π) -cosx-z_S(e)/L/(2π)) z_S,e(e)ẹ -ω_0/2π∫_0^L_Bln(coshx-z_B(e)/L/(2π) -cosx-z_B(e)/L/(2π)) z_B,e(e)ẹ +ω_0/4π∫_0^L_Sln(coshx-z_S(e)+2 h_0/L/(2π) -cosx-z_S(e)+2 h_0/L/(2π)) z_S,e(e) ẹ . * in the case of the single-fluid water-waves equations without vorticity, to set u_ω,γ (x)= H (x)=∫_0^L_Bγ_B,H(e) 1/2L(x-z_B(e) /L/π) ẹ where γ_B,H is the unique solution of pv∫_0^L_Bγ_B,H(e') [ z_B,e(e)/2L(z_B(e)-z_B(e') /L/π)] ẹ' =0 , with ∫_0^L_Bγ_B,H(e') ẹ' = - 2γ . Indeed, u_ω,γ constructed in this way has a circulation γ in _F∪Γ_S∪_A and -γ in _B, which is compatible of the limit behavior of the stream function associated to a vorticity which has no zero mean value (see Proposition <ref>). * for all the other cases, to set u_ω,γ (x)= ∫_0^L_Sγ_S,ω,γ(e) 1/2L(x-z_S(e) /L/π) ẹ + ∫_0^L_Bγ_B,ω,γ(e) 1/2L(x-z_B(e) /L/π) ẹ + ∫__F1/2L(x-y /L/π) ω ( y) y where (γ_S,ω,γ , γ_B,ω,γ) is the unique solution of the system of Section <ref> with g=0, and where the last integral needs to be replaced with the formula when ω=ω_0+∑γ_v,jδ_z_v,j as we did in the previous paragraph. In any case, given an expression of u_ω,γ , we are looking first for γ̃_S,γ̃_B such that the associated u_R (<ref>) solves the elliptic problem coming from (<ref>), (<ref>) and (<ref>), i.e. we consider the unique solution of pv∫_0^L_Sγ̃_S(e') [z_S,e(e)/2L(z_S(e)-z_S(e') /L/π)] ẹ' + ∫_0^L_Bγ̃_B(e') [ z_S,e(e)/2L(z_S(e)-z_B(e') /L/π)] ẹ' = -g |z_S(e)| - [z_S,e(e) u_ω,γ(z_S(e))] and ∫_0^L_Sγ̃_S(e') [z_B,e(e)/2L(z_B(e)-z_S(e') /L/π)] ẹ' + pv∫_0^L_Bγ̃_B(e') [ z_B,e(e)/2L(z_B(e)-z_B(e') /L/π)] ẹ' = 0 together with the circulation assumptions ∫_0^L_Sγ̃_S(e') ẹ' = ∫_0^L_Bγ̃_B(e') ẹ' = 0 . Of course, if we have chosen the third construction of u_ω,γ, then we can replace [z_S,e(e) u_ω,γ(z_S(e))] by zero. Finally, we set the initial value of μ_S as the anti-derivative of γ̃_S with zero mean value μ_S,0(e)= ∫_0^e γ̃_S(e')ẹ' -1/L_S∫_0^L_S∫_0^e γ̃_S(e')ẹ'ẹ . §.§.§ Time dependent μ_B for the dipole formulation When the time evolves, we need, for the dipole formulation, to construct μ_B from z_S , μ_S, again knowing from the initial data z_B. This problem is very simple, as we know that the potential obtained from μ_B and μ_S (see (<ref>)) satisfies ϕ_B=0 in 𝒟_B. In particular, the limit of the potential (see Appendix <ref>) by below Γ_B vanishes, which reads 1/2μ_B(e) + ∫_0^L_Bμ_B(e')[1/2L(z_B(e)-z_B(e') /L/π) z_B,e(e') ]ẹ' =-∫_0^L_Sμ_S(e') [1/2L(z_B(e)-z_S(e') /L/π) z_S,e(e') ]ẹ' where the function in the left hand side integral is extended for e=e' by -μ_B(e)[1/2πz_B,ee(e)/z_B,e(e)] . By uniqueness of μ_B satisfying such an equation, we state that it is enough to solve it for μ_S given. This operator is different than the one in (<ref>) and corresponds to the operator A^* in <cit.>. We will highlight in Section <ref> that it is possible to interpret this problem as a small perturbation of the identity, and its inverse will be obtained in the form of a Neumann series. This problem is easily stated as we are looking for μ_B such that ϕ_B=0 below the bottom, which is possible only if we have constructed u_ω,γ tangent to the bottom (<ref>). §.§ Displacement of the free surface For the vortex formulation, we already have γ_S and γ_B, whereas for the dipole formulation we simply construct them from μ_S and μ_B: γ̃_B=∂_eμ_B(e) and γ̃_S=∂_eμ_S(e). With γ_S and γ_B, it is now possible from (<ref>) to compute the velocity anywhere. We recall that the tangential velocity is discontinuous at the water-air interface (see the discussion below (<ref>)). We can thus assume that the free surface is moving as ∂_t z_S(t,e) = α u_F(t,z_F(e)) + (1-α) u_A(t,z_F(e)) with a parameter α∈ [0,1]. Of course, by continuity of the normal component, the evolution of the free surface does not depend on the choice of α . It is however an interesting parameter from a numerical point of view. The choice of α can, for example, allow us to vary the resolution at the tip of a breaking wave. The limit formulas of Appendix <ref> give ∂_tz_S(t,e) = ∫_0^L_Sγ_S(e')z_S,e(e) -γ_S(e)z_S,e(e') /z_S,e(e)1/2L(z_S(e)-z_S(e') /L/π) ẹ' +∫_0^L_Bγ_B(e') 1/2L(z_S(e)-z_B(e') /L/π) ẹ' +2α-1/2γ_S(e)/z_S,e(e) + ∑_j=1^N_vγ_v,j/2L(z_S(e)-z_v,j/L/π) +ω_0/4π∫_0^L_Sln(coshz_S(e)-z_S(e')/L/(2π) -cosz_S(e)-z_S(e')/L/(2π)) z_S,e(e')ẹ' -ω_0/4π∫_0^L_Bln(coshz_S(e)-z_B(e')/L/(2π) -cosz_S(e)-z_B(e')/L/(2π)) z_B,e(e')ẹ' . This formula highlights that the dependence on α appears only through a tangent vector field γ_S(e)/|z_S,e(e)|z_S,e(e). The displacement of the point vortices is an obvious application of this formula. For all i=1,…,N_v , ∂_tz_v,i(t) = ∫_0^L_Sγ_S(e') 1/2L(z_v,i-z_S(e') /L/π) ẹ' +∫_0^L_Bγ_B(e') 1/2L(z_v,i-z_B(e') /L/π) ẹ' + ∑_j=1, j≠ i^N_vγ_v,j/2L(z_v,i-z_v,j/L/π) +ω_0/4π∫_0^L_Sln(coshz_v,i-z_S(e')/L/(2π) -cosz_v,i-z_S(e')/L/(2π)) z_S,e(e')ẹ' -ω_0/4π∫_0^L_Bln(coshz_v,i-z_B(e')/L/(2π) -cosz_v,i-z_B(e')/L/(2π)) z_B,e(e')ẹ' . In the case of the dipole formulation, we simply use (<ref>) to get u_R everywhere. We thus need the expression of u_ω,γ, see Section <ref> for these formula depending on the physical setting. In particular, we notice that often, we do not have the velocity in the air, hence we should only consider the case ∂_t z_S(t,e) = u_F(t,z_F(e)) which makes sense for the single-fluid water-waves equations. We thus write ∂_tz_S(t,e) = ∫_0^L_Sγ̃_S(e')z_S,e(e) -γ̃_S(e)z_S,e(e') /z_S,e(e)1/2L(z_S(e)-z_S(e') /L/π) ẹ' +∫_0^L_Bγ̃_B(e') 1/2L(z_S(e)-z_B(e') /L/π) ẹ' +1/2γ̃_S(e)/z_S,e(e) + u_ω,γ(z_S(e)) , where the last formula of u_ω,γ in Section <ref> has to be desingularized when x=z_S(e) as we did in the previous formula (<ref>). It is worth stressing that some freedom is left on the choice of α , which will not affect the shape of the solution, but only the tangential distribution of points on the interface. In the case of the bi-fluid problem, we have constructed u_ω,γ tangent to free surface, it is thus more natural to include the parameter α ∂_tz_S(t,e) = ∫_0^L_Sγ̃_S(e')z_S,e(e) -γ̃_S(e)z_S,e(e') /z_S,e(e)1/2L(z_S(e)-z_S(e') /L/π) ẹ' +∫_0^L_Bγ̃_B(e') 1/2L(z_S(e)-z_B(e') /L/π) ẹ' +2α-1/2γ̃_S(e)/z_S,e(e) +α u_ω,γ(z_S(e)) . §.§ Bernoulli equation and dipole formulation Writing u = u_ω,γ+∇ϕ_F, the Euler equations takes the form ∇[ ∂_tϕ_F +1/2| u|^2] + ∂_t u_ω,γ +( u) u^⊥=-∇[p_F/ρ_F+gx_2] where ρ_F is the density of the fluid and g is the gravity acceleration. Hence in the neighborhood of the free surface where the vorticity is contant, the Euler equations can be reduced to the modified Bernoulli equation ∂_t ϕ_F+1/2|∇ϕ_F + u_ω,γ|^2 + ϕ_t,ω,γ -ω_0 (ψ_ω,γ + ψ̃_F) =-p_F/ρ_F-g x_2 , with ψ_ω,γ the stream function of u_ω,γ and ϕ_t,ω,γ the potential of ∂_t u_ω,γ[Such a potential exits in the neighborhood of the free surface, because ∂_t u_ω,γ=∂_tω_0=0 and by the conservation of circulation.]. For all the examples given above, the difficult component to express is the stream function associated to the constant part. This is because it is not obvious how to express ∫__F G( x- y) y as an integral over the boundaries. Hence, it is easier to assume ω_0=0 which remove the presence of ψ_ω,γ. We should also observe that the potential ϕ_t,ω,γ is even more complicated to obtain. Therefore, in the sequel, we only consider stationary u_ω,γ where we can forget ∂_t u_ω,γ and ϕ_t,ω,γ. Of course, this implies that we also assume γ_v,j=0, hence ω=0 and we can replace u_ω,γ with u_γ. From now on, we will restrict our attention to the following two situations for the dipole formulation: * the bi-fluid and the single fluid water-waves equations without circulation and vorticity, (i.e. u_γ=0); * the single fluid water-waves equations with circulation and without vorticity, (i.e. u_γ=γ e_1/L for the flat bottom and u_γ= H, initially constructed for other bottom). For the bi-fluid water-waves equations, we also consider the Bernoulli equation in the air ∂_t ϕ_A+1/2|∇ϕ_A|^2=-p_A/ρ_A-g x_2 . For the single-fluid water-waves equations, the density of the air is neglected and the pressure in the air is constant. It is useful to count here boundary conditions for the fluid domain 𝒟_F . At the bottom boundary Γ_B the normal component of the flow vanishes, which provides the needed boundary condition. At the top boundary Γ_S, in the bi-fluid problem both the normal component of velocity and the pressure are continuous with that is the air domain 𝒟_A above, across the boundary Γ_S . These two continuity relations are thus enough to close the fluid system in 𝒟_F (morally the number of jump conditions needed at the boundary between two domains is n_1+n_2, where n_1 and n_2 are the numbers of outer conditions required for the PDE solution in domains 1 and 2 respectively). The single-fluid water-waves problem, corresponds to the limit of a vanishing density for the air. The two jump conditions on Γ_S are then replaced by a single boundary condition on pressure p=0, which again is enough to close the Euler system in 𝒟_F. In any case, we need to relate p_A and p_F. In the presence of surface tension, the pressure is not continuous and a pressure jump is achieved, which is directly related to the surface curvature κ [ p n ]=(p_A-p_F) n=-σκ n . The surface tension coefficient σ is here related to capillary effects (e.g., <cit.>). Setting the atmospheric pressure to 0 for the single-fluid equations, we consider the limit of the Bernoulli equation at the interface ∂_t ( ϕ_F(z_S(e)) ) - ∂_t z_S(e) · (∇ϕ_F)(z_S(e)) +1/2|∇ϕ_F+ u_γ|^2(z_S(e)) =-σ/ρ_Fκ(z_S(e)) -g z_S,2(e) , hence we get 1/2∂_tμ_S(e) = 1/2∂_t ( ϕ_F(z_S(e)) )- 1/2∂_t ( ϕ_A(z_S(e)) )= ∂_t ( ϕ_F(z_S(e)) )-1/2∂_t Φ_S(e) = - 1/2∂_t Φ_S(e) + ∂_t z_S(e) ·( ∇ϕ_F)(z_S(e)) -1/2|∇ϕ_F+ u_γ|^2(z_S(e)) -σ/ρ_Fκ(z_S(e)) -g z_S,2(e) . For the bi-fluid water-waves equation without circulation, we need to consider the Bernoulli equation in the air, hence performing the difference we get ∂_t μ_S(e) + ∂_t z_S(e) · (∇ϕ_A - ∇ϕ_F)(z_S(e)) +1/2(|∇ϕ_F|^2- |∇ϕ_A|^2 )(z_S(e)) =ρ_F-ρ_A/ρ_Fρ_Ap_F(z_S(e))- σ/ρ_Aκ(z_S(e)) , whereas the sum gives ∂_t Φ_S(e) - ∂_t z_S(e) · (∇ϕ_A + ∇ϕ_F)(z_S(e)) +1/2(|∇ϕ_F|^2+ |∇ϕ_A|^2 )(z_S(e)) =-ρ_F+ρ_A/ρ_Fρ_Ap_F(z_S(e))+ σ/ρ_Aκ(z_S(e)) -2g z_S,2(e) . We remove the pressure by multiplying the second equation by the Atwood number A_tw = ρ_F-ρ_A/ρ_F+ρ_A and finally obtain 1/2∂_t μ_S(e) = -A_tw/2∂_t Φ_S(e) +1/2∂_t z_S(e) · ( (A_tw+1)∇ϕ_F +(A_tw-1) ∇ϕ_A)(z_S(e)) -1/4((A_tw+1) |∇ϕ_F|^2+ (A_tw-1) |∇ϕ_A|^2 )(z_S(e)) -(A_tw+1)σ/2ρ_Fκ(z_S(e)) -gA_tw z_S,2(e) because A_tw-1/ρ_A=-2/ρ_F+ρ_A=-A_tw+1/ρ_F . Even if the derivation differs, it is worth stressing that the single-fluid water-waves equations without circulation are recovered when setting A_tw=1 in the above equation. We have already derived the expression for ∂_t z_S(e) and in the similar way for ∇ϕ_F (z_S(e)) and ∇ϕ_A (z_S(e)), our next step is to compute ∂_t Φ_S(e). From (<ref>) ∂_tΦ_S(e)/2 = ∫_0^L_S ( ∂_tμ_S(e') - ∂_tμ_S(e))[1/2L(z_S(e)-z_S(e') /L/π) z_S,e(e') ]ẹ' +∫_0^L_B∂_tμ_B(e')[1/2L(z_S(e)-z_B(e') /L/π) z_B,e(e') ]ẹ' +∫_0^L_S ( μ_S(e) - μ_S(e'))[π/2L^2sin^-2(z_S(e)-z_S(e') /L/π) (∂_tz_S(e)-∂_tz_S(e') )z_S,e(e') ]ẹ' +∫_0^L_S ( μ_S(e') - μ_S(e))[1/2L(z_S(e)-z_S(e') /L/π) ∂_tz_S,e(e') ]ẹ' -∫_0^L_Bμ_B(e')[π/2L^2sin^-2(z_S(e)-z_B(e') /L/π)∂_tz_S(e) z_B,e(e') ]ẹ' where the third right hand side integral concerns a continuous function whose value for e'=e is μ_S,e(e) [1/2π∂_tz_S,e(e)/z_S,e(e)] whereas in the fourth integral the extension is -μ_S,e(e) [1/2π∂_tz_S,e(e)/z_S,e(e)] which is exactly opposite to the first one, and then can be omitted. Finally, substituting (<ref>) into (<ref>) or (<ref>), yields an equation of the form A_S^* [∂_tμ_S](e)+ C_D [∂_tμ_B](e) = G_D,1(e) , where the operator A_S is the same kind of operator as in (<ref>) (see Section <ref> for an explanation on how to discretize such an operator, the discrete expressions of A_S, C_D and G_D,1 are given in Appendix <ref>). As the above equation involves ∂_tμ_B, we derive another equation differentiating (<ref>) with respect to time: ∫_0^L_S∂_tμ_S(e') [z_S,e(e')/2L(z_B(e)-z_S(e') /L/π)] ẹ' +1/2∂_tμ_B(e)+ ∫_0^L_B∂_tμ_B(e') [ z_B,e(e')/2L(z_B(e)-z_B(e') /L/π)] ẹ' = -∫_0^L_Sμ_S(e') [∂_tz_S,e(e')/2L(z_B(e)-z_S(e') /L/π)] ẹ' -∫_0^L_Sμ_S(e') [π z_S,e(e') ∂_t z_S(e')/2L^2sin^-2(z_B(e)-z_S(e') /L/π)] ẹ' which gives an equation of the form D_D [∂_tμ_S](e)+ A_B^* [∂_tμ_B](e) = G_D,2(e) where A_B^* is precisely the same operator as on the left hand side of (<ref>), which was already inverted. From this equation, we have ∂_tμ_B =A_B^*-1[ G_D,2-D_D [∂_tμ_S]] which means that ∂_tμ_S will be obtain by solving (A_S^*- C_DA_B^*-1 D_D)[∂_tμ_S]= G_D,1 - C_DA_B^*-1 G_D,2 . We will discuss in Section <ref> that A_S^*- C_DA_B^*-1 D_D can be seen as a perturbation of a simple matrix, though not the identity. The discrete version is also given in Appendix <ref>. §.§ Euler equation and vortex formulation In <cit.>, the authors differentiate the equation for ∂_tμ (<ref>) to get the equation for ∂_tγ . This is difficult to justify because the kernels in the integrals are singular. Let us also stress that such a derivation yields a vortex formation which should only be used without circulation, i.e. with zero mean value for γ_S,0. Alternatively, we write the Euler equations in _F ∂_t u_F + ( u_F·∇) u_F = -∇ p_F/ρ_F -g e_2 . For the single-fluid water-waves equation, we have ∇ p_A=0 , whereas for the bi-fluid water-waves equations we also write the Euler equations in _A ∂_t u_A + ( u_A·∇) u_A = -∇ p_A/ρ_A -g e_2 . As for the dipole formulation, we need to relate the pressures on both sides of the interface using the continuity of the normal component of the stress tensor at the interface (<ref>). This implies by differentiating with respect to e z_S,e(e) · (∇ p_A-∇ p_F)(z_S(e)) =-σ/ẹ(κ(z_S(e))) . So we need to consider the limit of the tangential part of the Euler equations at the interface. For the single-fluid water-waves equations, one can simply replace z_S,e(e)·∇ p_F(z_S(e)) by σ/ẹ(κ(z_S(e))) and obtain ∂_t( u_F (z_S(e)) · z_S,e(e) ) + [(( u_F(z_S(e))- ∂_t z_S(e))·∇) u_F (z_S(e))]· z_S,e(e) - u_F (z_S(e)) ·∂_t z_S,e(e) =- σ/ρ_F/ẹ(κ(z_S(e))) -g z_S,e,2 . In order to introduce ∂_tγ_S , we use Ψ_S(e) := ( u_F + u_A) (z_S(e)) · z_S,e(e) = pv∫_0^L_Sγ_S(e') [z_S,e(e)/L(z_S(e)-z_S(e')/L/π) ] ẹ' + ∫_0^L_Bγ_B(e') [z_S,e(e)/L(z_S(e)-z_B(e')/L/π) ] ẹ' + ∑_j=1^N_vγ_v,j[z_S,e(e)/L(z_S(e)-z_v,j/L/π)] +ω_0/2π∫_0^L_Sln(coshz_S(e)-z_S(e')/L/(2π) -cosz_S(e)-z_S(e')/L/(2π)) [z_S,e(e)z_S,e(e')] ẹ' -ω_0/2π∫_0^L_Bln(coshz_S(e)-z_B(e')/L/(2π) -cosz_S(e)-z_B(e')/L/(2π)) [z_S,e(e)z_B,e(e')] ẹ' so by the definition of γ_S (<ref>), we write 1/2∂_tγ_S(e) = ∂_t( u_F (z_S(e)) · z_S,e(e) ) -1/2∂_t Ψ_S(e) = -1/2∂_t Ψ_S(e) - [(( u_F(z_S(e))- ∂_t z_S(e))·∇) u_F (z_S(e))]· z_S,e(e) + u_F (z_S(e)) ·∂_t z_S,e(e) - σ/ρ_F/ẹ(κ(z_S(e))) -g z_S,e,2 . For the bi-fluid formulation, we proceed as for the dipole formulation, i.e. * we compute from both Euler equations ∂_tγ_S(e)= ∂_t( u_F (z_S(e)) · z_S,e(e) ) -∂_t( u_A (z_S(e)) · z_S,e(e) ) ; * we express in the same way ∂_tΨ_S(e)= ∂_t( u_F (z_S(e)) · z_S,e(e) ) +∂_t( u_A (z_S(e)) · z_S,e(e) ) ; * we replace z_S,e(e) ·∇ p_A(z_S(e)) by z_S,e(e) ·∇ p_F(z_S(e)) -σ/ẹ(κ(z_S(e))) ; * we remove the pressure term by multiplying the second equation by the Atwood number (<ref>), and by adding the two equations, and we use A_tw-1/ρ_A=-2/ρ_F+ρ_A=-A_tw+1/ρ_F. In the end, we get a slightly modified equation for ∂_tγ_S(e): 1/2∂_tγ_S(e) = -A_tw/2∂_t Ψ_S(e) -1+A_tw/2[(( u_F(z_S(e))- ∂_t z_S(e))·∇) u_F (z_S(e))]· z_S,e(e) +1-A_tw/2[(( u_A(z_S(e))- ∂_t z_S(e))·∇) u_A (z_S(e))]· z_S,e(e) -g A_tw z_S,e,2 +(1+A_tw/2 u_F-1-A_tw/2 u_A)(z_S(e)) ·∂_t z_S,e(e) - (1+A_tw)σ/2ρ_F/ẹ(κ(z_S(e))) which coincides with the first one when we set A_tw=1. We may simplify the second and third right hand side terms by (( u_F(z_S)- ∂_t z_S)·∇) u_F (z_S) = (1-α)γ_S/|z_S,e|^2( z_S,e·∇) u_F (z_S) = (1-α)γ_S/|z_S,e|^2∂_e( u_F (z_S)) , (( u_A(z_S)- ∂_t z_S)·∇) u_A (z_S) = -αγ_S/|z_S,e|^2( z_S,e·∇) u_A (z_S) = -αγ_S/|z_S,e|^2∂_e( u_A (z_S)) . We then substitute the computation of ∂_tΨ_S (see Appendix <ref>) into (<ref>) or (<ref>) to get an equation of the form A_S [∂_tγ_S](e)+ C_V [∂_tγ_B](e) = G_V,1(e) where A_S is the adjoint operator of A_S^* in the dipole formulation, see Section <ref> for properties of these operators. Finally, we differentiate with respect to time (<ref>) to get an equation of the form D_V [∂_tγ_S](e)+ B_B [∂_tγ_B](e) = G_V,2(e) where the conservation of circulation ∫∂_tγ_B=0 is included in the last line of the discretization. This allows to obtain ∂_tγ_S by solving (A_S- C_VB_B^-1 D_V)[∂_tγ_S]= G_V,1 - C_VB_B^-1 G_V,2 . Again, the discrete forms are given in Appendix <ref>. In <cit.>, we have developed a method referred to as the “fluid charge method”. In this method, after retrieving u_ω,γ we have written u_R=∇ϕ_F and solved the Laplace problem with homogeneous Neumann boundary condition on Γ_B. This method relies on an extension of ϕ in D_B by continuity. We can establish that ∇ϕ (x) = ∫_0^L_Sσ_S(e) 1/2L(x-z_S(e) /L/π) ẹ +∫_0^L_Bσ_B(e) 1/2L(x-z_B(e) /L/π) ẹ , where σ_S(e) : =|z_S,e(e)|[lim_z∈_A→ z_S(e)∂_nϕ - lim_z∈_F→ z_S(e)∂_nϕ] σ_B(e) :=|z_B,e(e)| [lim_z∈_F→ z_B(e)∂_nϕ - lim_z∈_B→ z_B(e)∂_nϕ]=-|z_B,e(e)| [ lim_z∈_B→ z_B(e)∂_nϕ] . Indeed, in this formulation, the tangential part is continuous whereas the normal part has a jump. Therefore, we can adapt Section <ref> to find σ_B such that u_R satisfies the impermeability condition on Γ_B. Note that this problem is related to the inversion of the operator A in <cit.>. Next, we can use the previous formula to get the displacement of the free surface in terms of σ_S. Unfortunately, in the case of Water waves, we do not have an equation for ∂_tσ_S. In Sections <ref> and <ref>, we have used the continuity of the normal component of the stress tensor (<ref>). For the dipole formulation, this equation on p_F-p_A provides the connexion between the two Bernoulli equations and allows to get an equation on ∂_tμ_S=1/2∂_t ( ϕ_F(z_S(e)) )- 1/2∂_t ( ϕ_A(z_S(e)) ). For the vortex formulation, a differentiation along the free surface of this relation on p_F-p_A yields an equation on z_S,e(e) · (∇ p_A-∇ p_F)(z_S(e)). This establishes a connexion between the tangential part of the two Euler equations. Here, it is important that γ_S corresponds to the jump of the tangential velocities. In the fluid charge method, σ_S corresponds to the jump of the normal velocities. To get an equation for ∂_tσ_S , we would thus need a relation for z_S,e(e)^⊥· (∇ p_A-∇ p_F)(z_S(e)), i.e. a sort of continuity of the normal derivative of the normal component of the stress tensor, which has no physical meaning. Note that, defining the velocity above the free surface such that the normal component has a jump implies that it does not corresponds to the air velocity. Let us note that Baker in <cit.> considered either the vortex or dipole formulation for the free surface combined with the fluid charge method at the bottom. Because of the extension of ψ above the fluid and ϕ below the fluid domain, Proposition <ref> cannot be applied to such a formulation. For this reason, we have chosen in the previous sections to consider the same formulation for the free surface and for the bottom. §.§ The deep-water case It is easy to derive a deep-water formulation removing contributions from the bottom. Following line by line the previous sections without the presence of _B and Γ_B, i.e. assuming that the fluid domain is infinite in the vertical direction, we can get the following model * the dipole formulation for the bi-fluid water-waves equations without vorticity and circulation, where the velocity is given by γ_S=∂_eμ_S. The equation verifyed by ∂_tμ_S stays exactly (<ref>), but dropping all terms involving μ_B in the expression (<ref>) for ∂_tΦ_S. This yields A_S^* [∂_tμ_S](e) = G_D,1(e) , where G_D,1 is giving in (<ref>) where we remove u_γ and μ_B; * the dipole formulation for the single-fluid water-waves equations with circulation and without vorticity, where u_γ=γ/L e_1 and γ̃_S=∂_eμ_S. The equation verified by ∂_tμ_S stays exactly (<ref>), but again dropping all terms involving μ_B in the expression (<ref>) for ∂_tΦ_S. This yields A_S^* [∂_tμ_S](e) = G_D,1(e) , where G_D,1 is giving in (<ref>) where we remove μ_B and replace A_tw by 1 and u_γ by γ/L e_1; * the vortex formulation with circulation and where the vorticity is composed of point vortices (no constant part). The velocity is given by γ_S and the equation verified by ∂_tγ_S stays exactly (<ref>), but dropping all terms involving γ_B in the expression for ∂_tΨ_S, see Appendix <ref>. This yields A_S [∂_tγ_S](e) = G_V,1(e) . Let us note that the equations obtained in this case are very close to <cit.>, but where we have used the desingularization (<ref>), which allows us to justify the derivatives and to handle only classical integrals. In this earlier article, principal value integrals with xsin^-2x singularities may be a cause of numerical instabilities. § NUMERICAL DISCRETIZATION §.§ Time integration Whereas most earlier numerical studies on water waves breaking used high-order Runge-Kutta integrators <cit.>, we preferred to restrict our study to a second order in time, but symplectic integrator for harmonic oscillators. We use the so-called Verlet integrator, which amounts to using a staggered grid in time, and preserves the hamiltonian structure of harmonic oscillators. The governing equations take the form ∂ _t X = G(Z,X) , ∂ _t Z = F(Z,X) , where X≡γ in the vortex formulation and X≡μ in the dipole formulation. F and G denote here non-linear differential operators. These are discretized in the form X^n+1/2-X^n-1/2 = Δ t G(Z^n,X^n) , Z^n+1-Z^n = Δ t F(Z^n+1/2,X^n+1/2) . The right-hand-side of the first equation involves X^n which is not known and that of the second equation similarly involves the unknown Z^n+1/2 . These are respectively constructed as 2 X^n ≃ X^n+1/2+X^n-1/2 and 2 Z^n+1/2≃ Z^n+1+Z^n and calculated using a fixed point relaxation. We observed numerically that this symplectic integrator offers better stability properties than standard Runge-Kutta integrator and yields remarkable conservation properties on test cases (see Section <ref>). §.§ Shifted grids in space Besides the use of a staggered mesh in time, a staggered mesh is space can also be used. This approach was for example used and fully justified (via a mathematical demonstration) in <cit.> to enforce impermeability boundary conditions. Shifted grids are also used here for the free surface. Our aim is to avoid regularization techniques and yet desingularize the integrals involved in the computation. We reformulated all singular integrals as regular ones using relation (<ref>) (see also Appendix <ref>). The resulting integral is now non-singular, but can only be defined at the former singularity as a continuous prolongation. This extension necessarily involves higher order derivatives, which can induce some numerical errors when the curvature becomes large (i.e. in a situation relevant to wave breaking). Evaluating the integrals on a shifted dual grid resolves this problem as the function is at worse evaluated half a grid point away from the former singularity. The integral eventually needs to be interpolated on the original grid for time stepping. This introduces some numerical smoothing, which is however entirely controlled by the grid size and thus vanishes in the limit of a large number of points. Finally, all derivatives in space, with respect to e, in the discrete expressions are evaluated thanks to second order finite difference formula. §.§ Discretization and inversion of singular operator by Neumann series One of the main numerical difficulties lies in the resolution of linear systems with matrices related to singular kernel operators. If it is well known that continuous operators are invertible <cit.>, the relevant discretization, the invertibility and the convergence of the discret operators are recently studied in <cit.>. The notation A , B and their adjoints A^* , B^* come from Equation (3.1) in this article, where the relation and the inversion is based on Poincaré-Bertrand formula concerning the inversion of Cauchy integrals, see <cit.> for the full details. Given an arc-length parametrization : [0,L_B] →Γ_B, if 0=e_B,1 <e_B,2 < … <e_B,N < L_B are close to the uniform distribution θ_i= (i-1)L_B/N_B, then the matrix A^*_B,N appearing to compute μ_B, see Section <ref>, defined as A_B,N^*(i,j)= L_B/N_B[1/2L(z_B(e_B,i)-z_B(e_B,j) /L/π) z_B,e(e_B,j) ] ∀ i≠ j ∈ [1,N_B]× [1,N_B] , A_B,N^*(i,i)= 1/2- L_B/N_B[1/2πz_B,ee(e_B,i)/z_B,e(e_B,i)] ∀ i∈ [1,N_B] , is invertible and can be seen as a perturbation of 1/2 I_2. It is thus a well conditioned matrix, see for instance <cit.>. It can be inverted very efficiently by a Neumann series. We refer to <cit.>, in particular Theorem 8.1 therein where the convergence rate is given. Namely, we write A_B,N^*=1/2( I_N-R_B,N), R_B,N<1 , which implies that A_B,N^*-1=2( I_N - R_B,N )^-1= 2 ∑_k=0^+∞ R^k_B,N . In view of a fixed point procedure, we denote R_B,N:=-2A_B,N^*+ I_N, and U_n+1=2 ∑_k=0^n+1 R^k_B,N = R_B,N(2 ∑_k=0^n R_B,N^k) + 2 I_N = R_B,N U_n +2 I_N , U_0=2 I_N . Indeed, the distance between U_n+1 and U_n controls the error A_B,N^*-1-U_n = 2 ∑_k=n+1^+∞ R_B,N^k≤ 2 R_B,N^n+11/1- R_B,N = U_n+1- U_n/1- R_B,N . Concerning the computation of ∂_tμ_S, we have noticed in Section <ref>, see (<ref>), that we should invert A_D,N:=A^*_S,N-C_D,N A^*-1_B,ND_D,N , where C_D,N and D_D,N account for the interactions between the bottom and the free surface. If it was established that A^*_B,N is a perturbation of 1/2 I_N_S, it is not the case of A_D,N because of the asymptotic behavior of C_D,N and D_D,N when the free surface is far away the bottom. Indeed, studying the behavior when z_S-z_B = X + o(X) for large X∈_+, we get from the definition of C_D,N and D_D,N that (C_D,N)_i,j =A_twL_B/N_B[1/2L(z_S(i)-z_B(j) /L/π) z_B,e(j) ] = A_twL_B/N_B[1/2L - z_B,e(j) ] +o(1) = -A_tw de_B z_B,e(j)/2L+o(1). and (D_D,N)_i,j = L_S/N_S[1/2L( z_B(i)-z_S(j) /L/π) z_S,e(j) ] = de_S z_S,e(j)/2L+o(1). Using the decomposition of A_S,N^*=1/2( I_N_S-R_S,N) and A^*-1_B,N=2( I_N_B+R̃_B,N) we conclude that A_D,N = 1/2 I_N_S + A_twde_B de_S/2L^2( z_B,e(j) )_i,j( z_S,e(j))_i,j+R̂_B,N +o(1) = 1/2 I_N_S + A_twde_B de_S/2L^2( ∑_k=0^N_B z_B,e(k) z_S,e(j) )_i,j+R̂_B,N+o(1) = 1/2 I_N_S + A_twde_S/2L^2( z_S,e(j) ∫_0^L_B z_B,e(e) de )_i,j+R̂_B,N+o(1) = 1/2 I_N_S + A_twde_S/2L( z_S,e(j) )_i,j+R̂_B,N+o(1) = 1/2Ã_N+R̂_B,N+o(1). We are then first interested in inverting such matrices Ã= I_N_S + (a_j)_i,j. If 1+∑_k a_k≠ 0, then à is invertible and we have Ã^-1= I_N_S -1/1+∑ a_k (a_j)_i,j. It is enough to check that the right hand side matrix is the right inverse to Ã, namely Ã( I_N_S -1/1+∑ a_k (a_j)_i,j) = I_N_S + (a_j)_i,j -1/1+∑ a_k (a_j)_i,j -1/1+∑ a_k (a_j)_i,j^2 = I_N_S + ( a_j(1+∑ a_k)/1+∑ a_k -a_j/1+∑ a_k -a_j∑ a_k/1+∑ a_k)_i,j = I_N_S . In our case, we have 1+ ∑ a_j = 1+ A_twde_S/L∑_j z_S,e(j) = 1+A_tw/L∫_0^L_S z_S,e(e) ẹ +o(1) =1+ A_tw+o(1) which is non zero for the single-fluid formulation where A_tw=1 , but also for for the bi-fluid water-waves equations where A_tw> -1 if ρ_F>0. By the previous lemma and (<ref>), we naturally set à := I_N_S + A_twde_S/L( z_S,e(j) )_i,j , Ã_-1 := I_N_S - A_twde_S/L(1+A_tw)( z_S,e(j) )_i,j and R := I_N_S -2 Ã_-1 A_D,N , so that A_D,N = 1/2à ( I_N_S - R ) , where we have neglected the error made in (<ref>). We then have for R<1 (because A_D,N=1/2Ã+o(1)) ∂_tμ_S= A_D,N^-1 RHS = 2 ( I_N_B - R )^-1Ã_-1 RHS= 2 ∑_k=0^+∞ R^k Ã_-1 RHS. which can be written as a fixed point procedure u_n+1 = 2∑_k=0^n+1 R^k Ã_-1 RHS = R (2 ∑_k=0^n R^kÃ_-1 RHS) + 2Ã_-1 RHS = R u_n +u_0 , u_0=2Ã_-1 RHS . Concerning the vortex method, we need to inverse B_B,N which appears in the determination of γ_B in the vortex formulation, see Sections <ref>, <ref>, <ref>. Such an operator is related to the classical vortex method in domains with boundaries, see the operator B in <cit.>. The discrete version B_B,N(i,j)= L_B/N_B[ z_B,e(ẽ_B,i)/2L(z_B(ẽ_B,i)-z_B(e_B,j) /L/π)] ∀ (i,j)∈ [1,N_B-1]× [1,N_B] , B_B,N(N_B,j)=L_B/N_B ∀ j∈ [1,N_B] , is invertible if ẽ_B,i∈ (e_B,i,e_B,i+1) are close to a uniform distribution θ̃_i= (θ_i+ θ_i+1)/2. Moreover <cit.> states that γ_B, N = B_B,N^-1 RHS_VB,N is a good approximation of γ_B. Even though it is invertible, the matrix is not well-conditioned and cannot be seen as a Neumann series, except relating B^-1 to A^-1 through <cit.>. However, this step only needs to be performed once at the beginning of the numerical integration (since this matrix does not evolve with time). It is thus worth inverting it accurately. The last matrix to inverse is A_V,N:=A_S,N-C_V,N B^-1_B,ND_V,N to compute ∂_tγ_S, see (<ref>), which can be also inverted by Neumann series as we did for A_D,N. § NUMERICAL RESULTS AND CONVERGENCE In order to compare and validate the various numerical approaches, we have considered three different initial conditions and test cases when the bottom is flat Γ_B=_L×{-h_0}. In all test cases, we used N_S=N_B=N. While simulations were performed using different values of the Atwood number, we report here simulations with A_tw=1 , which can be compared with existing solutions. The parameter α was set to α=1, implying that the point are advected tangentially at the velocity of the lower fluid. Also all test cases presented here include a flat bottom (though the cases of infinite depth or variable bottom can also be handle using the same code). The simulations are made non-dimensional, with a length-scale based on the water depth, such that h_0=1, and a time-scale based on gravity, such that g=1 . We will numerically investigate the stability of our scheme in a few test cases. We will then validate the numerically obtained solutions against analytical solutions, when available. When not available, we will simply check convergence to an N independent solution in the limit of large N . Another useful validation concerns conserved quantities. The first thing to be checked is the conservation of mass M(t):= ρ_F Vol _F(t)=ρ_F∬__F(t)÷[ 0; x_2 ]x = ρ_F∫_∂_F(t)[ 0; x_2 ]·ñσ̣( x) = ρ_F∫_0^L_S z_S,2(e) z_S,e,1(e)ẹ - ρ_F∫_0^L_B z_B,2(e) z_B,e,1(e)ẹ and the total energy E(t):= 1/2∬__F(t)ρ_F | u|^2 + ∬__F(t)ρ_F g x_2 = ρ_F/2∬__F(t)∇ϕ_F·∇^⊥ψ + ρ_F g/2∬__F(t)÷[ 0; x_2^2 ]x = -ρ_F/2∫_0^L_S[ u_F(z_S(e)) z_S,e(e)] ψ(z_S(e))ẹ+ ρ_F/2∫_0^L_B[ u_F(z_B(e)) z_B,e(e)] ψ(z_B(e))ẹ +ρ_Fg/2∫_0^L_S z_S,2^2(e) z_S,e,1(e)ẹ - ρ_Fg/2∫_0^L_B z_B,2^2(e) z_B,e,1(e)ẹ where the expression of u_F and ψ, given in (<ref>) and (<ref>), has to be considered with the limit formulas of Appendix <ref>. We also successfully checked conserved integrals for domains with horizontal symmetries <cit.> (not detailed here). §.§ Case 1: linear water waves The first test case we consider is that of simple waves of small amplitude, it takes the form provided by Stokes at first order η(t,x)= A cos( k x - ω t) , Φ(t,x, y)= A ω/k cosh k(y+h_0) /sinh k h_0 sin( k x - ω t) , -2mm with ω= √( g k tanh k h_0 ) , and u_x=∂_x Φ , u_y=∂_y Φ . Our initial condition is thus η_0(x)= A cos( k x ) , u· n = A sin kx √(g k tanh kh_0) . We also consider Stokes waves at second order, where the initial data is slightly modified: η_0(x) = A cos( k x ) + kA^2 3- tanh^2 kh_0/4tanh^3 kh_0 cos( 2k x) , u· n = A sin kx √(g k tanh kh_0)/√(1+k^2A^2 sin^2(kx))(1+ kA/tanh kh_0 cos (kx) ) . In our simulations, we used u· n as boundary condition to numerically find γ_0 and μ_0 (see  <ref> and  <ref>). This contrasts with <cit.> whom provides the analytical expression for both γ and μ in the limit of a vanishing amplitude. This distinction is small at this stage, but turns out to be important at later stage (see our third test case below). We consider numerically a wave number k=1, and a domain of horizontal extend L=2 π . We integrate our simulations for a time t_f=10 k / ω . We introduce a CFL-number CFL= min (| ż_S | Δ t / (|z_S,e| ẹ)) , based on the lagrangian velocity of the points sampling the interface. We have checked that our numerical code is stable for a CFL-number CFL ≤ CFL^*, we observed numerically that CFL^* ∈ [0.25, 0.5) . We can then test the convergence of our numerical solution by varying the spatial resolution N . In order to keep temporal truncation error, we then used CFL ≤ 1/10 . Convergence is then assessed by comparing the final state to the initial condition z(t_f) - z(0) _∞. The resulting errors for waves of amplitudes A=10^-2 and A=10^-3 respectively are represented in Figure <ref>(a,b). The error, defined in L^∞-norm, decreases with increasing resolution to a value which decreases with the amplitude of the wave. This can be interpreted as the signature of weak non-linear corrections. Interestingly, the vortex and the dipole methods are indistingishible in these graphs. §.§ Case 2: solitary waves Our second test case concerns solitary waves. We consider here an extension of solitary waves to a periodic domain. This involves the cnoidal wave solution for the Green-Naghdi equation. To present this solution, first we define the Jacobi elliptic functions cn(u,m):=cosφ(u,m) as the inverse of the elliptic integral u(φ,m):=∫_0^φθ̣/√(1-msin^2θ) where m∈ (0,1). We also need the complete elliptic integral of the first and second kind: K(m):=∫_0^π/2θ̣/√(1-msin^2θ) and E(m):=∫_0^π/2√(1-msin^2θ)θ̣ . For a given period L, amplitude A and depth h_0 , we determine the nonlinearity parameter m∈ (0,1) verifying the following dispersion relation AL^2=16/3mK^2(m)h_0^2/gc^2(m), where the velocity c is given by c(m):=√(g h_0(1+η_1(m)/h_0)(1+η_2(m)/h_0)(1+η_3(m)/h_0)) with η_1(m):=-A/mE(m)/K(m), η_2(m):=A/m(1-m-E(m)/K(m)), η_3(m):=A/m(1-E(m)/K(m)). Once m is obtained, the Green-Naghdi soliton is then defined by the surface elevation η(t,x)=η_2+A cn^2(2K(m)/L(x-ct),m). As it translates to the right with velocity c, we write u· n|_(0,x,η(0,x))= -c∂_xη(0,x)/√(1+|∂_xη(0,x)|^2), which uniquely defines γ_0 and μ_0, see  <ref> and  <ref>. For more details on cnoidal waves, we refer to <cit.>, and references therein. We consider numerically a domain of large extend to allow for a localized solitary wave, and choose L= 40 π . For an amplitude A=0.1 , we computed a circulation γ = 3 10^-3 . We checked numerically that setting γ=0 did not alter the solution in this case (i.e. the circulation is weak enough not to affect the solution). The results are presented in figure <ref>. Figure <ref>(a) highlights the modification induced on the above initial condition after one full period, i.e. t^*=L/c≃ 124.23. The difference (represented in dashed blue) appears dominated by an correction on the velocity c, presumably due to higher order correction terms. We performed simulations up to t=10 t^* which confirmed that no change of shape is observed, but the phase lag with the analytical solution increases with time. Both methods appear stable in time and no significant change on the numerical solution was observed when the time-step was reduced by a factor of 2 . The numerical convergence of both methods is demonstrated on figure <ref>(b). Here the solitons are localized and a small difference of phase velocity yields a large L^∞-norm, not reflecting the actual distance between the two curves. We thus use a discrete Hausdorff distance between two solutions to measure the error (see Fig. <ref>.b): E=d_H(z_1,z_2)≡max(max_i(min_j(|z_1(i)-z_2(j)|)),max_j(min_i(|z_1(i)-z_2(j)|))) , where both curves were interpolated using 2^17≃ 130.000 points in order to identify accurately enough the closer points on both curves. Both methods exhibit an approximately second order convergence in space (as highlighted by the black dotted line). Both the vortex and the dipole methods yield very good results on this test case as well. For this rather long integration (up to t=t^*≃ 124.23), we observe that the volume is conserved up to fluctuations of the order of 10^-8 and the total energy (kinetic and potential) is very weakly dissipated (or the order of 10^-4 over the full period). §.§ Case 3: wave breaking Finally we turned to a strongly non-linear problem, that of a wave breaking. We consider a flat bottom and a mean water depth h=1. We consider an initial interface of the form η_0(x)= A cos( k x ) with A=1/2and k=1 . Our initial velocity stems from (<ref>) but is now used with large amplitude. In doing so it is important to relax the approximation n ≃ e_y . We start with (<ref>) which yields the initial condition on velocity u_x=∂_xΦ= A √(g k/tanh kh) cos kx , u_y= ∂_yΦ=A √(g k tanh kh) sin kx . Using expression of the normal n = 1/√(1+(∂_x η)^2)[ - ∂_x η; 1 ] = 1/√(1+k^2 A^2 sin ^2 kx)[ k A sin kx; 1 ] , we get u · n = A sin kx √(g k tanh kh)/√(1+k^2 A^2 sin ^2 kx)(1 +k A 1/tanh kh cos kx ) which resumes to (<ref>) in the limit of vanishing amplitudes. It is interesting to note that this differs from <cit.>, whom used the γ_0 and μ_0 constructed from the small amplitude approximation. It should also be stress that the construction of a sensible initial condition at large amplitude (wave breaking) from the simple wave analytics is necessarily based on simplifying assumption as there is no analytical solution in this strongly non-linear limit. This situation is in a fully non-linear regime and more challenging numerically than the two previous test cases. We observed here, as reported by <cit.>, that for fully-nonlinear configuration the vortex method is unstable to a high wave number instability. As the resolution is increased and higher wave number are resolved, the integration time before an instability occures decreases (see first column in table <ref>). Such is not the case (again as stressed by <cit.>) for the dipole method, for which the integration time is fairly independent of the resolution (see second column in table <ref>). As we have shown in the first two cases, both the vortex method and the dipole method are stable and can be used in situations involving a small to moderate curvature. In the case of huge curvature (such as wave-breaking), the vortex method becomes impractical and does not converge (as the stability decreases with increasing resolution). For both methods, we observed that the Volume and total energy are conserved up to less than 1 % . The case of the dipole method is not quite as severe, but we observe a numerical instability at a time independent of N, well before the splash occurs. The instability of the dipole method appears to develop first in the form of points approaching each other in the direction tangent to the interface, in a manner similar to the so called `phantom traffic jam' instability. In order to delay the formation of this instability, we introduced an `odd-even coupling' (OEC) at the end of each time-step in the form of an arbitrary regularization on μ by replacing at the end of each time step, ∂_t μ_S(e_k) with ( ∂_t μ_S(e_k-1)+2 ∂_t μ_S(e_k)+∂_t μ_S(e_k+1))/4 . Using this approach we could extend the integration time by nearly 10% (see table <ref>). For large grids, this coupling procedure does not alter the simulations on the first two test-cases such as simple waves or solitary waves, but does stabilise the numerical integration of wave breaking The OEC approach introduces a stabilisation, which however does not affect the overall convergence of the scheme (see figure <ref>) and which vanishes continuously in the limit of large grids. It is worth stressing that contrary to other regularization techniques previously used on this problem (and discussed in the following section), the above OEC does not involve any arbitrary small parameter ε other than the grid space. This approach is thus free of the risk to present results with vanishing distance between points and yet a finite regularization. The curves marking the interface between water and air, are generally not graphs for this test-case. We thus stick to the discrete Hausdorff distance between two curves, see (<ref>), to measure the error (see Figs. <ref> and <ref>). It is worth stressing that the very same test case has been recently investigated using the Navier-Stokes equations and a Finite Element discretization, and that convergence has been achieved to the solution portrayed on Fig. <ref>.a as the Reynolds number is increased <cit.>. § COMPARISON WITH REGULARIZATION STRATEGIES §.§ Fourier filtering for the vortex method Since the Vortex methods appears unstable (see Barker 1982 and table <ref>), some Fourier filtering can be introduced. The strategy of a filtered Vortex method is for example followed by <cit.>. The filtering corresponds to a product in Fourier space of both z_S and γ_S with a filter function. We used here the filter function introduced in <cit.> (see eq. (2.14) in this reference) F̂ = 1/2 -1/2tanh(2 |k| π/N_S-ξ_0/d) . Two parameters are introduced in the above. ξ_0 locates the centre of the transition zone (usually some fraction of π) and d controls the width of the transition zone. Following <cit.>, we used ξ_0=π/4 and d=π/40. The Fourier filtered method (referred to as `F-vortex' in the text) yields a increased stability and thus longer integration. Remarkably the final time of integration for the F-vortex method is independent of N (see table <ref> and the method does converge see Fig. <ref>(a). Despite the simulation has been extended in time far beyond the unfiltered vortex method, the solution does converge (to first order) to that of the dipole method, see Fig. <ref>(b). The final integration time however remains shorter than for the dipole method (let alone the OEC-dipole method). The use of ξ_0=π/4 in the above tests (guided by <cit.>) yields an `effective' resolution of approximately N/4 , (though with a larger stability than the pure vortex method with N/4). Increasing the cut-off frequency, say to ξ_0=π/2 instead of ξ_0=π/4 yields a less stable scheme. The observed time for instability with ξ_0=π/2 was t=2.64 for N=512 and t=2.24 for N=1024 . We should also stress that <cit.> introduced, in the case of a very stiff initial data, a filtering on the dipole method. This interesting approach stabilizes the dipole method, thus allowing for longer time integration. §.§ Curve-offset method Another approach to regularise the boundary integral consists in considering that the vortices are located at a finite distance above the free surface (e.g. <cit.>). This finite offset prevents any singularity in the integration as the vortices are fictiously located at (X_j,Y_j)=(x_j,y_j)+L_d n with L_d=δ L/N . While no singularity can occur with this technics, the kernel in the vortex formulation takes the form (π(x_s,i-x_v,j)/L) where (x_s,i) are located on the free surface whereas (x_v,i) are located a finite distance above the free surface. It is finite but of order (π L_d/L) , which becomes large when L_d is small. This method is largely discussed in <cit.> where the author introduces L_d , but also a second regularization which consists in replacing the Green kernel G(x_s,i,x_v,j), behaving as ln |x_s,i-x_v,j| (see (<ref>)), with ln |x_s,i-x_v,j|+b where b is not necessarily small (b∈ [0; 10 000]). The above b term is difficult to justify from a mathematical point of view, and not quite a small perturbation. Instead of the above procedure, we consider a regularization inspired from the vortex-blob method. We thus introduce a second regularizing parameter ε_N>0 replacing the previous kernel functions by (π(x_s,i-x_v,j)/L+ε_N n). In practice, when testing this approach, we have considered the L_d regularization (finite distance of the vortices from the interface) and a parameter ε_N on the form of the blob method (i.e. regularizing the kernel). Following <cit.>, we explicitely relate L_d to 1/N . Also ε_N is taken to vanish as 1/N to try to assess the convergence properties of this approach. We present in Fig. <ref> the numerical simulations performed with the curve-offset method applied to the vortex formulation. We used (<ref>) as initial condition. This configuration corresponds to the wave breaking and the results should be compared with those of Fig. <ref>. We considered four different regularizations weights, namely L_d=L/N ε_N=1/2N , L_d=2L/N ε_N=1/2N , L_d=L/2N ε_N=1/2N , L_d=L/N ε_N=1/4N . We report the various numerical solutions at t=3.68 in Fig. <ref>(a,b). We first note that all curves are significantly different from the results presented in Fig. <ref>. The wave is much slower with the regularization. The unregularized method, for which we observed convergence toward the Euler solution on various test cases, has already reached splashing at that time. The shape of the obtained numerical solution also strongly depends on L_d , but only weakly on ε_N (the green and black curves being extremely close to another). We performed further tests, which confirmed the weak influence of the ε_N regularization on the numerical solution. A puzzling property of this approach is that, for a given choice of regularization, say L_d=L/N ε_N=1/2N , some convergence is achieved as N is increased (see Fig. <ref>c). Yet the numerical curve then converges toward a curve which could seem plausible, but significantly differs from the unregularized solution. A final concern with this approach is that the total energy of the wave is not conserved (and varies by ∼ 30% through the simulation). § DISCUSSION We derived a numerical strategy to discretize inviscid water waves in the case of overturning interfaces (i.e. when the water-air interface is not a graph). We showed that this discretisation can be used up to the splash (i.e. when the interface self intersects). No filtering or regularisation were introduced other than numerical discretisation. In the most severe case of a splashing wave, an odd-even coupling was introduced, as the numerical truncation, it vanishes in the limit of a large number of points. This formulation opens the way for further studies. In particular, we want to study the possibility of a finite-time singularity formation at the tip of a breaking wave. No singularity was observed with the initial condition considered here. We also want to investigate the effect of an abrupt jump in water height. Finally, the triple interface of water, air and land (i.e. the sloping beach problem) still needs to be addressed. § ACKNOWLEDGEMENTS The authors are grateful to David Lannes and Alan Riquier for discussions. This work was partially supported by the ANR project `SINGFLOWS' (ANR-18-CE40-0027-01), the IMPT project `Ocean waves', and the CNES-Tosca project `Maeva'. § COTANGENT KERNEL We begin this appendix by relating the cotangent kernel and the usual kernel in ^2. Let f a L-periodic real function and z a curve verifying z(e+L)=z+L, then K_^2[|z_e|^-1 f](x) = 1/2π∫_-∞^∞1/x-z(e') f(e') ẹ' =1/2π∫_0^L∑_k=-∞^+∞1/x-z(e') -L k f(e') ẹ' = 1/2π L∫_0^L∑_k=-∞^+∞1/1/L(x-z(e')) - k f(e') ẹ' = 1/2L∫_0^L(x-z(e')/L/π) f(e') ẹ' because it is well known for x∈ (0,1) that π(π x)=lim_N→∞∑_k=-N^N 1/x+n = 1/x+ lim_N→∞∑_k=1^N 2x/x^2-n^2 which can be extended to ∖ by unicity of the analytic extension. This computation can be found in several formal derivation to get the Biot-Savart formula without using the Green kernel in _L×, see for instance <cit.>. The limit of Cauchy integrals from above provides lim_s→ 0^+1/2L∫_0^L(x-z(e')/L/π) f(e') ẹ'|_x= z(e)+s n =1/2L pv∫_0^L(z(e)-z(e')/L/π) f(e') ẹ' - 1/2f(e)/z_e(e) and from below we have lim_s→ 0^-1/2L ∫_0^L(x-z(e')/L/π) f(e') ẹ'|_x= z(e) +s n =1/2L pv∫_0^L(z(e)-z(e')/L/π) f(e') ẹ' + 1/2f(e)/z_e(e) . From this formula, we recover that the tangential component u·τ= (ûτ) has a jump, whereas the normal component u· n=-(ûτ) is continuous. We can also note that ∫_0^L[z_e(e)/2L(z(e)-z(e')/L/π) ]f(e') ẹ' is actually a classical integral whereas pv∫_0^L[z_e(e)/2L(z(e)-z(e')/L/π) ]f(e') ẹ' only makes sense in terms of principal value. As usual concerning desingularization of the principal value, we first note that pv∫_0^L z_e(e') (z(e)-z(e')/L/π) ẹ'= L/π pv ∫_-∞^∞z_e(e')/z(e)-z(e')ẹ' = -L/πlim_ε→ 0^+([ ln(z(e)-z(e') ) ]_-∞^e-ε + [ ln(z(e)-z(e') ) ]^+∞_e+ε) = -L/πlim_ε→ 0^+lim_S →∞(( ( ln (ερ)+iθ) - ln S ) + ( ln S + iπ - ( ln (ερ)+iθ + i π) ) )=0 where z(e)-z(e ±ε) = - ±ε z_e(e) = -±ερ e^iθ, hence we can always write pv∫(z(e)-z(e')/L/π) f(e') de' = ∫(z(e)-z(e')/L/π) f(e')z_e(e)-f(e)z_e(e') /z_e(e)de' which is now a classical integral of a continuous function. For more details on singular integral, we refer to <cit.>, see <cit.> for a brief summary. § DISCRETE OPERATORS FOR THE DIPOLE FORMULATION For z_B(i):=z_B(e_B,i), z_S(i):=z_S(e_S,i) and μ_S(i):=μ_S(e_S,i) given, we construct the matrix A_B,N^* A_B,N^*(i,j)= L_B/N_B[1/2L(z_B(i)-z_B(j) /L/π) z_B,e(j) ] ∀ i≠ j ∈ [1,N_B]× [1,N_B] , A_B,N^*(i,i)=1/2- L_B/N_B[1/2πz_B,ee(i)/z_B,e(i)] ∀ i∈ [1,N_S] , and F_D,N F_D,N(i)= -∑_j=1^N_SL_S/N_Sμ_S(j) [ z_S,e(j)/2L(z_B(i)-z_S(j) /L/π)] ∀ i∈ [1,N_B] . We set μ_B = (A_B,N^*)^-1 F_D,N. This operation corresponds to (<ref>). Next, we compute γ_B=∂_eμ_B(e), γ_S=∂_eμ_S(e), next ∂_tz_S, ∇ϕ_F (z_S(e)) and ∇ϕ_A (z_S(e)) where we could extend in the integral on Γ_S for e'=e by γ_S(e)z_S,ee(e) -γ_S,e(e)z_S,e(e)2π z_S,e^2(e). We compute A_S,N^* as A_S,N^*(i,j)= A_twL_S/N_S[1/2L(z_S(i)-z_S(j) /L/π) z_S,e(j) ] ∀ i≠ j ∈ [1,N_S]× [1,N_S] , A_S,N^*(i,i)=1/2-A_twL_S/N_S∑_j≠ i[1/2L(z_S(i)-z_S(j) /L/π) z_S,e(j) ] ∀ i∈ [1,N_S] , next C_D,N(i,j)= A_twL_B/N_B[1/2L(z_S(i)-z_B(j) /L/π) z_B,e(j) ] ∀ (i, j) ∈ [1,N_S]× [1,N_B] . and finally D_D,N(i,j)= L_S/N_S[1/2L( z_B(i)-z_S(j) /L/π) z_S,e(j) ] ∀ (i,j) ∈ [1,N_B]× [1,N_S] . Concerning the right hand side term, we compute G_D,1,N(i)= -A_tw∑_j≠ iL_S/N_S ( μ_S(i) - μ_S(j))[π/2L^2sin^-2(z_S(i)-z_S(j) /L/π) (∂_tz_S(i)-∂_tz_S(j) )z_S,e(j) ] -A_tw∑_j≠ iL_S/N_S ( μ_S(j) - μ_S(i))[1/2L(z_S(i)-z_S(j) /L/π) ∂_tz_S,e(j) ] +A_tw∑_j=1^N_BL_B/N_Bμ_B(j)[π/2L^2sin^-2(z_S(i)-z_B(j) /L/π)∂_tz_S(i) z_B,e(j) ] + 1/2[ ∂_t z_S(i) ((A_tw+1)∇ϕ_F (z_S(i))+(A_tw-1) ∇ϕ_A (z_S(i)))] -1/4((A_tw+1) | (∇ϕ_F+ u_γ )(z_S(i)) |^2+ (A_tw-1) |∇ϕ_A (z_S(i)) |^2 ) -(A_tw+1)σ/2ρ_Fκ(z_S(i)) -gA_tw z_S(i) , for all i∈ [1, N_S], whereas G_D,2,N(i)= - ∑_j=1^N_SL_S/N_Sμ_S(j) [1/2L( z_B(i)-z_S(j) /L/π) ∂_tz_S,e(j) ] - ∑_j=1^N_SL_S/N_Sμ_S(j)[π/2L^2sin^-2( z_B(i)-z_S(j) /L/π)∂_tz_S(j) z_S,e(j) ] for all i∈ [1, N_B]. It could seem strange that the diagonal terms in A_S,N^* are of a different nature than those in A_B,N^*. It is in fact the same, because we can make use of Appendix <ref> to rewrite (<ref>) in the form 1/2μ_B(e) + ∫_0^L_B (μ_B(e')-μ_B(e))[1/2L(z_B(e)-z_B(e') /L/π) z_B,e(e') ]ẹ' =-∫_0^L_Sμ_S(e') [1/2L(z_B(e)-z_S(e') /L/π) z_S,e(e') ]ẹ' , for which we would defined A_B,N^*(i,i)=1/2-L_B/N_B∑_j≠ i[1/2L(z_B(i)-z_B(j) /L/π) z_B,e(j) ] ∀ i∈ [1,N_S] and we recover the same expression by the discretization of desingularization rule (<ref>) L_B/N_B∑_j≠ i[1/2L(z_B(i)-z_B(j) /L/π) z_B,e(j) ] - L_B/N_B[1/2πz_B,ee(i)/z_B,e(i)] =0 . We can therefore use either of these formulations, but it is more interesting to avoid the second derivative z_S,ee , which tends to destabilize the numerical code when the curvature of the interface becomes large. As the bottom boundary does not depend on time, we can retain the expression in terms of z_B,ee. This relation could be also used in the extension for ∂_tz_S mentioned above replacing L_SN_Sγ_S(i)z_S,ee(i) -γ_S,e(i)z_S,e(i)2π z_S,e^2(i) with L_SN_Sγ_S(i) z_S,e(i)∑_j≠ i1/2L(z_S(i)-z_S(j) /L/π) z_S,e(j) - L_SN_Sγ_S,e(i)2π z_S,e(i) . Unfortunately, this does not improve the stability of the code. The method explained in  <ref> with the shifted grids in space allows us to avoid γ_S,e , which would appear by the extension by continuity. § DISCRETE OPERATORS FOR THE VORTEX FORMULATION First, we give the precise equation for the vortex formulation, then we give the discret version of the operators. We compute 1/2∂_tΨ_S(e)= ∫_0^L_S[∂_tγ_S(e')z_S,e(e)-∂_tγ_S(e)z_S,e(e')/2L(z_S(e)-z_S(e') /L/π) ]ẹ' +∫_0^L_B∂_tγ_B(e')[1/2L(z_S(e)-z_B(e') /L/π) z_S,e(e) ]ẹ' -∫_0^L_S[πγ_S(e')z_S,e(e)-γ_S(e)z_S,e(e')/2L^2sin^-2(z_S(e)-z_S(e') /L/π) (∂_tz_S(e)-∂_tz_S(e') ) ]ẹ' +∫_0^L_S[γ_S(e')∂_tz_S,e(e)-γ_S(e)∂_tz_S,e(e')/2L(z_S(e)-z_S(e') /L/π) ]ẹ' +∫_0^L_Bγ_B(e')[1/2L(z_S(e)-z_B(e') /L/π) ∂_tz_S,e(e) ]ẹ' -∫_0^L_Bγ_B(e')[π/2L^2sin^-2(z_S(e)-z_B(e') /L/π)∂_tz_S(e) z_S,e(e) ]ẹ' + ∑_j=1^N_vγ_v,j[∂_tz_S,e(e)/2L(z_S(e)-z_v,j/L/π)-π z_S,e(e)∂_tz_S,e(e)/2L^2sin^-2(z_S(e)-z_v,j/L/π)] +ω_0/4π∫_0^L_Sln(coshz_S(e)-z_S(e')/L/(2π) -cosz_S(e)-z_S(e')/L/(2π)) ×[ ∂_t z_S,e(e)z_S,e(e')+ z_S,e(e)∂_t z_S,e(e')] ẹ' - ω_0∫_0^L_S[ ∂_tz_S(e)-∂_tz_S(e')/2L( z_S(e)-z_S(e')/L/π) ] [ z_S,e(e)z_S,e(e')] ẹ' -ω_0/4π∫_0^L_Bln(coshz_S(e)-z_B(e')/L/(2π) -cosz_S(e)-z_B(e')/L/(2π)) [∂_tz_S,e(e)z_B,e(e')] ẹ' + ω_0∫_0^L_B[ ∂_tz_S(e)/2L( z_S(e)-z_B(e')/L/π) ] [z_S,e(e)z_B,e(e')] ẹ' where we have used the relation on the cotangent (<ref>). Every integrals are classically defined, in particular the functions can be extended by continuity for e=e' in the third integral by [γ_S(e)z_S,ee(e)-γ_S,e(e)z_S,e(e)/2π z_S,e^2(e)∂_tz_S,e(e) ] and in the fourth one by [γ_S(e)∂_tz_S,ee(e)-γ_S,e(e)∂_tz_S,e(e)/2π z_S,e(e)] , which can be simplified by a part in the extension of the third integral. We can replace the term z_S,ee in the extension of the second integral by replacing L_S/N_S[γ_S(i)z_S,ee(i)2π z_S,i^2(e)∂_tz_S,e(i) ] by L_S/N_S∑_j≠ i[γ_S(i)z_S,i^2∂_tz_S,e(i)1/2L(z_S(i)-z_S(j) /L/π) z_S,e(j) ] . Unfortunately, it is more complicated to replace ∂_tz_S,e and ∂_tz_S,ee because differentiating the previous relation would introduce an additional sin^-2 term. The first integral has to be replaced by ∫_0^L_S[∂_tγ_S(e')z_S,e(e)/2L(z_S(e)-z_S(e') /L/π) ]ẹ' , where the continuous function is extended for e=e' by zero. These computations provide the explicit expression for G_V,1. We can note that the many terms disappear when considering the single-fluids formulation, i.e. the α=1 and A_tw= 1 case. For the expression of G_V,2, we get G_V,2(e) = -∫_0^L_Sγ_S(e') [π z_B,e(e) ∂_tz_S(e')/2L^2sin^-2(z_B(e)-z_S(e') /L/π)] ẹ' - ∑_j=1^N_vγ_v,j[πz_B,e(e) ∂_tz_v,j/2L^2sin^-2(z_B(e)-z_v,j/L/π) ] -ω_0/4π∫_0^L_Sln(coshz_B(e)-z_S(e')/L/(2π) -cosz_B(e)-z_S(e')/L/(2π)) [z_B,e(e)∂_t z_S,e(e')] ẹ' - ω_0∫_0^L_S[ ∂_tz_S(e')/2L( z_B(e)-z_S(e')/L/π) ] [z_B,e(e)∂_t z_S,e(e')] ẹ' . Concerning the numerical approximation, for z_B(i):=z_B(e_B,i), z_S(i):=z_S(e_S,i) and γ_S(i):=γ_S(e_S,i) given, we set z̃_B(i):=(z_B(e_B,i)+z_B(e_B,i+1))/2 for i=1,…, N_B-1 to construct the matrix B_B,N B_B,N(i,j)= L_B/N_B[ z̃_B,e(i)/2L(z̃_B(i)-z_B(j) /L/π)] ∀ (i,j)∈ [1,N_B-1]× [1,N_B] , B_B,N(N_B,j)=L_B/N_B ∀ j∈ [1,N_B] , The discretization of RHS_V0,B,N and RHS_VB,N are clear, replacing every z_B(e) by z̃_B(i), and where the last component is -γ. We deduce γ_B = B_B,N^-1 RHS_VB,N. Next, we compute ∂_tz_S, u_F (z_S(e)) and u_A (z_S(e)) where we could extend in the integral on Γ_S for e'=e by γ_S(e)z_S,ee(e) -γ_S,e(e)z_S,e(e)2π z_S,e^2(e), and we compute the derivative with respect to e. We compute A_S,N as A_S,N(i,j)= A_twL_S/N_S[1/2L(z_S(i)-z_S(j) /L/π) z_S,e(i) ] ∀ i≠ j ∈ [1,N_S]× [1,N_S] , A_S,N(i,i)= 1/2 ∀ i∈ [1,N_S] , next C_V,N(i,j)= A_twL_B/N_B[1/2L(z_S(i)-z_B(j) /L/π) z_S,e(i) ] ∀ (i, j) ∈ [1,N_S]× [1,N_B] . and finally D_V,N(i,j)= L_S/N_S[1/2L(z̃_B(i)-z_S(j) /L/π) z̃_B,e(i) ] ∀ (i,j) ∈ [1,N_B-1]× [1,N_S] , D_V,N(N_B,j)=0 ∀ j∈ [1,N_S] . abbrv
http://arxiv.org/abs/2306.08555v1
20230614145925
Quantum computing with subwavelength atomic arrays
[ "Freya Shah", "Taylor L. Patti", "Oriol Rubies-Bigorda", "Susanne F. Yelin" ]
quant-ph
[ "quant-ph" ]
[email protected] Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Ahmedabad University, Ahmedabad 380015, Gujarat, India Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA NVIDIA, Santa Clara, California 95051, USA Physics Department, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Photon-mediated interactions in subwavelength atomic arrays have numerous applications in quantum science. In this manuscript, we explore the potential of three-level quantum emitters, or “impurities" embedded in a two-dimensional atomic array to serve as a platform for quantum computation. By exploiting the altered behavior of impurities as a result of the induced dipole-dipole interactions mediated by subwavelength array, we implement a set of universal quantum gates consisting of the √(iSWAP) and single-qubit rotations. We demonstrate that these gates have very high fidelities and coherence times, as long as the atoms remain within a proximal range. Finally, we implement quantum circuits leading to the generation of the maximally entangled two-qubit Bell states, as well as the entangled three-qubit GHZ state. These findings establish subwavelength emitter arrays as an alternative platform for quantum computation and quantum simulation. Quantum computing with subwavelength atomic arrays Susanne F. Yelin July 31, 2023 ================================================== § INTRODUCTION Recent experimental advances in atomic array configurations with subwavelength spacing and long-range dipole-dipole interactions <cit.> have opened up exciting possibilities for exploring novel physical phenomena, such as superradiance <cit.> and subradiance <cit.>, topological quantum optics <cit.>, efficient light-matter interactions <cit.> and enhanced photon storage capabilities <cit.>. Additionally, atomic arrays can be employed to modify the radiative environment and properties of impurity emitters, illustrated by the red atoms in Fig. <ref>. In that case, the array acts as a Markovian bath for the embedded impurities, which acquire a suppressed decay rate and exhibit long-range interactions mediated by the delocalized spin waves of the lattice <cit.>. In this manuscript, we explore the potential of subwavelength atomic arrays with embedded impurity emitters as candidates for quantum computation, a novel paradigm with multiple applications ranging from quantum simulation <cit.> to cryptography <cit.> and optimization <cit.>. The realization of universal quantum computation, that is, the ability to perform any combination of quantum operations efficiently <cit.> using highly scalable systems with exceptionally low errors, has been an elusive and long-standing goal <cit.>. A multitude of hardware platforms, including superconducting qubits <cit.>, neutral atoms <cit.>, ion-trapped systems <cit.>, and cold Rydberg atoms <cit.> have been the focus of extensive research. While each platform offers unique advantages, they also present their own limitations and challenges (decoherence, sensitivity to external noise, and scalability with increasing qubit number), which have slown the path towards quantum computing's full potential <cit.>. Here, we demonstrate that impurities embedded in closely-spaced atomic arrays can serve as qubits suitable for quantum computation and simulation protocols. For that, we show that the lattice-mediated interactions implement an iSWAP gate between the impurities. Individual addressing of each impurity further allows to suppress these interactions and thereby enables precise control over the iSWAP gate operation, as well as the implementation of the single-qubit X and Z rotations. Together, these three operations form a universal set of gates for quantum computation and simulation. Due to the high coherence and low error rates of these operations, they can be applied sequentially to simulate quantum circuits with large numbers of gates and qubits or impurities. The rest of the work is organized as follows. In Section <ref>, we introduce the system and derive the effective equations of motion of the embedded impurities. In Section <ref>, we demonstrate the implementation of the aforementioned universal gate set. We further illustrate the implementation of quantum circuits in this system by preparing a two-qubit Bell state and the three-qubit Greenberger–Horne–Zeilinger (GHZ) state in Section <ref>. These states serve as the universal computational primitive <cit.> and thus highlight the robustness and efficacy of our platform for implementing various quantum algorithms with high precision. § MODEL We consider a two-dimensional array of N_L two-level atoms with N_I embedded three-level impurity atoms, as depicted by the blue and red emitters in Fig. <ref>, respectively. The two levels of the array emitters are labeled as |G⟩ and |E⟩, and their transition frequency is ω_L = 2 π c/λ_L. We additionally consider a closely-spaced lattice, such that the lattice spacing a < λ_L. The three levels of the impurity atoms are the ground state |g⟩, the excited state |e⟩ and the high-energy metastable state (HMS) |r⟩, as shown in  fig:model. The frequency of the |g⟩↔|e⟩ and |g⟩↔|r⟩ are ω_I and ω_R, respectively. Both the lattice atoms and the impurities interact with the vacuum electromagnetic field. Applying the Born-Markov approximation, we trace out the radiation field and obtain an effective non-Hermitian Hamiltonian for the atoms only, H=H_L+H_I+H_LI+H_E. The lattice Hamiltonian Ĥ_L can be written as Ĥ_L= ∑_i^N_L(w_L-i/2γ _L)σ̂ _i^†σ _i+∑_i,j≠ i^N_L(J_ij-i/2Γ_ij )σ _i^†σ _j, where γ_L is the spontaneous decay rate of the lattice atoms and σ̂_i=|G_i⟩⟨E_i| is the lowering operator for lattice atom i. The eliminated vacuum field induces coherent and dissipative interactions between the emitters, which arise from virtual emission and reabsorption of photons. These couplings J_ij≡ J_ij(𝐫_i,𝐫_j) and Γ_ij≡Γ_ij(𝐫_i,𝐫_j) depend on the distance r_ij=|𝐫_i-𝐫_j| between atoms i and j, and are given by J_ij(_i,_j) = -3πγ_L/w_L𝐝_i^†· [G(r_ij, w_L)] ·𝐝_j, Γ_ij(_i,_j) = 6πγ_L/w_L𝐝_i^†· [G(r_ij,w_L)] ·𝐝_j. Here, G(r,w) is the dyadic Green's tensor <cit.>, and 𝐝_i is the atomic dipole moment of atom i. Similarly, the impurity Hamiltonian Ĥ_I reads H_I = ∑_α^N_I[(w_I-i/2γ _I)s_α^†s_α + (w_R-i/2γ _R)r_α^†r_α] +∑_α,β≠α^N_Iγ_I/γ_L (J_αβ-i/2Γ_αβ)s_α^†s_β, where γ_I and γ_R are the spontaneous decay rates of excited and HMS states of the impurities, and s_α=|g_α⟩⟨e_α| and r_α=|e_α⟩⟨r_α| are the lowering operators of the |g⟩↔|e⟩ and |e⟩↔|r⟩ transitions, respectively. We consider only the |g⟩↔|e⟩ transition to have a transition wavelength λ_I = 2 π c / ω_I of the order of the impurity distance. Thus, only this transition exhibits significant light-induced couplings J_αβ and Γ_αβ. Thus, the HMS states of different impurities do not interact with one another and can therefore be used to store the quantum state of the impurities for long times provided that γ_R ≪γ_I. Provided that ω_L ≈ω_I, the lattice and impurity emitters also undergo light-induced interactions given by H_LI =√(γ_I/γ_L)∑_i^N_L∑_α^N_I [(J_iα-i/2Γ_iα)σ_i^†s_α+(J_α i-i/2Γ_α i)s_α^†σ_i ]. Finally, we apply classical driving fields on the |g⟩↔|e⟩ and |e⟩↔|r⟩ transitions of the impurity atoms with frequencies ω_I and ω_f, respectively, H_E = -∑_α^N_I[ (Ω_α^* s_α^†+ Ω_α s_α)(e^-iω_It +e^iω_It) + (Ω_f α^* r_α^†+Ω_f α r_α)(e^-iω_f t +e^iω_f t)], where the strengths Ω_α= -𝐝_αε and Ω_f α= -𝐝_α fε_f are the product of the corresponding dipole moments and electric field amplitudes. For simplicity, we further consider that the system contains at most two excitations. Applying the Schrödinger equation, we can obtain the equations of motion of the amplitudes in each of the states of the lattice and impurity atoms, derived in Appendix <ref>. If γ_L≫γ_I, the lattice atoms act as a Markovian bath for the impurities and can therefore be adiabatically eliminated, as shown in Ref. <cit.> and derived in Appendix <ref>. We then obtain a reduced set of equations for the N_I impurity emitters only. Their wavefunction is then simply given by |Ψ(t)⟩ = a(t)|g⟩ + ∑_α^N_Ic_α(t)e^-iw_It|e_α⟩ + ∑_α^N_If_α(t)e^-i(w_I+w_f)t|r_α⟩ + ∑_α,β≠α^N_IC_αβ(t)e^-2iw_It|e_α, e_β⟩ + ∑_α,β≠α^N_Iy_αβ(t)e^-i(2w_I+w_f)t|e_α, r_β⟩ + ∑_α,β≠α^N_I F_αβ(t)e^-i(2w_I+2w_f)t|r_α, r_β⟩, where |g⟩ denotes the state where all impurities are in ground state, |e_α⟩ the state where only impurity α is in the excited state, |e_α, e_β⟩ the state where only impurities α and β are in the excited state, and similarly for |r_α⟩, |r_α, r_β⟩ and |e_α, r_β⟩. The corresponding equations of motion applying the rotating wave approximation are ȧ= iΩ_α c_α(t), ċ_̇α̇(t)= iΩ_α^*a(t) +i[i/2γ_I-Σ_α]c_α(t)-i ∑_β≠α^N_I [Φ_αβ+ϕ_αβ] c_β(t)+iΩ_βC_αβ(t)+iΩ _f_αf_α(t), Ċ_αβ(t)= i[iγ_I-Σ_α-Σ_β]C_αβ(t)+iΩ_β^*c_α(t)+iΩ_α^*c_β(t)+iΩ_f αy_βα(t)+iΩ_f βy_αβ(t) -i ∑_ϵ≠α,β^N_I( [Φ_βϵ+ϕ_βϵ] C_αϵ + [Φ_αϵ+ϕ_αϵ] C_βϵ), ḟ_α(t)= i(δ_R+i/2γ _R)f_α(t) +iΩ_βF_αβ(t) +Ω_f α^*c_α(t), ẏ_αβ(t)= i[δ_R+i/2γ_I+i/2γ_R-Σ _α]y_αβ(t)+iΩ_β^*C_αβ(t) + i Ω_f_α F_αβ +i Ω^*_α f_β -i ∑_ϵ≠α,β^N_I [Φ_αϵ+ϕ_αϵ] y_ϵβ, Ḟ_αβ(t)= 2i(δ_R+i/2γ_R)F_αβ(t)+iΩ_f α^*y_αβ(t)+iΩ_f β^*y_βα(t), Here, Ω_α and Ω_f_α denote the driving fields that excite impurity α from the ground state to the excited state, and from the excited state to the HMS state, respectively. As illustrated in Fig. <ref>, we introduce δ_R=ω_f- (ω_R- ω_I). The term Φ_αβ = γ_I/γ_L(J_αβ-i/2Γ_αβ) describes the coherent and dissipative exchange of excitations between impurities α and β mediated by the electromagnetic field, and its value is therefore ∼γ_I. Additionally, the lattice also mediates interactions between impurities. Defining the light-induced interaction between impurity α and the lattice atoms as C_α, and the light-induced coupling between the lattice atoms as L, the lattice-mediated interactions between both impurities, ϕ_α,β, reads ϕ_αβ = C_β^† L^-1 C_α, C_α^† =√(γ_I/γ_L)( J_α1-i/2Γ_α1,⋯, J_α N-i/2Γ_α N), L = [ δ_LI+i/2γ_L ⋯ -J_1N+i/2Γ_1N; ⋮ ⋱ ⋮; -J_N1+i/2Γ_N1 ⋯ δ_LI+i/2γ_L ] . where δ_LI = ω_I - ω_L denotes the detuning between the lattice and impurity atoms. That is, ϕ_α,β describes an excitation transferred between both impurities through the normal modes of the lattice. Finally, Σ_α = ϕ_αα denotes the self-interaction of impurity α mediated by the lattice. Its real part, Re[Σ_α], modifies the transition frequency of the impurities. Since this term is equal for all impurities provided that the lattice is large enough, it can be simply cancelled out by redefining the frequency of the impurities. The imaginary part, however, alters the effective decay rate of the impurity emitters Γ_eff=γ_I-2Im[Σ_α]. That is, we can reduce the impurity decay rate and thus improve the coherence time when Im[Σ_α]>0. If all emitters are circularly polarized (𝐝_L=𝐝_I=(1,i,0)/√(2)), there exists an optimal detuning δ_LI between the impurity and lattice atoms for which Γ_eff≪γ_I and the impurities become long-lived <cit.>. In that case, the frequency of the impurities lies outside the energy band of the normal modes of the lattice and the impurity-impurity interactions are off-resonantly coupled by their guided, or non-decaying, modes. Crucially, this suppressed decay rate allows for longer time to perform quantum operations or gates, thereby increasing the maximum circuit-depth. As a result, we consider this configuration for the rest of this work. § UNIVERSAL SET OF GATES A universal set of gates is a group of operations that can implement any unitary transformation on a given number of qubits. For an ensemble of impurity atoms or qubits coupled to the atomic array, we can implement the universal gate set comprised of arbitrary single-qubit operations, using X and Z rotations, and the two-qubit gate √(iSWAP)  <cit.>. §.§ X rotation First, we implement a gate that allows any desired rotation around the X-axis of the single-qubit Bloch sphere. We perform the X_R rotation on impurity α by applying a strong resonant pulse Ω_α on it. This results in Rabi oscillations between the ground and excited state, given by the X_R (θ) rotation matrix X_R(θ) = [ cos(θ/2) isin(θ/2); isin(θ/2) cos(θ/2) ], where θ = 2Ω_ατ. A notable instance of this class of gates is the X-gate, also known as the NOT gate or bit-flip gate, which is equivalent to a π rotation and flips the state of a qubit from |g⟩ to |e_α⟩, and vice versa. Applying the pulse for τ=π/2Ω_α, we obtain an iX gate, which is equivalent to X gate up to a global phase. In fig:iswap(a), we plot the population of the ground and excited state of an impurity initially in the excited state after applying X_R for different values of t (or analogously θ). Since |Ω_α | ≫|Φ+ϕ | ∼γ_I, the time taken to implement the X gate is much shorter than the characteristic timescale of the array-mediated impurity-impurity interactions (∼γ_I), which have a negligible effect during the single-qubit operation. As a result, we attain large fidelities above 0.99 for circuit depths greater than 1000. §.§ Z rotation The Z_R rotation introduces a relative phase ϕ between ground and excited states of a single qubit or impurity, and is described by the matrix Z_R (θ) = [ e^-i θ 0; 0 1 ]. We can implement the Z_R gate in our system by applying a far-detuned driving field between the excited state and the HMS of a given impurity, such that δ_R≫Ω_f. If all other drives are zero, the amplitudes in these two states are ċ_̇α̇(t) =- Γ_eff/2 c_α(t)+iΩ_f αf_α(t) ḟ_̇α̇(t) =i(δ_R+ i/2γ_R)f_α(t)+iΩ_f α^*c_α(t) Since δ_R≫Ω_f α, γ_I, the fast-evolving HMS state can be adiabatically eliminated by setting ḟ_α→ 0. We then obtain ċ_̇α̇(t)=(-Γ_eff/2- i |Ω_f α|^2/δ _R)c_α(t). While the HMS state is never populated due to the off-resonance, it introduces a stark shift Ω_z=|Ω_f α|^2/δ _R to the excited state |c_α⟩. As a result, the excited state rotates at a faster frequency than the ground state and acquires a phase θ = Ω_z τ after a time τ, thereby implementing the the Z_R gate given by Eq. (<ref>). Setting τ=π/Ω_z, we can implement a Z gate, which corresponds to a π rotation around the z-axis of the Bloch sphere [fig:iswap(b)]. Finally, it is worth noting that Ω_z∼γ_L ≫γ_I. Consequently, the time required to implement the Z gate is again much shorter than the characteristic timescale of the light-induced impurity-impurity interactions. §.§ √(iSWAP) Gate √(iSWAP) is a two-qubit operation that results in universal for quantum computation when used together with the set of single-qubit rotations (see fig:iswap(c)) <cit.>. This two-qubit gate is described by the unitary matrix √(iSWAP)=[ 1 0 0 0; 0 1/√(2) i/√(2) 0; 0 i/√(2) 1/√(2) 0; 0 0 0 1 ]. For instance, applying an √(iSWAP) gate on the state |e_1,g_2⟩, produces the state 1/√(2)|e_1,g_2⟩+i/√(2) | g_1,e_2⟩. We implement the √(iSWAP) gate between two impurities of our system by setting all drives to zero, which ensures that the excitation number in the system is conserved. Then, the equations of motion for the system with two impurities is ġ(t) = 0, ċ_1(t)= -Γ_eff/2c_1(t)-i[Φ_12 +ϕ_12]c_2(t), ċ_2(t)= -Γ_eff/2c_2(t)-i[Φ_21 +ϕ_21]c_1(t), Ċ_12 = -Γ_eff C_12(t), where g denotes the total ground state, c_1 and c_2 the states with impurity 1 or 2 populated, and C_12 the state where both impurities are excited. Equation (<ref>) result in the √(iSWAP) gate given by Eq. (<ref>) after evolving for a time t=π/4(Φ_12 +ϕ_12) [see fig:iswap(c)]. Since both Φ_12 and ϕ_12 are of the order of γ_I, the time required to perform this gate is ∝γ_I^-1. That is, the √(iSWAP) gate is much slower than the single qubit gates. We can further define the fidelity of the iSWAP operation as the overlap between a target state |ψ_t⟩ and the one obtained by solving the full dynamics of the system |ψ_d⟩, ℱ = |⟨ψ_t | ψ_d ⟩|^2. In Fig. <ref>(d), we show the the error ϵ = 1 - ℱ after 100 consecutive operations of the iSWAP gate as a function of the distance between the impurities d for different lattice spacing a. In general, ϵ increases with a and d. If the impurities are in neighboring plaquettes (d=a), we can perform 100 iSWAP operations with small errors rates of about ϵ∼ 10^-3 and ϵ∼ 10^-4 for a=0.1λ_L and a=0.05λ_L, respectively. While this error remains relatively small for a=0.05 λ_L and d=4a, it approaches ϵ→ 1 if the lattice spacing is increased to a=0.1 λ_L. This sets the maximum distance between impurities for which the iSWAP gate can be applied with high fidelities, and therefore sets the fundamental limit of this platform. §.§ Decoupling Impurities To implement any arbitrary quantum circuit, we need to control the interactions between any two qubits or impurities during the whole process. In the platform at hand, however, the √(iSWAP) operation continuously acts between all impurities present and may introduce significant errors when performing the slow two-qubit √(iSWAP) gate. To avoid this, we require a method that decouples all qubits that are not involved in a specific operation. In what follows, we describe multiple techniques to decouple impurities that are in an arbitrary superposition of the ground and excited state. It is worth noting that such decoupling protocols only need to be implemented when performing an iSWAP gate. The single impurity gates X and Z are significantly faster than the characteristic timescale of the lattice mediated impurity-impurity interactions and are therefore unaffected by them. §.§.§ Decoupling impurity in the ground state First, we describe how to decouple an impurity that is in the ground, thus hindering population transfer from other impurities (turning off the iSWAP operation). a.) Detuning |g⟩↔ |e⟩ transition To decouple impurity α, we can apply a detuning to the |g⟩↔ |e⟩ of that impurity larger than the lattice-mediated impurity-impurity interactions, δ_α≫ | Φ+ϕ | ∼γ_I. This energy shift can be either applied to the ground or excited state, and can be respectively modeled by adding the term δ_α s_αs_α^† or δ_α s_α^†s_α in the impurity Hamiltonian given by Eq.(<ref>). As shown in Appendix <ref>, the excited state can for example be detuned by coupling it to an additional energy level with a strong drive, in a process analogous to electromagnetically induced transparency. b.) Population transfer to another HMS state Alternatively, we can transfer the excitation in the ground state to another auxiliary state that does not decay and does not interact with the remaining impurities via light-induced dipole-dipole interactions. Excitation exchange between both states can simply be performed with a π-pulse. §.§.§ Decoupling impurity in the excited state An impurity α in the excited state |e_α⟩ does not decay only at the optimal lattice-impurity detuning δ_LI. As opposed to decoupling an impurity in the ground state, we cannot detune the |g⟩↔ |e⟩ transition in this case, since it would result in a quick decay of the excited impurity. As a result, decoupling an excited impurity requires to transfer the excitation to another metastable state | r_α⟩, which again does not decay and does not interact with the other impurities. This is achieved by applying a π-pulse with strength Ω_f α≫ | Φ+ϕ | which promotes | e_α⟩→ | r_α⟩. §.§.§ Decoupling impurity in an arbitrary state If impurity α is in an arbitrary superposition of ground and excited state, we have to combine the methods described above to decouple it. We will do so by first transferring the excited part of the wave function to a metastable state and then detuning the |g⟩↔ |e⟩ transition. Note that any extra phase acquired by the impurity during this process can be simply compensated by applying a local Z-gate. § OPERATIONS The set of gates defined above is universal for quantum computation, and can be therefore used to construct any quantum operation or circuit <cit.>. As an example, we here demonstrate how to generate two- and three-qubit entangled states. §.§ Entangling two impurities: Bell state The Bell states, |Φ^±⟩ = 1/√(2)(|00⟩±|11⟩) and |Ψ^+⟩ = 1/√(2)(|10⟩±|01⟩), are four maximally entangled two-qubit states. That is, measuring of one of the qubits immediately collapses the wavefunction of the other qubit, regardless of the physical distance between them <cit.>. Figure. <ref>(a) displays the decomposition of the Bell state |Φ^+⟩ using the gate set available in our system. Assuming that only one of the two impurities is initially excited, one needs to applies a √(iSWAP) gate on both impurities, followed by a π/2-rotation around the Z-axis and an X gate on the second qubit. If the system contains extra impurities, we simply need to decouple them by applying a large detuning to render them out of resonance. For the three-impurity system shown in Fig. <ref>, the prepared state is thus |Φ^+⟩⊗ |g⟩. As shown in Fig. <ref>(b), the Bell state can be prepared with high fidelity (>0.999) for a= 0.1γ_I. The error originates predominantly from the √(iSWAP) gate, and follows a similar trend as shown in Fig. <ref>(d). Finally, it is worth noting that the remaining Bell states can be obtained by applying suitable combinations of a π/2-rotations around the Z axis and an X gate on the second qubit. §.§ Entangling three impurities: GHZ state We further demonstrate the implementation of the entangled three-qubit Greenberger-Horne-Zeilinger (GHZ) state, which is defined as |GHZ⟩ = 1/√(2)(| 000 ⟩ + | 111 ⟩) <cit.> and has potential applications in various areas such as quantum error correction, quantum teleportation, and quantum cryptography  <cit.>. The three-qubit GHZ state can be obtained from the Bell state |Φ^+⟩⊗ |g ⟩ prepared in Section <ref> by applying a CNOT gate with the second and third qubits as the control and target qubits, respectively. The CNOT gate can be efficiently decomposed into our system's gate set, as shown in the  fig:ghz(a). Finally, we illustrate the evolution of the state of the impurities during the protocol in  fig:ghz(b), where the initial state is taken to be |Φ^+⟩⊗ |g ⟩. § CONCLUSION We have demonstrated that the universal quantum gate set composed of the two-qubit √(iSWAP) gate and the single-qubit X and Z rotations can be readily implemented on impurity emitters embedded in atomic lattices with subwavelength spacing, and illustrate their operation by preparing two- and three-qubit entangled states. We have further show that this platform allows for very high gate fidelities, provided that the impurities are sufficiently close to one another. In real experimental implementations, however, additional errors may arise from various sources such as decoherence, environmental noise, and position or frequency disorder of the emitters. This work paves the way towards controlling and engineering long-range interactions between emitters <cit.>, and demonstrates the potential of using subwavelength arrays for quantum simulation <cit.> and computation <cit.>, as well as for quantum information storage and processing <cit.>. In future works, more sophisticated protocols such as optimization <cit.> and error-correction <cit.> could also be explored on this platform. This theoretical proposal may be implemented using ultracold atoms trapped using optical lattices <cit.>, optical tweezers <cit.> or metasurface holographic optical traps <cit.>. The main experimental difficulty of the proposed protocol lies in individually addressing each impurity emitter. This feature, which is needed to perform the single-qubit gates and to turn on and off the two-qubit iSWAP gate, requires a narrow focusing or localization of the driving fields to focal points smaller than an optical transition wavelength. While this is challenging in free space platforms such as atomic arrays, the difficulty could be circumvented by extending this proposal to other types of baths or structures capable of mediating light-induced interactions, such as one-dimensional waveguides <cit.> or cavities <cit.>. These platforms provide controllable strong couplings between emitters separated by distances much larger than the resonant wavelength <cit.>, and can thus enable individual addressing of each impurity, as well as the design of more versatile quantum circuits involving gates between farther-placed emitters. O.R.B. acknowledges support from Fundación Mauricio y Carlota Botton and from Fundació Bancaria “la Caixa” (LCF/BQ/AA18/11680093). SFY would like to acknowledge funding from NSF through the CUA PFC, PHY-2207972 and the QSense QLCI as well as from AFOSR. apsrev4-1-title § EQUATIONS OF MOTION The wavefunction of the full system (impurities and lattice atoms) containing at most two excitations is given as, |Ψ_1(t)⟩ = |Ψ(t)⟩⊗ |G⟩+ ∑_i^N_Lb_i(t)e^-iw_It|E_i, g⟩ +∑_i^N_L∑_α^N_Iv_iα(t)e^-2iw_It|E_i, e_α⟩ +∑_i^N_L∑_α^N_Iz_iα(t)e^-i(2w_I+w_f)t|E_i, r_α⟩, where |Ψ(t) ⟩ corresponds to the wavefunction defined in Eq. (<ref>), and |E_i,e_α⟩, for example, denotes the state where both impurity α and the lattice atom i are excited. Note that we neglect the contribution from the states |E_i,E_j⟩ where two lattice atoms are simultaneously excited. As shown in Appendix <ref>, the population in the states |E_i,E_j⟩ is strongly suppressed and can therefore be safely neglected. The dynamics of the system are obtained by solving the Schrödinger equation using the wavefunction in eqn:lattice_EOMS, ȧ =∑_α^N_IiΩ_αc̃_α(t), ḃ_i(t) = i(δ_LI+i/2γ_L)b_i(t)-i∑_j≠ i^N_L (J_ij-i/2Γ_ij)b_j(t)-i√(γ_I/γ_L)∑_α^N_I(J_iα-i/2Γ _iα)c_α(t)+i∑_α^N_IΩ_αv_iα, ċ_α(t) = -γ _I/2c_α(t)-i√(γ _I/γ_I)∑_i^N_L(J_α i-i/2Γ _α i)b_i(t)-iγ _I/γ _L∑_β≠α^N_I(J_αβ-i/2Γ_αβ)c_β(t)+iΩ_α^*a(t)+iΩ _f_αf_α(t) +i∑_β≠α^N_IΩ_β^*C_αβ(t), ḟ_α(t) = i(δ_R+i/2γ _R)f_α(t) +i∑_β≠α^N_IΩ_βF_αβ(t) +Ω_f α^*c_α(t), v̇_iα(t) = i(δ_LI+i/2γ _L+i/2γ _I)v_iα(t)-i∑_j≠ i^N_L(J_ij-ii/2Γ_ij)v_jα(t)-i√(γ_I/γ_L)∑_β≠α^N_I(J_iβ-i/2Γ _iβ)C_αβ(t) -iγ_I/γ_l∑_β≠α^N_I(J_αβ-i/2Γ_αβ)v_iβ(t)+iΩ_α^*b_i(t)+iΩ_f αz_iα(t), Ċ_αβ(t) =-γ_I C_αβ(t)-i√(γ _I/γ_L)∑_i^N_L(J_α i-i/2Γ_α i)v_iβ(t)-i√(γ _I/γ_L)∑_i^N_L(J_β i-i/2Γ_β i)v_iα(t)+iΩ_β^*c_α(t)+iΩ_α^*c_β(t) +iΩ_f αy_βα(t)+iΩ_f βy_αβ(t), ż_iα(t) =i(δ_LI+δ_R+i/2γ_L+i/2γ_R)z_iα(t)-i∑_i≠ j^N_L(J_ij-i/2Γ_ij)z_jα(t)-i√(γ_I/γ_L)∑_β≠α^N_I(J_iβ-i/2Γ_iβ)y_βα(t)+iΩ_f α^*v_iα(t), ẏ_αβ(t) = i(δ_R+i/2γ_I+i/2γ_R)y_αβ(t)-i√(γ_I/γ_L)∑_i^N_L(J_α i-i/2Γ_α i )z_iβ(t)+iΩ_f β^*C_αβ(t)+iΩ_f αF_αβ(t)+iΩ_α^*f_β(t), Ḟ_αβ(t) = 2i(δ_R+i/2γ_R)F_αβ(t)+iΩ_f α^*y_αβ(t)+iΩ_f β^*y_βα(t). Considering γ_L≫γ_I, the characteristic time scale of the lattice atoms is much faster than that of the impurities. Further assuming Ω_α≪γ_L, we can adiabatically eliminate the states containing an excitation in the lattice atoms. In other words, the lattice immediately reaches its steady state after any slow variation of the impurities. Defining the vectors b⃗ = [b_1, , b_N_L]^T and v⃗_α = [v_1α, , v_N_L α]^T, we obtain ḃ⃗̇=0 → b⃗= ∑_α L^-1( C_αc_α(t) - Ω_αv⃗_α), where L is an N_L × N_L matrix characterizing the interactions between lattice atoms and C_α is an N_L × 1 vector describing the couplings between impurity α and the lattice, both defined in Eq. (<ref>). Since v⃗_α and z⃗_α = [z_1α, , z_N_L α]^T also contain factors proportional to γ_L, they can also be adiabatically eliminated. Keeping to terms to leading order in the small parameter γ_I/γ_L, we obtain v̇⃗̇_α =0 → v⃗_α=L^-1 C_βC_αβ(t), ż⃗̇_α =0 → z⃗_α=L^-1 C_βy_βα(t). Substituting these values in Eq. (<ref>), we find that the term proportional to v⃗_α is suppressed with respect to that proportional to c_α by a factor Ω_α/γ_L, and can thus be neglected. Finally, we substitute these instantaneous steady state amplitudes for b⃗, v⃗_α and z⃗_α in eqn:EOMs and obtain the effective equations of motion for the impurity emitters only, given in eqn:reduced_EOMS. Finally, it is worth noting that we work in the regime Ω_α∼γ_L when implementing the single-qubit gates described in Section <ref>. In that case, the driving strength in the impurity is much larger than the light-mediated lattice-impurity interactions, which can therefore be safely neglected for timescales τ∼γ_L^-1 << γ_I^-1. In other words, the impurity and lattice atoms simply decouple in this regime. § EFFECT OF MULTIPLE EXCITATIONS IN LATTICE ATOMS In Appendix <ref>, we neglect the contribution from the states containing two excitations in the lattice atoms. Here, we justify this approximation. For that, let us first consider a single impurity embedded in an atomic array. If no driving fields are applied, the wavefunction simply reads |ψ'⟩ = c_α|e_α⟩ + ∑_i^N_Lb_i|E_i⟩. Adiabatically eliminating the lattice atoms, we obtain b⃗ =L^-1Ĉ_αc_α. Then, ⟨ψ'|ψ'⟩ =(1+b⃗^†b⃗)|c_α|^2→ |c_α^2|=1/1+b⃗^†b⃗ Above the band edge region (that is, if the impurity is off-resonant with all normal modes of the lattice), the impurity population |c_α^2| is the vast majority of the wavefunction and the population in the lattice atoms b_i is hence negligibly small. Performing a similar analysis for a two-impurity system, we can conclude that the terms b_ij containing both excitations in the lattice are even further suppressed. We can numerically confirm this intuition by simulating the dynamics of the full system. For that, we write down the full wavefunction containing up to two excitations, |Ψ_2(t)⟩= |Ψ_1(t) ⟩ + ∑_i^N_L∑ _j≠ i^N_Ib_ij(t)e^-2iω_I t | E_i,E_j,g⟩, where |Ψ_1(t) ⟩ is given in Eq. (<ref>). Applying the Schrodinger equation, we find the equations of motion for the terms b_ij, ḃ_ij(t)= 2i(δ_LI+i/2γ_L)b_ij(t)-i∑_i≠ j^N_L[(J_ij-i/2Γ_ij)b_ij(t)+(J_ji-i/2Γ_ji)b_ji(t)]-i√(γ_I/γ_L)∑_i≠ j^N_L∑_β≠α^N_I(J_iα-i/2Γ_i α)v_i α(t). In fig:app, we compare the dynamics resulting from the full set of equations containing two excitations in the lattice (solid lines) with the approximated version used in Appendix <ref> that neglects the terms b_ij (dashed lines). The population in the impurity emitters, as well as in the terms containing one excitation in the lattice and one in an impurity (∑_i,α |v_i α|^2), are approximately equal in both cases. As expected, the population ∑_i,j |b_i j|^2 is heavily suppressed and can thus be safely neglected. It is worth noting that this approximation largely reduces the computational power required to simulate the dynamics of the system. § EQUATIONS OF MOTION FOR THREE EXCITATIONS To implement the entangled Bell and GHZ states in Section <ref>, we need to consider states containing up to three excitations. Then, the total wavefunction takes the form |Ψ_2(t)⟩ = |Ψ_1(t)⟩+ ∑ ^N_I_α,β,ϵ≠α,βC_αβϵ(t)e^-3iw_It|G,e_α,e_β,e_ϵ⟩ + ∑ ^N_I_α,β,ϵ≠α,βF_αβϵ(t)e^-i3(w_I+w_R)t|G,r_α,r_β,r_ϵ⟩ + ∑ ^N_L_i∑ ^N_I_α,β≠αv_iαβ(t)e^-i3w_It|E_i,e_α,e_β⟩+∑ ^N_L_i∑ ^N_I_α,β≠αz_iαβ(t)e^-i(3w_I+2w_R)t|E_i,r_α,r_β⟩ +∑ ^N_I_α,β,ϵ≠α,βE_αβϵ(t)e^-i(3w_I+w_R)t |r_α,e_β,e_ϵ⟩ +∑_α,β,ϵ≠α,βG_αβϵ(t)e^-i(3w_I+2w_R)t|e_α,r_β,r_ϵ⟩ + ∑_i^N_L∑_α,β≠α^N_IR_iαβ(t)e^-i(3w_I+w_R)t|E_i,r_α,e_β⟩. Using the Hamiltonian in Eq. (<ref>) and the Schrödinger equation, we obtain the equations of motion describing the full system. For concision, we only provide the equations that differ from those in eqn:EOMs, Ċ_αβ(t)= ⋯+i∑_ϵ≠α,β^N_IΩ_ϵC_αβϵ(t), ż_iα(t)= ⋯ +i∑_β≠α^N_IΩ_βR_iαβ(t), ẏ_αβ(t)= ⋯ +i∑_ϵ≠α,β^N_IΩ_ϵE_αβϵ(t), Ḟ_αβ(t)= ⋯ +i∑_ϵ≠α,β^N_IΩ_ϵG_ϵαβ(t), Ċ_αβϵ(t)= -3/2γ_IC_αβϵ(t)+∑^N_L_i∑ _α^N_I(J_α i-i/2Γ_α i)v_iβϵ(t)+i∑_α^N_IΩ_α^*C_βϵ(t)+i∑_α^N_LΩ_f αE_αβϵ(t), Ṙ_αβϵ(t)= 3i(δ_R+i/2γ _R)R_αβϵ(t)+i∑_α^N_LΩ_f α^*G_αβϵ(t), v̇_iαβ(t)= i(δ_LI+i/2γ_L+iγ_I)v_iαβ(t)-iγ_I/γ_L∑^N_I_α,ϵ≠α,β(J_αϵ-i/2Γ_αϵ)v_iϵβ(t)-iγ_I/γ_L∑_ϵ≠αβ^N_I(J_iϵ-i/2Γ_iϵ)C_αβϵ(t) -i∑_j≠ i^N_L(J_ij-i/2Γ_ij)v_jαβ(t) +i∑ _α^N_IΩ_α^*v_iα(t)+i∑_f_α^N_IΩ_f αR_iαβ(t), ż_iαβ(t)= i(δ_LI+i/2γ_L+2δ_R+iγ_R )z_iαβ(t)-iγ_I/γ_L∑_ϵ≠αβ^N_I(J_iϵ-i/2Γ_iϵ)G_ϵαβ(t)-i∑_j≠ i^N_L(J_ij-i/2Γ_ij)z_jαβ(t) +i∑_f_α^N_IΩ_f α^*R_iβα(t), Ė_αβϵ(t)= i(δ_R+i/2γ_R+iγ_I)E_αβϵ(t)-i√(γ_I/γ_L)∑_i^N_L∑_β^N_I(J_β i-i/2Γ_β i)R_iαϵ(t)+iΩ_f α^*C_αβϵ+i∑_β≠α^N_IΩ_β^*y_βα(t)+i∑_β≠α^N_IΩ_f βG_ϵβα(t), Ġ_αβϵ(t)= i(2δ_R+iγ_R+i/2γ_I)G_αβϵ(t)-i√(γ_I/γ_L)∑_i^N_L(J_α i-i/2Γ_α i)Z_iβϵ(t)+iΩ_α^*F_βϵ+i∑_β≠α^N_IΩ_f β^*E_ϵβα(t)+iΩ_f αF_αβϵ(t), Ṙ_iαβ(t)= i(δ_LI+i/2γ_L+i/2γ_I+δ_R+i/2γ_R)R_iαβ(t)-i√(γ_I/γ_L)∑_j≠ i^N_L(J_ij-i/2Γ_ij)R_jαβ(t)-iγ_I/γ_L∑ _ϵ≠α,β^N_I(J_βϵ-i/2Γ _βϵ)R_iαϵ(t) -i√(γ_I/γ_L)∑ _ϵ≠α,β^N_I(J_iϵ-i/2Γ _iϵ)E_αβϵ(t)+iΩ_f α^*v_iαβ+i∑_β≠α^N_IΩ_βz_iα(t)+i∑_β≠α^N_IΩ_f βz_iαβ(t). Here, the three dots stand for the terms already present in eqn:EOMs. Note also that we have negelected all terms containing more than one excitation in the lattice atoms, as justified in Appendix <ref>. § DECOUPLING GROUND STATE IMPURITIES THROUGH ELECTROMAGNETICALLY INDUCED TRANSPARENCY To decouple an impurity in the ground state from the rest of the system, we simply need to detune its resonance frequency. This can be achieved by considering impurities with three levels: the ground state |g_α⟩, the dipole-coupled excited state |e_α⟩ and a non-interacting state |r_α⟩. For simplicity, let us consider a two-impurity system such that impurity 2 is in the excited state |e_2⟩ and impurity 1 is in the ground state |g_1⟩. To suppress population transfer from impurity 2 to 1, we apply a drive on impurity 1 with strength much larger than the lattice-mediated interactions, |Ω_f 1 | ≫ | Φ_12+ϕ_12 | ∼γ_I. The dynamics of the system are then governed by the following Hamiltonian Ĥ = -i Γ_eff/2( |e_1⟩⟨ e_1 | + |e_2⟩⟨ e_2 | ) +(Φ_12+ϕ_12) ( | e_1⟩⟨ e_2| + | e_2⟩⟨ e_1| ) -i γ_R/2( |r_1⟩⟨ r_1| +|r_2⟩⟨ r_2| ) -Ω_f 1( |e_1⟩⟨ r_1|+ |r_1⟩⟨ e_1| ), where we have assumed that the drive is on resonance with the |e_1 ⟩↔ |r_1⟩ transition, and that its Rabi frequency Ω_f 1 is real. Due to the strong drive, the coupling between both impurities can be treated perturbatively. Further assuming that Γ_eff, γ_R ≪ | Φ_12+ϕ_12 | ≪Ω_f1, we can neglect the decay rates. Then, the eigenstates of impurity 1 are approximately the dressed states |χ_±⟩≈ (|e_1⟩± |r_1⟩)/√(2), which are respectively shifted by ∓Ω_f1 from the resonance frequency, and the Hamiltonian in Eq. <ref> to lowest order in the perturbation to Ω_f1 simply reads Ĥ ≈ - Ω_f 1 |χ_+ ⟩⟨χ_+ | + Ω_f 1 |χ_- ⟩⟨χ_- | + 1/√(2)(Φ_12+ϕ_12) ∑_ν=±( |χ_ν⟩⟨ e_2 | + |e_2⟩⟨χ_ν | ), where we have omitted the uncoupled state |r_2⟩. That is, the coupling between the excited state of impurity 2, |e_2 ⟩, and the normal modes of impurity 1, |χ_±⟩ is much smaller than the frequency difference between these states. This strong off-resonance strongly suppresses the population transfer between both emitters, such that impurity 1 remains in the ground state and is effectively decoupled from the remaining impurities. We can alternatively understand this phenomenon as an interference effect analogous to electromagnetically induced transparency (EIT). For that, it is enough to note that the state |e_1⟩ is coupled by Ĥ to the superposition |B⟩∝Ω_f1 |r_1⟩ - (Φ_12+ϕ_12) |e_2⟩, and is consequently decoupled from the orthogonal state, |D⟩∝Ω_f1 |e_2⟩ + (Φ_12+ϕ_12) |r_1⟩. That is, the interference between the drive to |r_1⟩ and the light-induced coupling to |e_2⟩ renders |e_1⟩ dark with respect to |D⟩. For |Ω_f 1 | ≫ | Φ_12+ϕ_12 |, the dark or uncoupled state takes the form |D⟩≈ |e_2⟩, and an excitation initially in impurity 2 is not transfered to impurity 1. In other words, the √(iSWAP) operation |e_2⟩→|e_1⟩ is no longer permitted.
http://arxiv.org/abs/2306.05498v1
20230608184242
Monte Carlo inference for semiparametric Bayesian regression
[ "Daniel R. Kowal", "Bohan Wu" ]
stat.ME
[ "stat.ME", "math.ST", "stat.CO", "stat.ML", "stat.TH" ]
T ∼̇ ℝ 𝒞 𝒟 ℰ ℕ 𝒵 𝔹 ℙ Τ𝒯 𝐅 ℱ ℳ 𝒰 𝒳 𝒴 Σ_ϵ iid∼ Monte Carlo inference for semiparametric Bayesian regression Daniel R. KowalDobelman Family Assistant Professor, Department of Statistics, Rice University (mailto:[email protected]@rice.edu). and Bohan WuPhD student, Department of Statistics, Columbia University (mail.to:[email protected]@columbia.edu) ================================================================================================================================================================================================================================================================================= Data transformations are essential for broad applicability of parametric regression models. However, for Bayesian analysis, joint inference of the transformation and model parameters typically involves restrictive parametric transformations or nonparametric representations that are computationally inefficient and cumbersome for implementation and theoretical analysis, which limits their usability in practice. This paper introduces a simple, general, and efficient strategy for joint posterior inference of an unknown transformation and all regression model parameters. The proposed approach directly targets the posterior distribution of the transformation by linking it with the marginal distributions of the independent and dependent variables, and then deploys a Bayesian nonparametric model via the Bayesian bootstrap. Crucially, this approach delivers (1) joint posterior consistency under general conditions, including multiple model misspecifications, and (2) efficient Monte Carlo (not Markov chain Monte Carlo) inference for the transformation and all parameters for important special cases. These tools apply across a variety of data domains, including real-valued, integer-valued, compactly-supported, and positive data. Simulation studies and an empirical application demonstrate the effectiveness and efficiency of this strategy for semiparametric Bayesian analysis with linear models, quantile regression, and Gaussian processes. Keywords: Bayesian nonparametrics; Gaussian processes; Nonlinear regression; Quantile regression; Transformations. § INTRODUCTION Transformations are widely useful for statistical modeling and data analysis. A well-chosen or learned transformation can significantly enhance the applicability of fundamental statistical modeling frameworks, such as Gaussian models <cit.>, linear and nonlinear regression models <cit.>, survival analysis <cit.>, discriminant analysis <cit.>, and graphical models <cit.>, among many other examples. This is especially true for data with complex marginal features, such multimodality, skewness, zero-inflation, and discrete or compact support, and for Bayesian probability models that must adapt to these features. Consider the transformed regression model for paired data _n = {(x_i,y_i)}_i=1^n with x_i ∈𝒳⊆ℝ^d and y_i ∈𝒴⊆ℝ: g(y_i) = z_i z_i indep∼ P_Z |θ, X = x_i where g: 𝒴→ℝ is a monotone increasing transformation and P_Z |θ, X is a covariate-dependent and continuous distribution with parameters θ∈Θ⊆^p. Informally, P_Z |θ, X may be considered the core statistical model, while the transformation g serves to potentially improve the adequacy of this model for the given data _n. When the transformation is unknown and P_Z |θ, X is a parametric model, then (<ref>)–(<ref>) is a semiparametric regression model. Identifiability will be imposed on P_Z |θ, X, typically by fixing the location and scale, and is discussed subsequently. We focus on Bayesian analysis of (<ref>)–(<ref>), but acknowledge the rich history of frequentist inference for transformation models, including monotone stress minimization <cit.> alternating conditional expectations <cit.>, additivity and variance stabilization <cit.>, transnormal regression models <cit.>, and many others. There are various important examples of (<ref>)–(<ref>): Consider the general regression model z_i = f_θ(x_i) + σϵ_i where f_θ is parametrized by θ∈Θ⊆ℝ^p. Typically, the errors are ϵ_i iid∼ N(0,1) and thus P_Z |θ, X = x_i= N(f_θ(x_i), σ^2). Although this model may be applied directly to y and offers flexibility via f_θ, the Gaussian assumption for the errors is often restrictive and inadequate. The modeler must then consider whether to revise f_θ, specify an alternative distribution for ϵ_i, or incorporate a transformation via (<ref>). We explore the latter option, and seek to provide excellent empirical performance, efficient algorithms, and strong theoretical guarantees for Bayesian inference. Bayesian quantile regression specifies an error distribution for (<ref>) such that f_θ(x) target the τth quantile of P_Z |θ, X. The most common choice is the asymmetric Laplace distribution (ALD) with density p_τ(ϵ) = τ(1-τ)exp{-ρ_τ(ϵ)} and ρ_τ(ϵ) = ϵ{τ - I(ϵ <0)} is the check loss function <cit.>. However, the ALD is often a poor model for data, especially when τ is close to zero or one (see Section <ref>). A transformation can alleviate such inadequacy. The τth quantile of z corresponds to the g^-1(τ)th quantile of y, so f_θ(x) maintains interpretability and the transformed regression model readily provides quantile estimates for y at x. In survival analysis, it is common to express a time-to-event variable y_i > 0 in the form of (<ref>)–(<ref>), referred to as a (linear) transformation model <cit.>. Specific modifications of (<ref>) with f_θ(x) = -x^θ and σ =1 produce popular semiparametric models. First, the proportional hazards model uses subject-specific hazard function λ_i(t) = λ_0(t) exp(x_i^θ), where λ_0 is the (nonparametric) baseline hazard function and S_0(t) = exp{-∫_t^∞λ_0(r) dr} is the baseline survival function. The proportional hazards model is obtained from (<ref>) when ϵ_i follows the extreme-value distribution, p(ϵ) = exp(ϵ)exp{-exp(ϵ)}, while the transformation is linked to the baseline functions via g(t) = log[-log{S_0(t)}] = log∫_0^t λ_0(r). Alternatively, the proportional odds model uses the subject-specific odds function ω_i(t) = (y_i ≤ t){1 - (y_i ≤ t)}^-1 = ω_0(t) exp(-x_i^θ), where ω_0 is the (nonparametric) baseline odds function. This model is obtained from (<ref>) when ϵ_i follows the standard logistic distribution, p(ϵ) = exp(-ϵ){1+ exp(-ϵ)}^-2, and the transformation is g(t) = logω_0(t). These are foundational models in survival analysis, and rely critically on the transformation g to infer the nonparametric baseline functions. In each example, the model P_Z |θ, X is supported on ℝ. A critical role of the transformation (<ref>)—besides adding modeling flexibility for the marginal distribution of y—is to deliver probabilistic coherency for the support 𝒴, such as integer-valued data = ℤ, compactly-supported data = [0,1], or positive data = ℝ^+. This broadens the applicability of each continuous model P_Z |θ, X. Here, we assume that 𝒴 is continuous. The integer case requires special considerations for discrete data addressed in <cit.>; however, that work focuses exclusively on a Gaussian linear model for (<ref>) and uses a prior-based approximation strategy that cannot provide a proper notion of posterior consistency. Our analysis is significantly broader and more robust, and includes new methods and theory for general settings plus tailored algorithms and detailed analysis for linear variable selection, quantile regression, and Gaussian processes. For Bayesian analysis, an unknown transformation must be modeled and accounted for with the joint posterior distribution p(g, θ|_n). However, common Bayesian models for g are either unnecessarily restrictive, computationally challenging and inefficient, or unwieldy for theoretical analysis. Parametric specifications of g such as the extended Box-Cox family <cit.> are widely popular, especially in conjunction with regression or Gaussian process models for (<ref>) <cit.>. These parametric transformations are restrictive and unsuitable for certain domains, including integer-valued or compactly-supported data. Despite their simplicity, these approaches often remain computationally inefficient, especially for Markov chain Monte Carlo (MCMC) sampling. Alternatively, g may be modeled nonparametrically, including Gaussian processes <cit.>, mixtures of incomplete beta or hyperbolic functions <cit.>, splines <cit.>, normalizing flows <cit.>, or compositions <cit.>. Each of these models requires constraints to ensure monotonicity of g. More critically, these approaches do not provide easy access to the joint posterior p(g, θ|_n). Posterior modes and variational approximations are often inadequate for reliable uncertainty quantification <cit.>, while commonly used MCMC algorithms are complex and often inefficient, usually with Gibbs sampling blocks for [g |_n, θ] and [θ|_n, g]—and often with sub-blocks for the coefficients that determine g. These additional blocks reduce Monte Carlo efficiency and can be difficult to implement and generalize. These factors limit the utility of existing approaches for semiparametric Bayesian regression via (<ref>)–(<ref>). The proposed approach bears some resemblance to copula-based methods that decouple marginal and joint parameter estimation, called inference function for margins <cit.>. That framework uses a two-stage point estimation that is inadequate for joint uncertainty quantification, and predominantly is limited to copula models. <cit.> introduced a Bayesian analog, which uses an empirical likelihood approximation with MCMC. <cit.> and <cit.> applied copula models for regression analysis, but again relied on MCMC for posterior and predictive inference. Alternatively, the rank likelihood <cit.> eschews estimation of g and provides inference for θ based only on the ranks of y. However, this approach does not produce a coherent posterior predictive distribution, requires computationally demanding MCMC sampling, and primarily focuses on copula models <cit.>. This manuscript introduces a general methodological and computational framework for Bayesian inference and prediction for the transformed regression model (<ref>)–(<ref>). The proposed approach is easy to implement for a variety of useful regression models and delivers efficient Monte Carlo (not MCMC) inference (Section <ref>). Empirically, this framework improves prediction, variable selection, and estimation of g for Bayesian semiparametric linear models; provides substantially more accurate quantile estimates and model adequacy for Bayesian quantile regression; and increases predictive accuracy for Gaussian processes (Section <ref>). Our theoretical analysis establishes and characterizes posterior consistency under key settings, including multiple model misspecifications (Section <ref>). Some limitations are addressed in Section <ref>. Supplementary material includes proofs of all results, additional simulation results, and reproducible code. § METHODS §.§ General case Our approach builds upon the decomposition of the joint posterior distribution p(g, θ|_n) = p(g |_n) p(θ|_n, g) under model (<ref>)–(<ref>). The first term, p(g |_n) = ∫_Θ p(g, θ|_n) dθ, presents the more significant challenge for Bayesian modeling, computing, and theory, and is the focus of this paper. The second term is more straightforward: p(θ|_n, g) is equivalent to the posterior distribution of θ under model (<ref>) using data z_i = g(y_i) with known transformation g. Thus, the presence of the transformation does not introduce any additional challenges for this term: the conditional posterior p(θ|_n, g) = p(θ|{x_i, g(y_i)}_i=1^n) is well-studied for many choices of (<ref>), and often is available in closed form or can be accessed using existing algorithms for models of the form P_Z |θ, X. The central idea is to identify g using the marginal distributions P_Y and P_X of the dependent and independent variables, respectively, under the assumed model (<ref>)–(<ref>). We target the posterior distributions of these relevant quantities, which induces a posterior distribution on g. Specifically, the marginal posterior (predictive) distributions satisfy F_Y|_n(t) = (y ≤ t |_n) = {z ≤ g(t) |_n} = F_Z |_n{g(t)}, where the marginal posterior (predictive) distribution of z is F_Z|_n(t) = ∫_𝒳∫_Θ F_Z |θ, X=x(t) p(θ|_n ) p(x |_n) dθ d x = ∫_𝒳 F_Z | X=x(t) p(x |_n) d x and F_Z |θ, X is the cumulative distribution function of P_Z |θ, X in (<ref>), F_Z | X(t) = ∫_Θ F_Z |θ, X(t) p(θ|_n) d θ, and we have assumed independence between θ and X. Thus, the transformation is g(t) = F_Z |_n^-1{F_Y|_n(t)}, which directly depends on the marginal (posterior predictive) distributions P_Y and P_X and the model (<ref>) via (<ref>). The monotonicity of g in (<ref>) is guaranteed by construction and requires no additional constraints. An analogous prior (predictive) version of (<ref>) may be constructed similarly. We induce a model for g by placing distinct marginal models on P_Y and P_X. A natural strategy is to deploy Bayesian nonparametric models for the marginal distributions P_Y and P_X, which provide a variety of highly flexible and well-studied options <cit.>. However, we also prioritize modeling strategies that admit efficient computing and convenient theoretical analysis for g, and must carefully consider the key equations (<ref>) and (<ref>). Although p(θ|_n) appears in (<ref>), our approach is carefully constructed to be robust to approximations of this term (Section <ref>); for instance, even using the prior p(θ) in defining F_Z |_n produces highly competitive empirical results (Section <ref>). Asymptotic robustness is studied in Section <ref>. First consider P_X. This term is a nuisance parameter: it is required only for the marginalization in (<ref>) but does not otherwise appear in the model. Our default recommendation for P_X is the Bayesian bootstrap <cit.>. The Bayesian bootstrap may be constructed by placing a Dirichlet process prior over 𝒳 and taking the limit as the concentration parameter goes to zero. Although more complex models for P_X are available, such as Bayesian copula models for mixed data types <cit.>, such specifications may be unnecessarily complex, require customization and diagnostics, and impede computational performance. By comparison, the Bayesian bootstrap requires no tuning parameters, applies for mixed data types, and offers substantial modeling flexibility. In particular, the Bayesian bootstrap for P_X delivers an exceptionally convenient and efficient Monte Carlo sampler for the posterior of F_Z|_n in (<ref>) (Algorithm <ref>). Because Algorithm <ref> features Monte Carlo rather than MCMC, it avoids the need for lengthy runs and convergence diagnostics, yet still controls the approximation error via the number of simulations. The critical term F_Z| X is discussed thoroughly in Section <ref> and the subsequent examples. Now consider P_Y. Many options from Bayesian nonparametrics are available, and at minimum must respect the support 𝒴. Our default specification is again the Bayesian bootstrap, for which the posterior distribution F_Y |_n is accessed using Monte Carlo sampling (Algorithm <ref>). We present our main algorithm for joint posterior and predictive inference in Algorithm <ref>. Here, ỹ(x) is a posterior predictive variable from p{ỹ(x) |_n}, i.e., the posterior distribution of future or unobserved data at x according to the model (<ref>)–(<ref>). Algorithm <ref> is a Monte Carlo sampler for (g, θ, ỹ(x)) jointly whenever the sampler for p(θ|_n, g = g^*)—equivalently, the posterior distribution from model (<ref>) using data z_i^* = g^*(y_i)—is Monte Carlo, which is available for certain Gaussian <cit.>, probit <cit.>, multinomial probit <cit.>, and count data <cit.> regression models, among other special cases. Even when approximate sampling algorithms are required for the posterior of θ, Algorithm <ref> crucially avoids a Gibbs blocking structure for [g |_n, θ] and [θ|_n, g], which distinguishes the proposed approach from existing sampling algorithms. The monotonicity of each sampled g^* is guaranteed by construction. The role of n(n+1)^-1 is to avoid boundary issues: when the latent data model (<ref>) is supported on , F̃_Z|_n^-1(1) = ∞ for any t such that F̃_Y|_n(t) = 1. Under the Bayesian bootstrap for P_Y, this occurs for t ≥max{y_i} and thus cannot be ignored. The rescaling eliminates this nuisance to ensure finite g but is asymptotically negligible. The predictive sampling step requires application of (g^*)^-1(s) = F̃_Y|_n^-1{F̃_Z|_n(s)}. Thus, the posterior predictive distribution matches the support of P_Y. However, the Bayesian bootstrap for P_Y is supported only on the observed data values {y_i}_i=1^n, even though 𝒴 is continuous. We apply a monotone and smooth interpolation of g^* <cit.> prior to computing its inverse, which only impacts the predictive sampling step—not the sampling of θ—yet expands the support of ỹ(x) to [min(y), max(y)]. These endpoints may be extended as appropriate. §.§ Correction factors and robustness Algorithm <ref> faces two noteworthy obstacles. First, the sampler for p(g |_n) does not use the exact likelihood under model (<ref>)–(<ref>). We characterize this discrepancy with the following result. Suppose that F_Y and F_Z are continuous distribution functions with densities that exist. Under model (<ref>)–(<ref>) with θ∼ p(θ) and x_i ∼ P_X independently, the likelihood for g is p(y_1,…, y_n | g) = 𝔼_θ [∏_i=1^n p_Z |θ{g(y_i)} ] /∏_i=1^n 𝔼_θ[p_Z |θ{g(y_i)}] ∏_i=1^n p_Y(y_i) = ω(y | g)∏_i=1^n p_Y(y_i) where 𝔼_θ is the expectation under p(θ) and ω(y | g) is a correction factor. The correction factor ω(y | g) appears because model (<ref>)–(<ref>) implies (marginal) exchangeability but not independence for {y_i}_i=1^n. Algorithm <ref> intentionally omits ω(y | g) and instead uses only ∏_i=1^n p_Y(y_i), which we refer to as the surrogate likelihood for g. The remaining sampling steps use the correct likelihood. This strategy is fruitful: it delivers efficient Monte Carlo (not MCMC) sampling (Algorithm <ref>) and consistent posterior inference for g (Section <ref>), which further indicates that the omitted term ω(y | g) is indeed asymptotically negligible. It is possible to correct for the surrogate likelihood using importance sampling. First, we apply Algorithm <ref> to obtain S draws {g^s, θ^s, ỹ^s(x)}_s=1^S∼ p{g, θ, ỹ(x) |_n}. Using this as the proposal distribution, the importance weights are ω(y | g), and may be used to estimate expectations or obtain corrected samples via sampling importance resampling. The latter version draws indices {s_1^*, …, s_S^*^*}⊂{1,…, S} with replacement proportional to ω(y | g^s) and retains the subsampled draws {g^s, θ^s, ỹ^s(x)}_s=s_1^*^s_S^*^* with S^* < S. However, our empirical results (see the supplementary material) suggest that even for n=50, this adjustment has minimal impact and is not necessary to achieve excellent performance. The second challenge pertains to F_Z |_n in (<ref>) and Algorithm <ref>, which depends on p(θ|_n). At first glance, this is disconcerting: the posterior of θ under model (<ref>)–(<ref>)—unconditional on the transformation g—is not easily accessible. However, a critical feature of the proposed approach (Algorithm <ref>) is that posterior and predictive inference is remarkably robust to approximations of p(θ|_n) in (<ref>). In particular, this quantity is merely one component that defines g in (<ref>), while the remaining posterior and predictive sampling steps in Algorithm <ref> use the exact (conditional) posterior p{θ, ỹ(x) |_n, g}. In fact, we show empirically that using the prior p(θ) as a substitute for p(θ|_n) in (<ref>) yields highly competitive results, even with noninformative priors (Section <ref>). The general idea is to substitute an approximation p̂(θ|_n) into Algorithm <ref> via F̂_Z | X(t) = ∫_Θ F_Z |θ, X(t) p̂(θ|_n) d θ which modifies only the sampling step for p(g |_n) in Algorithm <ref> and not the subsequent draws of θ or ỹ(x). When (<ref>) is not available analytically, we may estimate it using Monte Carlo: F̂_Z | X(t) ≈ S^-1∑_s=1^S F_Z |θ = θ^s, X(t) where {θ^s}_s=1^S ∼p̂(θ|_n) and F_Z |θ, X corresponds to (<ref>). We consider three options for p̂(θ|_n): (i) the prior p(θ), (ii) rank-based procedures that directly target p(θ|_n) without estimating g <cit.>, and (iii) plug-in approximations that target p(θ|_n, g = ĝ) for some point estimate ĝ. The rank-based approaches offer appealing theoretical properties, but are significantly slower, designed primarily for linear models, and produce nearly identical initial estimates and results as our implementation of (iii) (see the supplementary material). Thus, we focus on (i) and (iii) and discuss (ii) in the supplementary material. Option (iii) considers approximations of the form p(θ|_n, g = ĝ) = p(θ|{x_i, ĝ(y_i)}_i=1^n), which is the posterior under model (<ref>) given data {x_i, ĝ(y_i)}_i=1^n and a fixed transformation, such as ĝ(t) = F̂_Z |_n^-1{n(n+1)^-1F̂_Y|_n(t)}. Many approximation strategies exist for p(θ|_n, g = ĝ); our default is the fast and simple Laplace approximation p̂(θ|_n, g = ĝ) = N(θ̂, σ^2 Σ_θ̂), where θ̂ estimates the posterior mode and σ^2Σ_θ̂ approximates the posterior covariance using data {x_i, ĝ(y_i)}_i=1^n. In (<ref>), F̂_Y |_n(t) = n^-1∑_i=1^n I(y_i ≤ t) is the empirical distribution function of y, while F̂_Z |_n is updated in two stages. First, we initialize F̂_Z |_n = Φ and then ĝ via (<ref>), and then compute p̂(θ|_n, g = ĝ). From this initialization, we update F̂_Z |_n(t) = n^-1∑_i=1^n F̂_Z | X=x_i(t) using F̂_Z | X in (<ref>), which supplies an updated transformation ĝ via (<ref>) and thus an updated approximation p̂(θ|_n, g = ĝ). Finally, this approximation is deployed for (<ref>) and substituted into Algorithm <ref>, and is a one-time cost for all samples of p(g |_n) in Algorithm <ref>. Again, this approximation utilized only for sampling p(g |_n), while the remaining steps of Algorithm <ref> use the exact (conditional) posterior p{θ, ỹ(x) |_n, g}. Lastly, we may add a layer of robustness to partially correct for the approximation p̂ in (<ref>). First, observe that the location and scale of the latent data model (<ref>) map to the location and scale of the transformation, and vice versa: Consider a transformation g_1 that uses the distribution of [μ + σ Z |_n] instead of [Z |_n], where μ and σ are fixed constants. Then g_1(t) = μ + σ g(t). Lemma <ref> is not merely about identifiability, but also suggests a triangulation strategy for robustness to p̂. If p̂ induces the wrong location or scale for F_Z |_n via (<ref>) and (<ref>), then this propagates to p(g |_n). Crucially, this effect is also reversible: the wrong location or scale for p(g |_n) can be corrected by suitably adjusting for the location and scale in the sampling step for p{θ, ỹ(x)|_n, g}. The proposed robustness strategy is as follows. First, we compute (or sample from) the posterior p(g |_n) under an identified model. Identifiability is typically ensured by fixing the center (e.g., f_θ(0) = 0) and scale (e.g., σ = 1) of P_Z |θ, X. Next, we reintroduce the location and scale in P_Z |θ, X to sample p{θ, ỹ(x) |_n, g} in Algorithm <ref>. Specifically, we replace P_Z | X, θ with μ + σ P_Z | X, θ in (<ref>), specify a diffuse prior for (μ, σ), and target the joint posterior (predictive) distribution p{μ, σ,θ, ỹ(x) |_n, g}. This modification is typically simple, but provides inference for (θ, ỹ(x)) that is robust against location-scale misspecification of p(g |_n). We confirm this effect both empirically (Section <ref>) and theoretically (Section <ref>). It is natural to question the coherency of introducing model parameters in the midst of a sampling algorithm (Algorithm <ref>). First, we emphasize that these parameters (μ, σ) are not identified under model (<ref>)–(<ref>), and thus not strictly necessary in any analysis. Second, we may view (μ, σ) as an accompaniment to the approximation p̂, which determines p(g |_n) via (<ref>) and Algorithm <ref>. Instead of supplying a more sophisticated p̂ to infer g, Lemma <ref> suggests that we may equivalently apply a downstream adjustment via the latent data model (<ref>). This motivates our use of μ + σ P_Z | X, θ. Monte Carlo estimation of F_Z | X. Sample {θ^s}_s=1^S ∼ p(θ) Compute F̂_Z | X(t) = S^-1∑_s=1^S F_Z |θ = θ^s, X(t) Output F̂_Z | X Algorithm <ref> merely requires simulating from the prior—which is a one-time cost for all subsequent steps (e.g., Algorithm <ref>) and valid under any proper prior—and evaluating the distribution function F_Z |θ, X from (<ref>). These algorithms are discussed and applied for Examples <ref>–<ref> below. We detail the proposed approach for semiparametric linear regression (Section <ref>), quantile regression (Section <ref>), and Gaussian processes (Section <ref>). §.§ Semiparametric Bayesian linear regression Suppose that (<ref>) is the Gaussian linear regression model P_Z |θ, X=x = N(x^θ, σ^2) with prior θ∼ N(μ_θ, σ^2Σ_θ). We focus on the g-prior <cit.> with μ_θ = 0 and Σ_θ = ψ (X^ X)^-1 for ψ > 0. For model identifiability, the scale is fixed at σ=1 and the intercept is omitted. We apply Algorithm <ref> to obtain Monte Carlo posterior draws of (g^*, θ^*, ỹ^*(x)) as follows. First, consider g. The necessary step is to construct F̂_Z| X = x_i and apply Algorithm <ref> to sample g^* ∼ p(g |_n). Using a preliminary approximation of the form p̂(θ|_n, g = ĝ) = N(θ̂, Σ_θ̂), this key term is F̂_Z | X = x_i(t) = Φ(t; x_i^θ̂, 1 + x_i^Σ_θ̂ x_i). We consider two options for p̂: the prior, θ̂= 0 and Σ_θ̂ = ψ (X^ X)^-1, or the Laplace approximation, θ̂= Σ_θ̂X^ĝ(y) with Σ_θ̂ = ψ(1+ψ)^-1(X^ X)^-1. The remaining steps for sampling g^* ∼ p(g |_n) are straightforward. Next, we sample p(θ|_n, g = g^*). The intercept is reintroduced and absorbed into θ and the scale is assigned the prior σ^-2∼(a_σ, b_σ). We jointly sample p(σ, θ|_n, g ) = p(σ|_n, g) p(θ|_n, g, σ), where [σ^-2|_n, g = g^*] ∼(a_σ + n/2, b_σ + {‖ z^* ‖^2 - ψ(1+ψ)^-1 (z^*)^ X(X^ X)^-1 X^ z^*}/2) for z_i^* = g^*(y_i) and [θ|_n, g = g^*, σ = σ^*] ∼ N(Q_θ^-1ℓ_θ, Q_θ^-1) with Q_θ = (σ^*)^-2(1+ψ)ψ^-1 X^ X and ℓ_θ = (σ^*)^-2 X^ z^*. Even with the location-scale adjustment, all quantities are drawn jointly using Monte Carlo (not MCMC) sampling. Finally, the predictive sampling step is z̃^*(x) ∼ N(x^θ^*, (σ^*)^2) and ỹ^*(x) = (g^*)^-1{z̃^*(x)}. To apply the importance sampling adjustment (<ref>), we may use the sampled weights α^x to target the densities that determine ω(y | g): ω(y | g) ≈ S^-1∑_s=1^S ∏_i=1^n ∑_i'=1^n α_i'^x ϕ{g(y_i); x_i'^θ^s, 1}/∏_i=1^n ∑_i'=1^n α_i'^x ϕ{g(y_i); x_i'^μ_θ, 1 + x_i'^Σ_θ x_i'^}, {θ^s}_s=1^S ∼ p(θ) where ϕ is the Gaussian density function and {α_i^x}_i=1^n and g are sampled in Algorithm <ref>. In our simulated data examples, this adjustment has minimal impact on posterior (predictive) inference (see the supplementary material), which suggests that Algorithm <ref> may be applied directly (i.e., without adjustment) in certain settings. §.§ Semiparametric Bayesian quantile regression We apply model (<ref>)–(<ref>) and Algorithm <ref> to improve quantile estimation and posterior inference for Bayesian linear quantile regression. Posterior inference for Bayesian quantile regression is facilitated by a convenient parameter expansion for an asymmetric Laplace variable: ϵ = a_τξ + b_τ√(ξ)η, where a_τ = (1-2τ)/{τ(1-τ)}, b_τ = √(2/{τ(1-τ)}), ξ and η are independent standard exponential and standard Gaussian random variables, respectively. Thus, the regression model (<ref>) with f_θ(x) = x^θ and ALD errors can be written conditionally (on ξ) as a Gaussian linear model. This representation suggests a Gibbs sampling algorithm that alternatively draws θ from a full conditional Gaussian distribution and each ξ_i independently from a generalized inverse Gaussian distribution <cit.>. We adapt this strategy for Algorithm <ref>. The key step again is to construct F̂_Z| X = x_i, which is necessary to apply Algorithm <ref> and sample g^* ∼ p(g |_n). For computational convenience, we pair an approximation p̂(θ|_n, g = ĝ) = N(θ̂, Σ_θ̂) with a parameter expansion of the ALD for P_Z |θ, X in (<ref>)–(<ref>). Let θ∼ N(μ_θ, Σ_θ) be the prior. The preliminary approximation p̂ may be set at the prior, θ̂= μ_θ and Σ_θ̂ = Σ_θ, or estimated from the data, e.g., using classical quantile regression for {x_i, ĝ(y_i)}_i=1^n. By marginalizing over this p̂ as in (<ref>), the parameter-expanded distribution is P_Z | X, ξ = N(x^θ̂+ a_τξ, b_τ^2 ξ + x^Σ_θ̂ x). Integrating over ξ∼(1) requires a simple modification of the estimator from Section <ref>: F̂_Z | X=x_i(t) ≈ S^-1∑_s=1^S Φ(t; x_i^θ̂+ a_τξ^s, b_τ^2 ξ^s + x_i^Σ_θ̂ x_i), where {ξ^s}_s=1^S ∼(1). From our empirical studies, this approximation is accurate even when S is small. The remaining steps to sample g^* ∼ p(g |_n) are straightforward. Next, we sample θ using the traditional Gibbs steps, p(θ|_n, g = g^*, ξ = ξ^*) = N(Q_θ^-1ℓ_θ, Q_θ^-1) with Q_θ = X^Σ_ξ^*^-1 X + Σ_θ^-1, ℓ_θ = X^Σ_ξ^*^-1{g^*(y) - a_τξ^*} + Σ_θ^-1μ_θ, and Σ_ξ^* = (b_τ^2 ξ^*) with ξ^* = (ξ_1^*,…,ξ_n^*)^ drawn from the usual independent generalized inverse Gaussian full conditional distributions. As in Section <ref>, θ now includes an intercept; we omit the scale parameter for simplicity, but modifications to include σ are available <cit.>. Finally, the predictive sampling step may draw z̃^*(x) directly from an ALD or use the parameter expansion z̃^*(x) ∼ N(x^θ^* + a_τξ̃, b_τ^2 ξ̃) with ξ̃∼(1), and then set ỹ^*(x) = (g^*)^-1{z̃^*(x)}. Although this version of Algorithm <ref> is MCMC, we emphasize that the key parameters (g, θ) are still blocked efficiently, with g sampled unconditionally on θ. §.§ Scalable semiparametric Gaussian processes An immensely popular model for (<ref>) is the Gaussian process model f_θ∼𝒢𝒫(m_θ, σ^2K_θ) for mean function m_θ and covariance function K_θ parameterized by θ∼ p(θ). The nonparametric flexibility of f_θ is widely useful for spatio-temporal modeling and regression analysis. However, the usual assumption of Gaussian errors is often inappropriate, especially for data that exhibit multimodality, skewness, zero-inflation, or with discrete or compact support. The transformation (<ref>) helps resolve this critical limitation. However, existing Bayesian approaches rely on Box-Cox transformations <cit.> and often report only posterior modes <cit.> or variational approximations <cit.>. Algorithm <ref> offers a solution. Once again, the critical step is to construct F̂_Z | X=x_i to apply Algorithm <ref> and sample g^*∼ p(g |_n). To facilitate direct and feasible computation, we prioritize the uncertainty from (g, f_θ, ỹ(x)) and fix θ at an optimal value; generalizations are discussed below. For inputs {x_i, ĝ(y_i)}_i=1^n, we compute the maximum likelihood estimator θ̂ for θ and the (conditional) posterior distribution for the regression function, p̂(f_θ̂|_n, g = ĝ) = N(f̂_θ̂, σ^2 Σ_f_θ̂), where f̂_θ̂ = {f̂_θ̂(x_i)}_i=1^n are the point predictions at {x_i}_i=1^n given data {x_i, ĝ(y_i)}_i=1^n and Σ_f_θ̂ = (K_θ̂^-1 + I_n)^-1 with K_θ̂ = {K_θ̂(x_i, x_i')}_i,i'=1^n. Importantly, these are standard quantities in Gaussian process estimation, and thus we can leverage state-of-the-art algorithms and software. Finally, the critical term for sampling g^* ∼ p(g |_n) is F̂_Z | X = x_i(t) = Φ[t; f̂_θ̂(x_i), σ^2{1 + (Σ_f_θ̂)_ii}], where σ = 1 is fixed for identifiability. The remainder of Algorithm <ref> is straightforward. Given g^* ∼ p(g |_n) and reintroducing the scale σ for robustness, we sample f_θ̂^* ∼p̂(f_θ̂|{x_i, g^*(y_i)}_i=1^n) = N(f̂_θ̂, σ^2Σ_f_θ̂), where f̂_θ̂ are now the point predictions at {x_i}_i=1^n given data {x_i, g^*(y_i)}_i=1^n. Sampling σ may proceed using similar strategies as in Section <ref>. The predictive sampling step is z̃^*(x) ∼ N(f_θ̂^*(x_i), σ^2) and ỹ^*(x) = (g^*)^-1{z̃^*(x)}; modifications for out-of-sample predictive draws are readily available. If instead we wish to also account for the uncertainty of θ, there are two main modifications required. First, given an approximate posterior p̂(θ|_n), we modify the key term in Algorithm <ref>: F̂_Z | X = x_i(t) ≈ S^-1∑_s=1^S Φ[t; f̂_θ^s(x_i), σ^2{1 + (Σ_f_θ^s)_ii}], where {θ^s}_s=1^S ∼p̂(θ|_n). Second, we must sample θ^* ∼ p(θ|_n, g = g^*) and replace θ̂ with θ^* in the sampling steps for f_θ. Of course, these approximate and conditional posterior distributions for θ will be specific to the mean function m_θ and covariance function K_θ in the Gaussian process model. Yet even when θ = θ̂ is fixed, posterior sampling of f_θ is a significant computational burden. We use a fast approximation that bypasses these sampling steps. Specifically, we fix θ̂= θ, f_θ̂ = f̂_θ̂, and σ = σ̂ at their maximum likelihood estimates from the initialization step using data {x_i, ĝ(y_i)}, where σ̂ is included for robustness akin to Section <ref>. The key term in Algorithm <ref> is now F̂_Z | X = x_i(t) = Φ[t; f̂_θ̂(x_i), σ̂^2{1 + (Σ_f_θ̂)_ii}]. Bypassing the sampling steps for (θ, f_θ, σ), the predictive sampling step is now simply z̃^*(x) ∼ N(f̂_θ̂(x_i), σ̂^2) and ỹ^*(x) = (g^*)^-1{z̃^*(x)}. Relative to point estimation for (untransformed) Gaussian processes, this latter approach requires only one additional optimization step and a series of simple and fast sampling steps. The fully Bayesian model for the transformation g is especially important here: it helps correct not only for model inadequacies that may arise from a Gaussian model for y—for example, if the errors are multimodal or skewed or if 𝒴 is discrete or compact—but also for the approximations obtained by fixing parameters at point estimates. This strategy is evaluated empirically in Section <ref>. § EMPIRICAL RESULTS §.§ Simulation study for semiparametric Bayesian linear regression We evaluate the proposed semiparametric Bayesian linear models for prediction and inference using simulated data. Data are generated from a transformed linear model with n observations and p covariates, where the covariates are marginal standard Gaussian with (x_ij, x_ij') = (0.75)^| j - j'| and randomly permuted columns. Latent data are simulated from a Gaussian linear model with p/2 true regression coefficients set to one and the rest set to zero, and unit error standard deviation (0.25 and 1.25 produced similar results). We consider three inverse transformation functions, which are applied to these latent data (after centering and scaling) to generate y. These transformations determine both the support 𝒴 and the complexity of the link between the linear term and y. First, we induce an approximate Beta marginal distribution with g^-1(t) = F_ Beta(0.1, 0.5)^-1{Φ(t)}, which yields 𝒴=[0,1] with many y values near zero (). Second, we generate a monotone and locally linear function by simulating 10 increments identically from a standard exponential distribution at equally-spaced points on [-3,3] and linearly interpolating the cumulative sums, which produces positive data 𝒴 = ℝ^+ with a nontrivial transformation (). Third, we specify an inverse (signed) Box-Cox function with λ = 0.5 (see below), which corresponds to a (signed) square-root transformation and thus 𝒴 = ℝ (). For each simulation, a testing dataset (X^test, y^test) with 1000 observations is generated independently and identically. This process is repeated 100 times for each inverse transformation function and (n,p) ∈{(50, 10), (200,50)}. We evaluate several Bayesian approaches. In each case, the linear coefficients are assigned a g-prior with μ_θ = 0 and Σ_θ = n σ^2 (X^ X)^-1. For the proposed approach, we implement Algorithm <ref> for the semiparametric Bayesian linear model as in Section <ref> using the Laplace approximation for p̂(θ|_n) () or the prior (). We also include a simplification that fixes the transformation at the initialization (<ref>), which does not account for the uncertainty in g (). For benchmarking, we include a Bayesian linear model without a transformation () and with a Box-Cox transformation g(t; λ) = {(t)| t|^λ - 1}/λ (). The Box-Cox model uses the prior λ∼ N(0.5, 0.5^2) truncated to (0,2), which centers the transformation at the (signed) square-root, and samples this parameter within the Gibbs sampler using a slice sampler <cit.>. For each implementation, we generate and store 1000 samples from the posterior of θ and the joint posterior predictive distribution on the testing data ỹ(X^test). First, we evaluate predictive performance by comparing the width and empirical coverage of the 90% out-of-sample posterior prediction intervals (Figure <ref>); similar trends are observed for continuous ranked probability scores (see the supplement). Most notably, both and are precise and well-calibrated: the prediction intervals are narrow and achieve approximately the nominal coverage. As expected, produces narrower intervals but below-nominal coverage, which shows the importance of accounting for the uncertainty of g. The competing methods and fail to provide both precision and calibration, even for the true design. Next, we evaluate inference for the regression coefficients θ using variable selection. Although the scale of θ depends on the transformation—and thus differs among competing methods—the determination of whether each θ_j 0 is more comparable. Here, we select variables if the 95% highest posterior density interval for θ_j excludes zero. The true positive and negative rates are averaged across 100 simulations and presented in Table <ref>. The transformation is critical: for the and designs, the proposed methods offer a massive increase in the power to detect true effects without incurring more false discoveries. Remarkably, both and are highly robust to the true transformation and improve rapidly with the sample size, even as p grows. By comparison, and are excessively conservative in their interval estimates for the regression coefficients and perform well only when the true transformation belongs to the Box-Cox family. We highlight the ability of the and to infer the various transformations on = [0,1], = ℝ^+, and = ℝ. Figure <ref> presents 95% pointwise credible intervals for g under these models and for a single simulated dataset from each design. The transformations are rescaled such that the posterior means and the true transformations are centered at zero with unit scale, and thus are comparable. Most notably, and are virtually indistinguishable and successfully concentrate around each true transformation as n grows. By comparison, is insufficiently flexible and substantially underestimates the uncertainty about g. Remarkably, using the prior distribution for p̂(θ|_n) in (<ref>) and Algorithm <ref> () performs nearly identically to the data-driven Laplace approximation () for prediction of y^test, selection of coefficients θ_j 0, and estimation of the transformation g. This suggests that our approach, including the approximation strategy from Section <ref>, is robust to the choice of p̂(θ|_n). Clearly, the ability to substitute p(θ) is highly beneficial, as it is always available and requires no additional computations or tuning. We briefly mention computing performance. Applying Algorithm <ref> as described in Section <ref>, the joint Monte Carlo sampler for (θ, g, ỹ(X^test)) requires about 3.5 seconds per 1000 samples for the larger n=200, p=50 design (using on a MacBook Pro, 2.8 GHz Intel Core i7). Because these are Monte Carlo (not MCMC) algorithms, no convergence diagnostics, burn-in periods, or inefficiency factors are needed. §.§ Simulation study for semiparametric Bayesian quantile regression We modify the simulation design from Section <ref> to evaluate the proposed semiparametric approach for quantile regression. First, the latent data are generated from z_i = x_i^θ_true(1 + ϵ_i) with ϵ_i∼ N(0,1), which introduces heteroskedasticity. Heteroskedasticity is a common motivation for quantile regression, since it often leads to different conclusions compared to mean regression. Second, the inverse transformation is simply the identity. Thus, the data-generating process does not implicitly favor the transformed regression model (<ref>)–(<ref>). We implement the proposed semiparametric Bayesian quantile regression from Section <ref> with similar variations for inferring g as in Section <ref>. To specify p̂(θ|_n) in (<ref>) and Algorithm <ref>, we consider both a Laplace approximation using classical quantile regression with bootstrap-based covariance estimate from the package () and the prior p(θ) (). We also consider the simplification with the transformation fixed at ĝ in (<ref>) (). For comparisons, we include Bayesian quantile regression () without the transformation, which otherwise uses the same sampling steps as in Section <ref>, and frequentist quantile regression () using default settings in . The Bayesian models use the same g-prior with μ_θ = 0 and Σ_θ = n σ^2 (X^ X)^-1. The models are estimated for quantiles τ∈{0.05, 0.25, 0.50}; performance is comparable for large quantiles (1-τ) and the results for τ =0.05 and τ = 0.10 are similar. Quantile estimates on the testing data X^test are computed using X^testθ̂ for and and the posterior mean of g^-1(X^testθ) for the semiparametric methods. We evaluate the quantile estimates by computing the proportion of testing data points that are below the estimated τth quantile (Figure <ref>). For a well-calibrated quantile estimate, this quantity should be close to τ. Although all methods are well-calibrated for the median (τ = 0.5), the existing frequentist () and Bayesian () estimates become poorly calibrated as τ decreases (or increases; not shown). By comparison, the proposed semiparametric methods maintain calibration across all values of τ, especially for the fully Bayesian implementations. Again, we see little difference between the data-driven approximation () and the prior approximation () central to (<ref>), which highlights the robustness of Algorithm <ref>. We emphasize the value of the transformation for improving Bayesian model adequacy. Specifically, we compute continuous ranked probability scores for the posterior predictive draws ỹ(X^test) from each Bayesian model, and average those across all simulations (Table <ref>). These scores provide a comprehensive assessment of the out-of-sample posterior predictive distributions. Compared to the standard Bayesian quantile regression model (), the proposed semiparametric modification offers massive improvements in predictive distributional accuracy. In particular, is highly inaccurate for smaller τ, while the methods are robust across τ. Thus, the semiparametric approach alleviates concerns about the inadequacy of an ALD model and produces more accurate quantile estimates. These important conclusions are confirmed visually by examining posterior predictive diagnostics for a single simulated dataset (Figure <ref>). We compute the empirical cumulative distribution function for the observed data and for each posterior predictive draw under the Bayesian quantile regression model for each τ∈{0.05, 0.25, 0.5}. Traditional Bayesian quantile regression based on the ALD is clearly inadequate as a data-generating process for all values of τ. By comparison, the proposed semiparametric alternative (and ; not shown) completely corrects these inadequacies to deliver a model that both infers the target quantiles accurately (Figure <ref>) and is globally faithful to the observed data (Table <ref> and Figure <ref>). §.§ Semiparametric Gaussian processes for Lidar data We apply the semiparametric Bayesian Gaussian process model to the Lidar data from <cit.>. These data are a canonical example of a nonlinear and heteroskestastic curve-fitting problem (Figure <ref>). Instead of augmenting a Gaussian process model with a variance model—which requires additional model specification, positivity constraints, and more demanding computations—we simply apply the proposed semiparametric Bayesian Gaussian process () approach from Section <ref>. The latent Gaussian process f_θ∼𝒢𝒫(m_θ, K_θ) features an unknown constant for the mean function m_θ and an isotropic Matérn covariance function K_θ with unknown variance, range, and smoothness parameters; these unknowns constitute θ. Computations of the Gaussian process point estimates, predictions, and covariances are done using the package in <cit.>. First, we assess the fit to the full dataset (n=221). Figure <ref> (left) presents the fitted curve and 90% pointwise prediction intervals. The model is capable of smoothly capturing the trend and the heteroskedasticity in the data—which is not explicitly modeled. More thorough posterior predictive diagnostics (Figure <ref>, right) confirm the adequacy of the model. Specifically, we compute the empirical cumulative distribution on the data and on each simulated predictive dataset, in both cases restricted to smaller (x < 500) and larger (x > 500) covariate values. Despite the notable differences in the distributions, the proposed is faithful to the data. For comparison, we include an approximate version that fixes the transformation at ĝ in (<ref>) (), and consider more standard Gaussian process models that omit the transformation () or apply an unknown Box-Cox transformation () using the same prior and sampler for λ as in Section <ref>. Figure <ref> shows that naturally produces narrower prediction intervals, but more importantly, that is unable to capture the heteroskedasticity in the data. Thus, unlike the proposed nonparametric Bayesian transformation, the parametric Box-Cox transformation is inadequate for this key distributional feature. We proceed with more formal evaluations based on 100 random training/testing splits of the data (80% training). Table <ref> presents the average interval widths and empirical coverage for 95%, 90%, and 80% out-of-sample prediction intervals. Remarkably, delivers the exactly correct nominal coverage, often with narrower intervals than the competing and models. The intervals from the approximate version are narrower and typically close to nominal coverage. The proposed methods are both calibrated and sharp. This is confirmed using out-of-sample ranked probability scores (not shown), which show statistically significant improvements for and relative to the Gaussian process competitors. Finally, we modified the approximate approach from Section <ref> to include posterior sampling of f_θ̂, which accounts for the uncertainty in the regression function (but not the hyperparameters θ). The results were visually indistinguishable from Figure <ref>, but the computing cost increased substantially: for the full dataset, required only about 4 seconds per 1000 Monte Carlo samples, while the augmented posterior sampler needed about 111 seconds. These discrepancies increase with n. In aggregate, our results suggest that successfully combines the efficiency of point optimization with the uncertainty quantification from the Bayesian nonparametric model for g to deliver fast, calibrated, and sharp posterior predictive inference. § THEORY An important advantage of our modeling and algorithmic framework is that it enables direct asymptotic analysis. We consider generic models for P_X and P_Y within model (<ref>)–(<ref>) and show that our joint posterior for (g, θ) under Algorithm <ref> is consistent under conditions on P_Y, P_X, and P_Z|θ, X. Importantly, these results verify the asymptotic validity of (i) the surrogate likelihood, (ii) the approximation (<ref>), and (iii) the location-scale robustness adjustment (Section <ref>). Let () denote the space of monotone increasing functions mapping to and let 𝒯 be the topology of pointwise convergence. Let F_X,0 and F_Y,0 denote the true distribution functions of X and Y, respectively. Finally, let g̃ be the restriction of g to = {y ∈: 0 < F_Y,0(y) < 1} and g̃_0 defined similarly for the true transformation g_0. First, we establish posterior consistency for p(g̃|_n) at g̃_0. Suppose that the true data-generating process P_g_0, θ_0 is identified by the parameters g_0∈ ((), 𝒯) and θ_0 ∈^d under model (<ref>)–(<ref>). Under the assumptions * F_Z |θ, X(t) is continuous in X, t, and in θ for an open neighborhood of θ_0 invariant to (t, X); * the posterior approximation p̂( θ|_n) is (strongly) consistent at θ_0; and * the marginal models F_X |_n, F_Y |_n are (strongly) consistent at F_X,0, F_Y,0, then the posterior distribution p(g̃|_n) is (strongly) consistent at g̃_0 under (i) Τ and (ii) the L_∞-topology on any bounded subset of . Importantly, this posterior p(g̃|_n) refers to the one targeted by Algorithm <ref>, which uses (i) the surrogate likelihood and (ii) the approximation p̂(θ|_n) in (<ref>). The examples from Section <ref> satisfy the conditions on F_Z |θ, X (<ref>.1), while the Bayesian bootstrap models for P_X and P_Y are (weakly) consistent for the margins (<ref>.3). The requirement (<ref>.2) admits many choices of p̂(θ|_n). Perhaps the simplest option is a point mass at some consistent estimator of θ_0, such as using rank-based estimators (; see the supplementary material). Let p̂(θ|_n) = δ_θ̃, where θ̃p→θ_0 is a consistent point estimator. Under the conditions of Theorem <ref>, p(g̃|_n) is weakly consistent at g̃_0 under Τ. To strengthen Theorem <ref>, we consider special cases of the support . The restriction to ensures that g is finite, but the limiting cases are well-defined: we may set g(t) = -∞ for any t such that F_Y,0(t) = 0 and similarly g(t) = ∞ whenever F_Y,0(t)=1. No such consideration is needed when is unbounded. Suppose is unbounded. Under the conditions of Theorem <ref>, p(g |_n) is (strongly) consistent at g_0 under (i) Τ and (ii) the L_∞-topology on any bounded subset of . When is compact, we can strengthen this result to uniform posterior consistency of p(g |_n) with no restrictions. Uniform posterior consistency ensures that the posterior converges to the true parameter at the same rate in all regions of the domain. This is particularly important when the shape of the true transformation is not known a priori. Suppose is compact. Under the conditions of Theorem <ref>, p( g |_n) is weakly consistent at g_0 under the L_∞-topology. Building upon Theorem <ref>, we establish joint posterior consistency of (g, θ) by showing consistency of the conditional posterior p(θ|_n, g) with fixed transformation g. For robustness, we provide sufficient conditions for strong posterior consistency of p(θ|_n, g) without assuming the correctness of the model for θ. Instead, we target the parameter that minimizes Kullback–Leibler (KL) divergence from an arbitrary data-generating process to the assumed model. Let P_0 be the true data-generating process and P_g,θ the data-generating model induced by (<ref>)–(<ref>) with g∈ ((), 𝒯) and θ∈Θ⊆ℝ^d. Let Π_θ be the prior on θ and p(Y | g, θ, X) the likelihood of θ at (X,Y) and conditional on g. Under the assumptions * there exists a unique θ^*(g) ∈(Θ) such that θ^*(g) = min_θ∈Θ KL(P_0, P_g, θ); * |𝔼_P_0log p(Y | g, θ, X) | <∞ for all θ∈Θ; * the mapping θ↦log p(Y | g, θ, X) is concave almost surely [P_0]; and * Π_θ() > 0 for any open neighborhood that contains θ^*(g), then p(θ|_n, g) is strongly consistent at θ^*(g) under the Euclidean topology for every fixed g. The target posterior p(θ|_n, g) = p(θ|{x_i, g(y_i)}_i=1^n) is equivalently the posterior distribution under (<ref>) using transformed data {x_i, g(y_i)}_i=1^n with known g. Thus, some form of posterior consistency is unsurprising for many continuous regression models (<ref>). Instead, Theorem <ref> is valuable because (i) it establishes strong consistency for θ^*(g) under model misspecification and (ii) it may be combined with the previous (strong) consistency results for p(g |_n) to establish the joint posterior consistency of (g, θ). Suppose that the true data-generating process P_g_0, θ_0 is uniquely identified by the parameters g_0∈ ((), 𝒯) and θ_0 ∈^d under model (<ref>)–(<ref>). If the assumptions (<ref>.1)–(<ref>.3), (<ref>.1)–(<ref>.4) hold for all g, and if θ^*(g) is continuous at g_0, then the joint posterior distribution p(g̃, θ|_n) is (strongly) consistent at (g̃_0, θ_0). The conditions (<ref>.2)–(<ref>.3) refer to our model for g under (<ref>) and Algorithm <ref>, while (<ref>.1) and (<ref>.1)–(<ref>.4) are regularity requirements on the model (<ref>) and the prior for θ. Additional restrictions such as those in Corollaries <ref>–<ref> may be applied similarly as before. Finally, we assess the proposed robustness strategy to misspecification of F_Z |_n. When F_Z|_n is misspecified in location or scale, accurate estimation of g is impossible (Lemma <ref>). We consider the case in which p(g|_n) converges to the wrong transformation, and specifically one that differs by a shift and scaling. The proposed strategy (Section <ref>) is to replace P_Z | X, θ with μ + σ P_Z | X, θ in (<ref>), but only for the conditional posterior p(θ|_n, g) in (<ref>). Let P_0 be the true data-generating process and P_g,θ the data-generating model induced by (<ref>)–(<ref>) with g∈ ((), 𝒯) and θ∈Θ⊆ℝ^d. Suppose that (g_0, θ_0) = min_g, θKL(P_0, P_g, θ) exists and is unique. Let μ + σ P_Z |θ, X be identifiable with respect to (μ, σ, θ) and assume prior independence Π_μ,σ,θ = Π_μ,σ×Π_θ. Suppose that p(g̃|_n) is (strongly) consistent at μ_0 + σ_0 g̃_0, where μ_0 and σ_0 0 are constants and (μ_0, σ_0) ∈supp(Π_μ, σ). Under the key assumption * there exists a neighborhood 𝒢 around μ_0 + σ_0 g_0 under the topology 𝒯 such that for any g_n → g in 𝒢, sup_μ∈, σ∈^+, θ∈Θ|KL(P_0, P_(g_n - μ)/σ, θ) - KL(P_0, P_(g - μ)/σ, θ)| → 0, and if assumptions (<ref>.1)–(<ref>.4) hold for all g, then the joint posterior distribution p(g̃, θ|_n) is (strongly) consistent at (μ_0 + σ_0 g̃_0, θ_0). This theorem provides robustness guarantees for the model (<ref>)–(<ref>) under double misspecifications of both θ and g, and in particular ensures marginal (strong) consistency of p(θ|_n) at θ_0. Specifically, θ may be misspecified in the sense that the true parameter is not contained in the parameter set Θ, and g may be misspecified as a location-scale shift around g_0, where g_0 is a fixed monotonic transformation. Notably, this result does not require g_0 and θ_0 to be the true parameters, but instead establishes posterior consistency for any pair of the transformation g_0 and the KL-minimizer θ_0, as long as the conditions are satisfied. The main condition, (<ref>.1) appears complex but is a common requirement in the asymptotic analysis of misspecified Bayesian semiparametric models. It is a variant of similar conditions related to posterior convergence under perturbations around “least-favorable submodels" of the true model <cit.>. § DISCUSSION We introduced a Bayesian approach for semiparametric regression analysis. Our strategy featured a transformation layer (<ref>) atop a continuous regression model (<ref>) to enhance modeling flexibility, especially for irregular marginal distributions and various data domains. Most uniquely, the proposed sampling algorithm (Algorithm <ref>) delivered efficient Monte Carlo (not MCMC) inference with easy implementations for popular regression models such as linear regression, quantile regression, and Gaussian processes. Empirical results demonstrated exceptional prediction, selection, and estimation capabilities for Bayesian semiparametric linear models; substantially more accurate quantile estimates and model adequacy for Bayesian quantile regression; and superior predictive accuracy for Gaussian processes. Finally, our asymptotic analysis established joint posterior consistency under general conditions, including multiple model misspecifications. The primary concerns with Algorithm <ref> are (i) the use of the surrogate likelihood in place of (<ref>) and (ii) the need for an approximation p̂(θ|_n) to infer g via (<ref>). We have attempted to address these concerns using both empirical and theoretical analysis. The empirical results are highly encouraging: the Monte Carlo samplers are efficient and simple for a variety of semiparametric Bayesian models and the posterior predictive distributions are calibrated and sharp for many challenging simulated and real datasets. These results are robust to the choice of the approximation, including simply the prior p̂(θ|_n) = p(θ). Further, we introduced an importance sampling adjustment to provide posterior inference under the correct likelihood. Yet this adjustment does not appear to make any difference in practice, even for n=50, which is reassuring for direct and unadjusted application of Algorithm <ref>. Finally, the theoretical analysis establishes the asymptotic validity of the posterior targeted by Algorithm <ref>, even under multiple model misspecifications. This remains true for a broad class of (consistent) marginal models for P_Y and P_X, which allows customization for settings in which the Bayesian bootstrap is not ideal. § ACKNOWLEDGEMENTS We thank David Ruppert and Surya Tokdar for their helpful comments. Research (Kowal) was sponsored by the Army Research Office (W911NF-20-1-0184), the National Institute of Environmental Health Sciences of the National Institutes of Health (R01ES028819), and the National Science Foundation (SES-2214726). The content, views, and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, the National Institutes of Health, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein apalike
http://arxiv.org/abs/2306.10386v1
20230617161102
Blind Video Quality Assessment at the Edge
[ "Zhanxuan Mei", "Yun-Cheng Wang", "C. -C. Jay Kuo" ]
eess.IV
[ "eess.IV" ]
Adjoint functor theorems for lax-idempotent pseudomonads Fosco Loregian Accepted 16 June 2023 ======================================================== Owing to the proliferation of user-generated videos on the Internet, blind video quality assessment (BVQA) at the edge attracts growing attention. The usage of deep-learning-based methods is restricted by their large model sizes and high computational complexity. In light of this, a novel lightweight BVQA method called GreenBVQA is proposed in this work. GreenBVQA features a small model size, low computational complexity, and high performance. Its processing pipeline includes: video data cropping, unsupervised representation generation, supervised feature selection, and mean-opinion-score (MOS) regression and ensembles. We conduct experimental evaluations on three BVQA datasets and show that GreenBVQA can offer state-of-the-art performance in PLCC and SROCC metrics while demanding significantly smaller model sizes and lower computational complexity. Thus, GreenBVQA is well-suited for edge devices. § INTRODUCTION It is often to classify objective video quality assessment methods into three categories: full-reference video quality assessment (FR-VQA), reduced-reference video quality assessment (RR-VQA), and no-reference video quality assessment (NR-VQA). NR-VQA is also known as blind video quality assessment (BVQA). FR-VQA methods assess video quality by measuring the difference between distorted videos and their reference videos. One well-known example is VMAF <cit.>. RR-VQA <cit.> methods evaluate video quality by utilizing a part of information from reference videos, which offers greater flexibility than FR-VQA. Finally, BVQA is the only choice if there is no reference video available. Most user-generated videos do not have reference videos. BVQA is challenging due to the huge variety of content and the absence of reference videos. Recently, BVQA attracts growing attention owing to rich video contents in social media and the popularity of multi-party video conferencing. One straightforward BVQA solution is to build it upon blind image quality assessment (BIQA) methods. That is, the application of BIQA methods to a set of keyframes of distorted videos individually. BIQA methods can be classified into three categories: natural scene statistic (NSS) based methods <cit.>, codebook-based methods <cit.> and deep-learning-based (DL-based) methods <cit.>. However, directly applying BIQA followed by frame-score aggregation does not produce satisfactory outcomes because of the absence of temporal information. Thus, it is essential to incorporate temporal or spatio-temporal information. Other BVQA methods with handcrafted features <cit.> were evaluated on synthetic-distortion datasets with simulated distortions such as transmission and compression. Recently, they were evaluated on authentic-distortion datasets as well as reported in <cit.>. Their performance on authentic-distortion datasets is somehow limited. Authentic-distortion VQA datasets arise from the user-generated content (UGC) in the real-world environment. They contain complicated and mixed distortions with highly diverse contents, devices, and capture conditions. Deep learning (DL) methods have been developed for BIQA and BVQA <cit.>. To further enhance performance and reduce distributional shifts <cit.>, pre-trained models on large-scale image datasets, such as the ImageNet <cit.>, are adopted in <cit.>. However, it is expensive to adopt large pre-trained models on mobile or edge devices. Edge computing is a rapidly growing field in recent years due to the popularity of smartphones and Internet of Things (IoT). It involves processing and analyzing data near their source, typically at the “edge” of the network (rather than transmitting them to a centralized location such as a cloud data center). Given that a high volume of videos triggers heavy Internet traffic, video processing at the edge reduces the video transmission burden and saves the network bandwidth. The VQA task can enhance many video processing modules, such as video bitrate adaptation <cit.>, video quality assurance <cit.>, and video pre-processing <cit.>. The demand for high-quality videos is increasing on edge devices, while most user-generated contents lack reference videos, necessitating the use of BVQA methods. Furthermore, DL-based BVQA methods are expensive to deploy on edge devices due to their high computational complexity and large model sizes. A lightweight BVQA method is demanded at the edge. Based on the green learning principle <cit.>, a lightweight BIQA method, called GreenBIQA, was proposed in <cit.> recently. GreenBIQA does not perform well on VQA datasets since it does not incorporate temporal information. To address its shortcoming, we propose a lightweight BVQA method and call it GreenBVQA in this work. GreenBVQA features a smaller model size and lower computational complexity while achieving highly competitive performance against state-of-the-art DL methods. The processing pipeline of GreenBVQA contains four modules: 1) video data cropping, 2) unsupervised representation generation, 3) supervised feature selection, and 4) mean-opinion-score (MOS) regression and ensembles. The video data cropping operation in Module 1 is a pre-processing step. Then, we extract spatial, spatio-color, temporal, and spatio-temporal representations from 3D spatio-temporal cubes in an unsupervised manner to obtain a rich set of representations at low complexity in Module 2 and select a subset of the most relevant features using the relevant feature test (RFT) <cit.> in Module 3. Finally, all selected features are concatenated and fed to a trained MOS regressor to predict multiple video quality scores, and then an ensemble scheme is used to aggregate multiple regression scores into one ultimate score. We conduct experimental evaluations on three VQA datasets and show that GreenBVQA can offer state-of-the-art performance in PLCC and SROCC metrics while demanding significantly small model sizes, short inference time, and low computational complexity. Thus, GreenBVQA is well-suited for edge devices. There are three main contributions of this work. * A novel lightweight BVQA method is proposed. Four different types of representation (i.e., spatial, spatio-color, temporal, and spatio-temporal representations) are considered jointly. Each type of representation is passed to the supervised feature selection module for dimension reduction. Then, all selected features are concatenated to form the final feature set. * Experiments are conducted on three commonly used VQA datasets to demonstrate the advantages of the proposed GreenBVQA method. Our method outperforms all conventional BVQA methods in terms of MOS prediction accuracy. Its performance is highly competitive against DL-based methods while featuring a significantly smaller model size, shorter inference time, and lower computational complexity. * A video-based edge computing system is presented to illustrate the role of GreenBVQA in facilitating various video processing tasks at the edge. The rest of this paper is organized as follows. Related work is reviewed in Sec. <ref>. The proposed GreenBVQA is presented in Sec. <ref>. Experimental results are shown in Sec. <ref>. An edge computing system employing GreenBVQA is presented to demonstrate its potential real-world applications in Sec. <ref>. Finally, concluding remarks are given and future research directions are pointed out in Sec. <ref>. § REVIEW OF RELATED WORK Quite a few BVQA methods have been proposed in the last two decades. Existing work can be classified into conventional and DL-based methods two categories. They are first reviewed in Sec. <ref> and sec. <ref>, respectively. Then, we examine previous work on video edge computing in Sec. <ref> §.§ Conventional BVQA Method Conventional BVQA methods extract quality-related features from input images using an ad hoc approach. Then, a regression model (e.g., Support Vector Regression (SVR) <cit.> or XGBoost <cit.>) is trained to predict the quality score using these handcrafted features. One family of methods is built upon the Natural Scene Statistics (NSS). NSS-based BIQA methods <cit.> can be extended to NSS-based BVQA methods since videos are formed by multiple image frames. For example, V-BLIINDS <cit.> is an extension of a BIQA method by incorporating a temporal model with motion coherency. Spatio-temporal NSS can be derived from a joint spatio-temporal domain. For instance, the method in <cit.> conducts the 3D discrete cosine transform (DCT) and captures the spatio-temporal NSS of 3D-DCT coefficients. The spatio-temporal statistics of mean-subtracted and contrast-normalized (MSCN) coefficients of natural videos are investigated in <cit.>, where an asymmetric generalized Gaussian distribution (AGGD) is adopted to model the statistics of both 3D-MSCN coefficients and bandpass filter coefficients of natural videos. Another family of BVQA methods, called codebook-based BVQA, is inspired by CORNIA <cit.>, <cit.>. They first obtain frame-level quality scores using unsupervised frame-based feature extraction and supervised regression. Then, they adopt temporal pooling to derive the target video quality. A two-level feature extraction mechanism is employed by TLVQM <cit.>, where high- and low-level features are extracted from the whole sequence and a subset of representative sequences, respectively. §.§ DL-based BVQA Method DL-based BVQA methods have been investigated recently. They offer state-of-the-art prediction performance. Inspired by MEON <cit.>, V-MEON <cit.> provides an end-to-end learning framework by combining feature extraction and regression into a single stage. It adopts a two-step training strategy: one for codec classification and the other for quality score prediction. COME <cit.> adopts CNNs to extract spatial features and uses motion statistics as temporal features. Then, it exploits a multi-regression model, including two types of SVR, to predict the final score of videos. A mixed neural network is derived in <cit.>. It uses a 3D convolutional neural network (3D-CNN) and a Long-Short-Term Memory (LSTM) network as the feature extractor and the quality predictor, respectively. Following the VSFA method <cit.>, MDTVSFA <cit.> adopts an unified BVQA framework with a mixed-dataset training strategy to achieve better prediction performance against authentic-distortion video datasets. PVQ <cit.> adopts a local-to-global region-based BVQA architecture, which is trained with different kinds of patches. Also, a large authentic-distortion video dataset is built and reported in <cit.>. QSA-VQM <cit.> uses two CNNs to extract quality attributes and semantic content of each video frame, respectively, and one Recurrent Neural Network (RNN) to estimate the quality score of the whole video by incorporating temporal information. To address the diverse range of natural temporal and spatial distortions commonly observed in user-generated-content datasets, CNN-TLVQM <cit.> integrates spatial features obtained from a CNN model and handcrafted statistical temporal features obtained via TLVQM. The CNN model was originally trained for image quality assessment using a transfer learning technique. §.§ Video Quality Assessment at the Edge Machine learning models have been extensively deployed on edge devices <cit.>. Several video analytics tasks are implemented in the edge computing platform <cit.>. Besides prediction accuracy, important metrics to be considered at the edge include latency, computational complexity, memory size, etc. The remarkable success achieved by DL in various domains, such as computer vision and natural language processing, has inspired the application of DL to edge devices (say, smartphones and IoT sensors) <cit.>. They often generate a significant amount of data that demand local processing due to the high data communication cost. Edge computing has emerged in video analytics in recent years. One objective is to optimize the tradeoff between accuracy and cost <cit.>. For example, VideoEdge <cit.> is an edge-based video analysis tool that is implemented in a distributed cloud-edge architecture, comprising edge nodes and the cloud. Given heavy video traffics on mobile or edge networks, there is a growing demand for higher transmission rates and lower network latency. In order to tackle these challenges, adaptive bitrate (ABR) technologies <cit.> are commonly used in video distribution. The ABR scheme consists of two main components. First, the video is encoded into multiple streaming versions with different bit rates. Second, each streaming version is segmented into multiple segments based on the terminal capabilities and network conditions. Consequently, the most suitable streaming version is dynamically provided to the user. The advantage of ABR lies in its ability to reduce the occurrence of choppy videos while enhancing user's quality of experience (QoE). To enhance QoE at the edge, one essential technology is the automatic measure of user's perceptual video quality. As stated in <cit.>, a better understanding of human perceptual experience and behavior is the most dominating factor in improving the performance of ABR algorithms. Since reference videos are not available on edge devices, BVQA methods become the only option. State-of-the-art BVQA methods rely on large pre-trained models and exhibit high computational complexity, making them impractical for deployment on edge devices. Lightweight BVQA methods are in urgent need to address this void. Along this line, Mirko et al. <cit.> propose an efficient BVQA method with two lightweight pre-trained MobileNets <cit.> with certain limitations, such as degraded prediction accuracy. § GREENBVQA METHOD The system diagram of the proposed GreenBVQA method is shown in Fig. <ref>. It has a modularized system consisting of four modules: 1) video data cropping, 2) unsupervised representation generation, 3) supervised feature selection, and 4) MOS regression and ensemble. We will elaborate the operations in each module below. §.§ Video Data Cropping GreenBVQA adopts a hierarchical data cropping approach as illustrated in Fig. <ref>. This serves as a pre-processing step for later modules. Upon receiving an input video clip, we first split it into multiple sub-videos in the time domain. For instance, a ten-second video can be partitioned into ten non-overlapping sub-videos, each of one-second duration. The sub-video serves as the basic unit for future processing. Given a sub-video, we consider the following three cropping schemes. * Frame Cropping One representative frame is selected from each sub-video. It can be the first frame, an I-frame, or an arbitrary frame. Here, we aim to get spatial-domain information. For example, several sub-images can be cropped from a representative frame. Both spatial and spatio-color representations will be computed from each sub-image to be discussed in the next subsection. * Cube Cropping We collect co-located sub-images from all frames in one sub-video as shown in Fig. <ref>. This process is referred to as “cube cropping" since it contains both spatial and temporal information. The purpose of cube cropping is to reduce the amount of data to be processed in the later modules. This is needed as the data size of videos is substantially larger than that of images. * Sub-cube Cropping We crop out a sub-cube from a cube that has a shorter length in the time domain and a smaller size in the spatial domain (see Fig. <ref>). It is used to extract spatio-temporal representations. The rationale for sub-cube cropping is akin to that of cube cropping - reducing the amount of data to be processed later. After the above cropping steps, we obtain sub-images, cubes, and sub-cubes as shown in Fig. <ref>. Spatial and spatio-color representations will be derived from sub-images cropped from representative frames, while temporal and spatio-temporal representations will be obtained from cubes and sub-cubes, respectively. §.§ Unsupervised Representation Generation We consider the following four representations in GreenBVQA. * Spatial representations. They are extracted from the Y channel of sub-images cropped from representative frames. * Spatio-Color representations. They are extracted from cubes of size (H × W) × C, where H and W are the height and width of sub-images and C=3 is the number of color channels, respectively. * Temporal representations. They are the concatenation of statistical temporal information of cubes. * Spatio-Temporal representations. They are extracted from sub-cubes of size (H × W) × T, where H and W are the height and width of sub-images and T is the length of sub-cubes in the time domain, respectively. GreenBVQA employs all four types of representations collectively to predict perceptual video quality scores. On the other hand, spatial and spatio-color representations can be utilized to predict the quality scores of individual images or sub-images. §.§.§ Spatial Representations As discussed in deriving the spatial representation for GreenBIQA <cit.>, a three-layer structure is adopted to extract local and global spatial representations from the sub-images. This is summarized in Table <ref> and depicted in Fig. <ref>. Input sub-images are partitioned into non-overlapping blocks of size 8 × 8, and the Discrete Cosine Transform (DCT) coefficients are computed through the block DCT transform. These coefficients, consisting of one DC coefficient and 63 AC coefficients (AC1-AC63), are organized into 64 channels. The DC coefficients exhibit correlations among spatially adjacent blocks, which are further processed by using the Saab transform <cit.>. The Saab transform computes the patch mean, referred to as the DC component, using a constant-element kernel. Principal Component Analysis (PCA) is then applied to the mean-removed patches to derive data-driven kernels, known as AC kernels. The AC kernels are applied to each patch, resulting in AC coefficients of the Saab transform. To decorrelate the DC coefficients, a two-stage process, namely Hop1 and Hop2, is employed. The coefficients obtained from each channel, either with or without down-sampling at different Hops and the DCT layer, are utilized to calculate standard deviations, PCA coefficients, or are left unchanged. According to the spectral frequency in DCT and Saab domain, the coefficients from Hop2, Hop1, and the DCT layer are denoted as low-frequency, mid-frequency, and high-frequency representations, respectively. Low- and mid-frequency representations contain global information from large receptive fields, while high-frequency representations contain information of details from a small receptive field. Then, all representations are concatenated to form spatial representations. §.§.§ Spatio-Color Representations The representations for spatio-color cubes are derived using 3D Saab and PCA methods. The hyper-parameters are given in Table <ref>, and the data processing block diagram is depicted in Fig. <ref> (a). A spatio-color cuboid has dimensions of H × W × C, where H and W represent the height and width of the sub-image respectively, and C = 3 denotes the number of color channels. It is fed to a two-hop structure. In Hop1, it is divided into non-overlapping cuboids of size 4 × 4 × 3, and the 3D Saab transform is applied individually, resulting in one DC channel and 47 AC channels (AC1-AC47). Each channel has a spatial dimension of 40 × 40. Since the DC, AC1, and AC2 coefficients exhibit high spatial correlation, in Hop2, a 2D Saab transform is used to decompose these channels of size 40 × 40 into non-overlapping blocks of size 4 × 4. For the other 45 AC coefficients obtained from Hop1, their absolute values are taken and a 2 × 2 max pooling operation is performed, yielding 45 channels with a spatial dimension of 20 × 20. In total, we obtain 93 channels, comprising 48 low-frequency channels from Hop2 and 45 high-frequency channels from Hop1, with spatial size of 10 × 10 and 20 × 20, respectively. The coefficients obtained from each channel are utilized to calculate standard deviations and PCA coefficients. These computed coefficients are then concatenated to form spatio-color representations. §.§.§ Temporal Representation The spatial and spatio-color representations are extracted from the sub-images on representative frames. Both of them represent the information within individual frames while disregarding the temporal information across frames. Here, a temporal representation generation is proposed to capture the temporal information from motion vectors (mvs). Consider a cube of dimensions H × W × T, where H and W represent the height and width of sub-images, respectively, and T is the number of frames in the time domain. For each H × W sub-image within a cube, motion vectors of small blocks are computed (or collected from compressed video streaming). They are denoted as V=((x_1, y_1) ⋯, (x_n, y_n))^T, where (x_n, y_n) represents the motion vector of the n^th block. Specifically, x_n and y_n are the horizontal magnitude and vertical magnitude of the motion vector and are named x-mv and y-mv, respectively. The magnitude of the motion vector can be computed by √(x_n^2 + y_n^2). The motion representation of a cube is computed based on its motion vectors. This statistical analysis yields a 14-D temporal representation of each sub-image as shown in Table <ref>. They are arranged in chronological order to form raw temporal representations. Furthermore, PCA is applied to them to derive spectral temporal representations. Finally, the raw and spectral temporal representations are concatenated to form the final temporal representations. §.§.§ Spatio-Temporal Representation Both spatial representations from sub-images of representative frames and temporal representations from cubes are extracted individually from a single domain. It is also important to consider the correlation between spatial information and temporal information in subjective score prediction, as subjective assessments often take both aspects into account when providing scores. To extract spatio-temporal features from both spatial and temporal domains, a two-hop architecture is adopted, where the 3D Saab transform is conducted as depicted in Fig. <ref> (b). The hyper-parameters are summarized in Table <ref> The dimension of the spatio-temporal cube, which is the same as sub-cubes in Fig. <ref>, is H × W × T, where H, W and T represent the height, width, and the frame number of the sub-cube, respectively. These sub-cubes are fed into a two-hop architecture, where the first and the second hops are used to capture local and global representations, respectively. The procedure used to generate spatio-temporal representation is similar to that for the spatio-color representation generation, except the 3D channel-wise Saab transform is applied in both hops. In Hop 1, we split the input sub-cubes into non-overlapping 3-D cuboids of size 8 × 8 × 3. They are converted to one-dimensional vectors for Saab coefficient computation, leading to one DC channel and 191 AC channels, denoted by AC1-AC191. The size of each channel is 12 × 12 × 5. The coefficients in DC and low-frequency AC (e.g., AC1-AC3) channels are spatially and temporally correlated because the adjacent 8 × 8 × 3 cuboids are strongly correlated. Therefore, another 3-D Saab transform is applied in Hop 2 to decorrelate the DC and low-frequency AC channels from Hop 1. Similarly, we split these channels into several non-overlapping 3D cuboids of size 2 × 2 × 5. Coefficients in each cuboid are flattened into a 20-D vector denoted by y=(y_1, ⋯, y_20)^T, and their Saab coefficients are computed. The DC coefficients in Hop 2 are computed by the mean of the 20-D vector, y̅=(∑_i=1^20 y_i)/20. The remaining 19 AC coefficients, denoted by AC1 to AC19, are generated by the principle component analysis (PCA) on the mean-removed 20-D vector. To lower the number of coefficients that need to be processed, blocks in Hop 1 are downsampled to cuboids of size 6 × 6 × 5 by using 2 × 2 max pooling in the spatial domain. Given low-frequency channels of size 6 × 6 × 1 from Hop 2 and downsampled 6 × 6 × 5 high-frequency channels from Hop 1, we generate two sets of representations as follows. * The coefficients in each channel are first flattened to 1-D vectors. Next, we conduct PCA and select the first N PCA coefficients of each channel to form the spectral features. * We compute the standard deviation of coefficients from the same channel across the spatio-temporal domain. Finally, we concatenate the two sets of representations to form the spatio-temporal representations. §.§ Supervised Feature Selection The number of unsupervised representations is large. To reduce the dimension of unsupervised representations, we select quality-relevant features from the 4 sets of representations obtained in the unsupervised representation generation part, by adopting the relevant feature test (RFT) <cit.>. RFT enables the calculation of independent losses for each representation, with lower loss values indicating superior representations. The RFT procedure involves splitting the dynamic range of a representation into two sub-intervals using a set of partition points. For a given partition, the means of the training samples in the left and right regions are computed as representative values, and their respective mean-squared errors (MSE) are calculated. By combining the MSE values of both regions, a weighted MSE for the partition is obtained. The search for the minimum weighted MSE across the set of partition points determines the cost function for the representation. It is important to note that RFT is a supervised feature selection algorithm as it utilizes the labels of the training samples. We computed the RFT results for spatial, spatio-color, temporal, and spatio-temproal representations individually. Fig. <ref> illustrates the sorting of representation indices based on their MSE values, with separate curves. The elbow point on each curve can be used to select a subset of representations. Given the four types of unsupervised representations, the top n dimensions for each representation are selected to be concatenated as the supervised quality-relevant features for cubes. The dimensions of unsupervised representations and supervised selected features on KoNViD-1k <cit.> dataset are shown in Table <ref>. §.§ MOS Regression and Ensembles Once the quality-relevant features are selected, we employ the XGBoost <cit.> regressor as the quality score prediction model that maps d-dimensional quality-relevant features to a single quality score. After the regressor's prediction, each cube is assigned a predicted score. The scores of cubes belonging to the same sub-video are then ensembled using a median filter, resulting in the score of the sub-video, which predict the Mean Opinion Score (MOS) of a short interval of frames from the input video. To obtain the final MOS for the entire input video, a mean filter is applied to aggregate the scores from all sub-videos belonging to the same input video. § EXPERIMENTS §.§ Experiments Setup We discuss VQA datasets, performance benchmarking methods, evaluation metrics, and some implementation details below. §.§.§ Datasets We evaluate GreenBVQA on three VQA datasets: CVD2014 <cit.>, KoNViD-1k <cit.>, and LIVE-VQC <cit.>. Their statistics are summarized in Table <ref>. CVD2014 is captured in a controlled laboratory environment. Thus, it is also called the lab-generated dataset. It comprises 234 video sequences of resolution 640 × 480 or 1280 × 720. They are acquired with 78 cameras ranging from low-quality mobile phones to high-quality digital single-lens reflex cameras. Each video displays one of five scenes with distortions associated with the video acquisition process. KoNViD-1k and LIVE-VQC are authentic-distortion datasets, also known as user-generated content (UGC) datasets. KoNViD-1k comprises 1200 video sequences, each of which lasts for 8 seconds with a fixed resolution. LIVE-VQC consists of a collection of video sequences of a fixed duration in multiple resolutions. Both of them contain diverse content and a wide range of distortions. §.§.§ Benchmarking Methods We compare the performance of GreenBVQA with eleven benchmarking methods in Table <ref>. These methods can be classified into three categories. * Three conventional BIQA methods: NIQE <cit.>, BRISQUE <cit.>, and CORNIA <cit.>. They are applied to frames of distorted videos. Then, the predicted scores are ensembled to yield the ultimate BVQA score. * Three conventional BVQA methods without neural networks: V-BLIINDS <cit.>, TLVQM <cit.>, and VIDEVAL <cit.>. * Five state-of-the-art DL-based methods with pre-trained models: VSFA <cit.>, RAPIQUE <cit.>, QSA-VQM <cit.>, Mirko et al. <cit.>, and CNN-TLVQM <cit.>. They are also called advanced DL methods. §.§.§ Evaluation Metrics The MOS prediction performance is measured by two well-known metrics: the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank Order Correlation Coefficient (SROCC). PLCC is employed to assess the linear correlation between the predicted scores and the subjective quality scores. It is defined as PLCC = ∑_i(p_i - p_m)(p̂_̂î-p̂_̂m̂)/√(∑_i(p_i - p_m)^2)√(∑_i(p̂_̂î-p̂_̂m̂)^2), where p_i and p̂_̂î denote the predicted score and the corresponding subjective quality score, respectively, for a test video sample. Additionally, p_m and p̂_̂m̂ represent the means of the predicted scores and subjective quality scores, respectively. SROCC is used to measure the monotonic relationship between the predicted scores and the subjective quality scores, considering the relative ranking of the samples. It is defined as SROCC = 1 - 6∑_i=1^L(m_i - n_i)^2/L(L^2-1), where m_i and n_i represent the ranks of the predicted score p_i and the corresponding subjective quality score p̂_̂î, respectively, within their respective sets of scores. The variable L represents the total sample number. §.§.§ Implementation Details Video Data Cropping. Each sub-video has a length of 30 frames. Six sub-images of size 320 × 320 are cropped from each representative frame. A cube consists of (320 × 320) × 30 pixels. Then, one sub-video has 6 cubes. The size of the sub-cube, which is used to generate the spatio-temporal representation, is (96 × 96) × 15. Only one sub-cube is cropped from each cube. Unsupervised Representation Generation. For spatial representation generation, the 8 × 8 DCT transform and the 4 × 4 Saab transform are used to generate spatial representations of 6,637 dimensions. The dimensions of temporal, spatio-temporal, and spatio-color representations are 420, 8878, and 6793, respectively. Supervised Feature Selection. Following the application of RFT to each type of representation generated from the KoNViD-1k dataset, independently, the resulting selected features exhibit dimensions of 220 for spatial features, 200 for spatio-color features, 140 for temporal features, and 240 for spatio-temporal features. It is important to note that the dimensions of the selected features may vary across different datasets, as the distribution of data and content can differ significantly among various datasets. MOS Regression and Ensembles. The XGBoost regressor is used to train and predict the MOS score of each cube. The max depth of each tree is 5 and the subsampling rate is 0.6. The maximum number of trees is 2,000 with early termination. Given the score of each cube, a median filter is used to obtain the score of each sub-video. Next, we take the average of all sub-videos' scores to obtain the final score of the input video. Performance Evaluation. To ensure reliable evaluation, we partition a VQA dataset into two disjoint sets: the training set (80%) and the testing set (20%). We set 10% aside in the training set for validation purposes. We conduct experiments in 10 runs and report the median values of PLCC and SROCC. §.§ Performance Comparison §.§.§ Same-Domain Training Scenario We compare the PLCC and SROCC performance of GreenBVQA with that of the other eleven benchmarking methods in Table <ref>. GreenBVQA outperforms all three conventional BIQA methods (i.e., NIQE, BRISQUE, and CORNIA) and all three conventional BVQA methods (i.e., V-BLIINDS, TLVVQM, and VIDEVAL) by a substantial margin in all three datasets. This shows the effectiveness of GreenBVQA in extracting quality-relevant features to cover diverse distortions and content variations. GreenBVQA is also competitive with the five DL-based BVQA methods. Specifically, GreenBVQA achieves the second-best performance for the LIVE-VQC dataset. It also ranks second in the average performance of SROCC across all three datasets. As to the five DL-based BVQA methods, the performance of GreenBVQA is comparable with that of QSA-VQM. However, there exists a performance gap between GreenBVQA and CNN-TLVQM, which is a state-of-the-art DL-based method employing pre-trained models. The VQA datasets, particularly user-generated content datasets, pose significant challenges due to non-uniform distortions across videos and a wide variety of content without duplication. Pre-trained models, trained on large external datasets, have an advantage in extracting features for non-uniform distortions and unseen content. Nonetheless, these advanced DL-based methods come with significantly larger model sizes and inference complexity as analyzed in Sec. <ref>. §.§.§ Cross-Domain Training Scenario To evaluate the generalizability of BVQA methods, we investigate the setting where training and testing data come from different datasets. Here, we focus on the two UGC datasets (i.e., KoNViD-1k and LIVE-VQC) due to their practical significance. Two settings are considered: I) trained with KoNViD-1k and tested on LIVE-VQC, and II) trained with LIVE-VQC and tested on KoNViD-1k. We compare the SROCC performance of GreenBVQA and five benchmarking methods under these two settings in Table <ref>. The five benchmarking methods include two conventional BVQA methods (TLVQM, and VIDEVAL) and three DL-base BVQA methods (VSFA, QSA-VQM, and Mirko et al.). We see a clear performance drop for all methods in the cross-domain condition by comparing Tables <ref> and <ref>. We argue that setting II provides a more suitable scenario to demonstrate the robustness (or generalizability) of a learning model. This is because KoNViD-1k has a larger video number and scene number as shown in Table <ref>. Thus, we compare the performance gaps in Table <ref> under Setting II with those in the KoNViD-1k/SROCC column in Table <ref>. The gaps between VSFA, QSA-VQM, and CNN-TLVQM against GreenBVQA become narrower for KoNViD-1k. They are down from 0.019, 0.023, and 0.038 (trained by the same dataset) to 0.015, -0.066, and 0.024 (trained by LIVE-VOC), respectively. This suggests a high potential for GreenBVQA in the cross-domain training setting. §.§ Comparison of Model Complexity We evaluate the model complexity of various BVQA methods in three aspects: model size, inference time, and computational complexity. §.§.§ Model sizes There are two ways to measure the size of a learning model: 1) the number of model parameters and 2) the actual memory usage. Floating-point and integer model parameters are typically represented by 4 bytes and 2 bytes, respectively. Since a great majority of model parameters are in the floating point format, the actual memory usage is roughly equal to 4 × bytes. Here, we use the “model size" to refer to actual memory usage below. The model sizes of GreenBVQA and four benchmarking methods are compared in Table <ref>. The size of the GreenBVQA model includes: the representation generator (4.28MB) and a regressor (2.08MB), leading to a total of 6.36 MB. As compared with four DL-based benchmarking methods, GreenBVQA achieves comparable SROCC and PLCC performance with a much smaller model size. §.§.§ Inference time One measure of computational efficiency is inference time. We compare the inference time of various BVQA methods on a desktop with an Intel Core i7-7700 [email protected], 16 GB DDR4 RAM 2400 MHz. The benchmarking methods include NIQE, BRISQUE, TLVQM, Mirko et al., V-BLIINDS, VSFA, and QSA-VQM. We run their original codes with the default settings using the CPU only. As shown in Table <ref>, we conduct experiments on three test videos of various lengths and resolutions: a 240-frame video of resolution of 960×540, a 346-frame video of resolution of 640×480, and a 467-frame video of resolution 1280×720. We repeat the test for each method ten times and report the average inference time (in seconds) in Table <ref>. GreenBVQA has a significantly shorter inference time as compared to other methods across all resolutions. The efficiency gap widens as the video resolution increases. It is approximately 2.1x faster than Mirko et al. on average, which is the second most efficient method. Furthermore, GreenBVQA provides comparable performance with Mirko et al. in prediction accuracy as shown in Table <ref>, while demanding a smaller model size. GreenBVQA can process videos in real-time, achieving an approximate speed of 75 frames per second, solely relying on a CPU. It is worthwhile to mention that, as an emerging trend, edge computing devices will contain heterogeneous computing units such as CPUs, GPUs, and APUs (AI processing units). Several DL-based methods support GPU acceleration, benefiting from mature coding libraries and environments. Since GreenBVQA can be easily performed with parallel processing, we expect GreenBVQA to benefit from these accelerators as well. §.§.§ Computational complexity The number of floating point operations (FLOPs) provides another way to assess the complexity of a BVQA model. We estimate the FLOPs number of several BVQA methods and compare them with that of GreenBVQA. The FLOPs required by one 240frs@540p test video in the KoNViD-1k dataset are shown in the last column of Table <ref>. QSA-VQM, CNN-TLVQM, and VSFA demand remarkably higher FLOPs numbers, ranging from 1250 to 2500 times of GreenBVQA. Mirko et al., which is an efficient BVQA method specifically designed to reduce inference time and computational complexity, still requires about 100 times of GreenBVQA. To be consistent with the inference time analysis, three test videos of different lengths and resolutions are selected for the FLOPs comparison in Fig. <ref>. TLVQM and NIQE have the lowest FLOPs, which are in the order of 10^8 to 10^9. The FLOPs of GreenBVQA are in the order of 10^10. However, this discrepancy is not reflected by the inference time comparison in Table <ref> since GreenBVQA can be parallelized more easily. GreenBVQA allows SIMD instructions which are commonly used in the multi-thread CPU. Furthermore, GreenBVQA attains higher prediction accuracy than TLVQM and NIQE. All DL-based BVQA methods have significantly higher FLOPs. §.§ Abalation Study We conduct an ablation study on the choice of selected features in GreenBVQA. The results are reported in Table <ref>. The examined features include spatial features (S-features), temporal features (T-features), spatio-temporal features (ST-features), and spatio-color features (SC-features). Our study begins with the assessment of the effectiveness of spatial features (the first row), followed by adding temporal features (the second row). We see that both SROCC and PLCC improve in all three datasets. The addition of ST-features (the third row) can improve SROCC and PLCC for all datasets as well. Finally, we use all four feature types and observe further improvement in SROCC and PLCC (the last row). Note that the performance of ST and SC features is not reported for the CVD2014 dataset since their improvement is little. A combination of S and T features already reaches high performance for this dataset. § AN EDGE COMPUTING SYSTEM WITH BVQA Existing bitrate adaptive video augmentation methods <cit.> primarily consider the tradeoff between video bitrate and bandwidth consumption to improve the quality of experience. Yet, most of them ignore the perceptual quality of streaming videos. Perceptual video quality is more relevant to the human visual experience. A video with higher bitrates does not guarantee perceptual friendliness due to the presence of perceptual distortions such as blurriness, noise, blockiness, etc. Although the FR-VQA technique can account for the perceptual quality of streaming video, the resulting methods rely on reference videos, which are not available on edge or mobile devices. As a blind video quality assessment method, GreenBVQA can operate without any reference. Its small model size and low computational complexity make it well-suited for deployment on edge devices. Furthermore, its energy efficiency and cost-effectiveness, evidenced by short inference time in a CPU, support its applicability in an edge computing system since low-cost edge devices may not have GPUs or APUs. GreenBVQA can be used as a perceptual quality monitor on edge devices. An edge computing system that employs GreenBVQA is shown in Fig. <ref>, where GreenBVQA is used to enhance users' experience in watching videos. As shown in the figure, the system involves predicting the perceptual quality of video streams with no reference. The predicted quality score can be utilized by other modules in the system. * The predicted score can be used as feedback to the phone camera in video capturing. In certain extreme situations, such as dark or blurred video capturing conditions, a low predicted video quality score can serve as an alert so that the user can change the camera setting to get improved video quality. * In the context of video streaming over the network, it can assist the adaptive bitrate module in adjusting the bitrate of subsequent video streams. If the predicted score is higher, the coding module at the transmitter end can provide a lower bitrate video stream to save the bandwidth. * Several video pre-processing modules (e.g., video enhancement <cit.> and video denoising <cit.>) are commonly implemented on edge devices to alleviate the computational burden of the server. By leveraging the predicted video quality scores, unnecessary pre-processing operations can be saved. For instance, when a sequence of video frames is predicted to have a good visual quality, there is no need to denoise or deblur the frame sequence. GreenBVQA can also be used to evaluate the performance of video pre-processing tasks. § CONCLUSION AND FUTURE WORK As the demand for high-quality videos captured and consumed at the edge continues to grow, there is an urgent need for a perceptual video quality prediction model that can guide these tasks effectively. A lightweight blind video quality assessment method called GreenBVQA was proposed in this work. Its SROCC and PLCC prediction performance was evaluated on three popular video quality assessment datasets. GreenBVQA outperforms all conventional (non-DL-based) BVQA methods and achieves performance comparable with state-of-the-art (DL-based) BVQA methods. GreenBVQA's small model size and low computational complexity, which implies high energy efficiency, make it well-suited for integration into an edge-based video system. GreenBVQA exhibits short inference times, enabling real-time prediction of perceptual video quality scores using the CPU only. There are several promising research directions worth future investigation. First, in the context of high frame rate video capturing and transmission, it is desired to adopt adaptive bitrate (ABR) video with variable frame rates (VFR). With the emergence of advanced edge devices capable of capturing high frame rate videos, we expect GreenBVQA to support VFR video and enable ABR video transmission with proper adaptation. Second, as user-generated content (UGC) videos with diverse content grow, we see a need to tailor GreenBVQA to specific content types such as gaming and virtual reality (VR). unsrt
http://arxiv.org/abs/2306.05961v2
20230609152751
The density of ADE families of curves having squarefree discriminant
[ "Martí Oller" ]
math.NT
[ "math.NT", "11N35, 11G30, 17B70" ]
A dichotomy theorem for Γ-switchable H-colouring on m-edge coloured graphs Richard BrewsterDepartment of Mathematics and Statistics, Thompson Rivers University, Kamloops, BC, Canada, Arnott KidnerDepartment of Mathematics and Statistics, Memorial University of Newfoundland, St, John's, NL, Canada, Gary MacGillivrayDepartment of Mathematics and Statistics, University of Victoria, Victoria, BC, Canada =========================================================================================================================================================================================================================================================================================================================================== We determine the density of curves having squarefree discriminant in some families of curves that arise from Vinberg representations, showing that the global density is the product of the local densities. We do so using the framework of Thorne and Laga's PhD theses and Bhargava's orbit-counting techniques. This paper generalises a previous result by Bhargava, Shankar and Wang. As an application, we give an asymptotic for the number of reducible orbits of our representations. § INTRODUCTION The aim of this paper is to count the density of curves in certain families that have squarefree discriminant. We do so following the techniques in arithmetic statistics developed by Bhargava and his collaborators. The main idea is that many arithmetic objects of interest can be parametrised by the rational or integral orbits of a certain representation (G,V): in this situation, Bhargava's geometry-of-numbers methods allow to count these integral orbits of V, which consequently provides information on the desired arithmetic objects that would be otherwise difficult to obtain. This idea has led to many impressive results in number theory; see <cit.> or <cit.> for an overview. The present paper is inspired by the recent paper <cit.> by Bhargava, Shankar and Wang, in which they compute the density of monic integral polynomials of a given degree that have squarefree discriminant. The main technical difficulty is to bound the tail estimate of polynomials having discriminant “weakly divisible” by the square a large prime (this notion will be defined later). They do so using the representation of G = _n on the space V of n× n symmetric matrices. By relating polynomials with discriminant divisible by p^2 for a large p to certain integral orbits of the representation (G,V), they get the desired result using the aforementioned geometry-of-numbers techniques. Similar methods were used in <cit.> in the non-monic case with a different representation, and also in <cit.> for certain families of elliptic curves (in particular, their F_2 case essentially corresponds to our D_4 case). A key observation, which motivates our results, is that the representation studied in <cit.> arises as a particular case of the more general families of representations studied in <cit.>. Using the framework of Vinberg theory, Thorne found that given a simply laced Dynkin diagram, we can naturally associate to it a family of curves and a coregular representation (G,V), where the rational orbits of the representation are related to the arithmetic of the curves in the family. These results have been used, implicitly and explicitly, to study the size of 2-Selmer groups of the Jacobians of these curves, see <cit.> for some particular cases. Later, Laga unified, reproved and extended all these results in <cit.> in a uniform way. Our aim is to compute the density of curves having squarefree discriminant in these families of ADE curves. We will do so by reinterpreting the methods in <cit.> in the language of <cit.> and <cit.>. As a corollary, we will obtain the asymptotics for the number of integral reducible orbits of these representations, following <cit.>. Let 𝒟 be a Dynkin diagram of type A,D,E. In Section <ref>, we will construct a representation (G,V) associated to 𝒟, and in Section <ref> we will construct a family of curves C → B. Here, B is isomorphic to the Geometric Invariant Theory (GIT) quotient V G := [V]^G. We see that B can be identified with an affine space, and we write B = [p_d_1,…,p_d_r]. Given b ∈ B, we define its height to be ht(b) := sup(|p_d_1(b)|^1/d_1,…,|p_d_r(b)|^1/d_r). Denote by C_b the preimage of a given b∈ B under the map C → B; it will be a curve of the form given by Table <ref>. The main result of this paper concerns the density of squarefree values of the discriminant Δ(C_b) of the curve (or equivalently, the discriminant Δ(b) defined in Section <ref>). A definition for the discriminant of a plane curve can be found in <cit.>, for instance. We remark that in our definition of discriminant, we assume that it is an integer-valued polynomial in multiple variables, normalised so that the coefficients have greatest common divisor 1 (for instance, the usual discriminant for elliptic curves contains a factor of 16: we omit it in our case). Our result is related to the p-adic density of these squarefree values: we will denote by ρ(𝒟_p) the p-adic density of curves in the family C → B having discriminant indivisible by p^2 in _p; this is obtained by taking all the (finitely many) elements in b ∈ B(/p^2) and counting the proportion of them that have non-zero discriminant in /p^2. We note that under our assumptions on the discriminant, none of the local densities vanish; this can be checked with a case-by-case computation. We have lim_X →∞#{b ∈ B() |Δ(b) is squarefree, (b) < X}/#{b ∈ B() |(b) < X} = ∏_pρ(𝒟_p). To prove this theorem, we need to obtain a tail estimate to show that not too many b ∈ B() are divisible by p^2 for large primes p. A key observation in <cit.> is to separate those b ∈ B() with p^2 | Δ(b) for a prime p in two separate cases: * If p^2 | Δ(b+pc) for all c ∈ B(), we say p^2 strongly divides Δ(b) (in other words, p^2 divides Δ(b) for “mod p reasons”). * If there exists c ∈ B() such that p^2 ∤Δ(b+pc), we say p^2 weakly divides Δ(b) (in other words, p^2 divides Δ(b) for “mod p^2 reasons”). For a prime number p, let _p^(1),_p^(2) denote the set of b ∈ B() whose discriminant is strongly (resp. weakly) divisible by p^2. We prove tail estimates for _p^(1),_p^(2) separately: There exists a constant γ > 0 such that for any positive real number M we have: #⋃_p > M p prime{b ∈_p^(1)|(b) < X} = O_(X^ V + /M) + O(X^ V - 1), #⋃_p > M p prime{b ∈_p^(2)|(b) < X} = O_(X^ V + /M) + O(X^ V - γ). The implied constants are independent of X and M. As in <cit.>, the strongly divisible case follows from the use of the Ekedahl sieve; more precisely, it follows from the results in <cit.> and the fact that the discriminant polynomial is irreducible by <cit.>. Therefore, it remains to prove the (substantially harder) weakly divisible case, which is the content of most of this paper. We start in Section <ref> by giving the necessary background and introducing our objects of interest, most importantly the representation (G,V) coming from Vinberg theory and the associated family of curves C → B. The main step in the proof of Theorem <ref> is done in Section <ref>, where given a b ∈_p^(2) we obtain a special integral G()-orbit in V whose elements have invariants b. We additionally consider a distinguished subspace W_0() ⊂ V(), and we define a Q-invariant for the elements of W_0(). Then, we will see that the elements in the constructed orbit have large Q-invariant when they intersect W_0() (which happens always except for a negligible amount of times by cutting-off-the-cusp arguments). This construction is the analogue of <cit.>; we give a more detailed comparison at the end of Section <ref>. In Section <ref>, we use Bhargava's geometry-of-numbers techniques to count the special G()-orbits in V() with large Q-invariant constructed in the previous section. We do so in a manner that is completely analogous to <cit.>, relying on critical reductions given by <cit.>. The final step in the proof of Theorem <ref> relies on an elementary but rather tedious case-by-case computation. We remark that our proof implicitly also relies on other implicit case-by-case computations: namely, the cutting-off-the-cusp result in Proposition <ref> relies on an (even more tedious) exhaustive analysis of all cases, sometimes relying on lengthy computations on a computer (cf. <cit.>). Theorem <ref> follows from Theorem <ref> using a squarefree sieve, which is explained in detail in Section <ref>. The sieve is carried out in a general enough setting that allows us to count the density of subsets in B() defined by infinitely many congruence conditions. In particular, we get an application of our result to the context of <cit.>, which allows us to get an upper bound on the average size of 2-Selmer groups of families defined by infinitely many congruence condtions. For b ∈ B(), denote by J_b the Jacobian of the curve C_b. Let m be the number of marked points of the family C → B, as given in Table <ref>. Let be a κ-acceptable subset of B() in the sense of Section <ref>. Then, we have lim sup_X →∞∑_b ∈, (b)<X#_2 J_b/#{b ∈|(b) < X}≤ 3·2^m-1. Finally, in Section <ref> we apply our results to obtain the asymptotics for the number of reducible orbits of G() in V(). This is done following Method II in <cit.>. Through a global-to-local correspondence for integral orbits, we start by obtaining an asymptotic estimate for the number of integral orbits in W_0(), which relies on the squarefree sieve of Section <ref>. This estimate directly translates to an estimate for the orbits of G() in V(), of which we can obtain the precise constant for the leading term using a Jacobian change-of-variables formula. In particular, we answer Question 2 in <cit.> affirmatively, namely: Let N^red(X),N^irred(X) denote the number of reducible (resp. irreducible) G()-orbits in V() having height less than X. Then, as X tends to infinity, we have N^red(X) ≍ N^irred(X), and moreover lim_X →∞ N^red(X)/N^irred(X) tends to a rational constant. It is particularly surprising that the leading constant for the asymptotics of reducible and irreducible orbits are the same up to a rational constant. For the irreducible case, this constant comes from the computation of the volume of G()\ G(), whereas for the reducible case the constant comes from computations of certain volumes in the cuspidal region for the representation (G,V). These two ways of obtaining the constant appear to be essentially different, and we wonder if there is a natural explanation for this phenomenon. *Acknowledgements. This paper was written while the author was a PhD student under the supervision of Jack Thorne. I would like to thank him for providing many useful suggestions, guidance and encouragement during the process, and for revising an early version of this manuscript. I also wish to thank Jef Laga for his helpful comments. The project that gave rise to these results received the support of a fellowship from “la Caixa” Foundation (ID 100010434). The fellowship code is LCF/BQ/EU21/11890111. The author wishes to thank them, as well as the Cambridge Trust and the DPMMS, for their support. § PRELIMINARIES In this section, we introduce our representation (G,V) of interest, together with some of its basic properties. We do so mostly following <cit.> and <cit.>. §.§ Vinberg representations Let H be a split adjoint simple group of type A,D,E over . We assume H is equipped with a pinning (T,P,{X_α}), meaning: * T ⊂ H is a split maximal torus (determining a root system Φ_H). * P ⊂ H is a Borel subgroup containing T (determining a root basis S_H⊂Φ_H). * X_α is a generator for _α for each α∈ S_H. Let W = N_H(T)/T be the Weyl group of Φ_H, and let 𝒟 be the Dynkin diagram of H. Then, we have the following exact sequences: 0 r H r (H) r (𝒟) r 0 0 r W r (Φ_H) r (𝒟) r 0 The subgroup (T,P,{X_α}) ⊂(H) of automorphisms of H preserving the pinning determines a splitting of (<ref>). Then, we can define ϑ∈(H) as the unique element in (T,P,{X_α}) such that its image in (𝒟) under (<ref>) coincides with the image of -1 ∈(Φ_H) under (<ref>). Writing ρ̌ for the sum of fundamental coweights with respect to S_H, we define θ := ϑ∘(ρ̌(-1)) = (ρ̌(-1)) ∘ϑ. The map θ defines an involution of H, and so dθ defines an involution of the Lie algebra . By considering ± 1 eigenspaces, we obtain a /2-grading = (0) ⊕(1), where [(i),(j)] ⊂(i+j). We define G = (H^θ)^∘ and V = (1), which means that V is a representation of G by restriction of the adjoint representation. Moreover, we have Lie(G) = (0). We have the following basic result <cit.> on the GIT quotient B := V G = [V]^G. Let 𝔠⊂ V be a Cartan subspace. Then, 𝔠 is a Cartan subalgebra of , and the map N_G(𝔠) → W_ := N_H()/Z_H() is surjective. Therefore, the canonical inclusions ⊂ V ⊂ induce isomorphisms W_≅ V G ≅ H. In particular, all these quotients are isomorphic to a finite-dimensional affine space. For any field k of characteristic zero, we can define the discriminant polynomial Δ∈ k[]^H as the image of ∏_α∈Φ_Tα under the isomorphism k[𝔱]^W k[]^H. The discriminant can also be regarded as a polynomial in k[B] through the isomorphism k[]^H ≅ k[V]^G = k[B]. We can relate the discriminant to one-parameter subgroups, which we now introduce. If k/ is a field and λ_m → G_k is a homomorphism, there exists a decomposition V = ∑_i ∈ V_i, where V_i := {v ∈ V(k) |λ(t)v = t^iv ∀ t ∈_m(k)}. Every vector v ∈ V(k) can be written as v = ∑ v_i, where v_i ∈ V_i; we call the integers i with v_i ≠ 0 the weights of v. Finally, we recall that an element v ∈ is regular if its centraliser has minimal dimension. Let k/ be a field, and let v ∈ V(k). The following are equivalent: * v is regular semisimple. * Δ(v) ≠ 0. * For every non-trivial homomorphism λ_m → G_k^s, v has a positive weight with respect to λ. The reasoning is the same as in <cit.>. We remark that the Vinberg representation (G,V) can be identified explictly. For the reader's convenience, we reproduce the explicit description written in <cit.> in Table <ref>. We refer the reader to loc. cit. for the precise meaning of some of these symbols. §.§ Restricted roots In the previous section we considered the root system Φ_H of H, but we will also need to work with the restricted root system Φ(G,T^θ). The exposition in this section is inspired by <cit.>. Write Φ/ϑ for the orbits of ϑ on Φ, where ϑ is the pinned automorphism defined in the previous section. * The map X^∗(T) → X^∗(T^θ) is surjective, and the group G is adjoint. In particular, X^∗(T^θ) is spanned by Φ(G,T^θ). * Let α, β∈Φ. Then, the image of α in X^∗(T^θ) is non-zero, and α,β have the same image if and only if either α = β or α = ϑ(β). The fixed group T^θ is connected and contains regular elements of T by <cit.>. The group G has trivial center by <cit.>. For the second part, see <cit.>. Hence, we can identify Φ/ϑ with its image in X^∗(T^θ). We note that ϑ = 1 if and only if -1 is an element of the Weyl group W(H,T); in this case Φ/ϑ coincides with Φ. We can write the following decomposition: = ⊕⊕_a ∈Φ/ϑ_a, with = ^θ⊕ V_0 and _a = _a ⊕ V_a, according to the θ-grading. We have a decomposition V = V_0 ⊕⊕_a ∈Φ/ϑ V_a. For a given a ∈Φ/ϑ there are three cases to distinguish, according to the value of s = (-1)^α,ρ̌: * a = {α} and s = 1. Then, V_a = 0 and _α is spanned by X_α. * a = {α} and s = -1. Then, V_a is spanned by X_α and _α = 0. * a = {α,ϑ(α)}, with α≠ϑ(α). Then, V_a is spanned by X_α-sX_ϑ(α) and _α is spanned by X_α+sX_ϑ(α). We note that ϑ preserves the height of a root α with respect to the basis S_H. Therefore, it will make sense to define the height of a root a ∈Φ/ϑ as the height of any element in ϑ^-1(a). §.§ Transverse slices over V G In this section, we present some remarkable properties of the map π V → B, where we recall that B := V G is the GIT quotient. An _2-triple of is a triple (e,h,f) of non-zero elements of satisfying [h,e] = 2e, [h,f] = -2f, [e,f] = h. Moreover, we say this _2-triple is normal if e,f ∈(1) and h ∈(0). Every non-zero nilpotent element e ∈(1) is contained in a normal _2-triple. If e is also regular, then it is contained in a unique normal _2-triple. The first part of the statement is <cit.>, and the second part follows from <cit.>. Let r be the rank of . We say an element x ∈ is subregular if _(x) = r + 2. Subregular elements in V exist by <cit.>. Let e ∈ V be such an element, and fix a normal _2-triple (e,h,f) using Theorem <ref>. Let C = e + _V(f), and consider the natural morphism φ C → B. * The geometric fibres of φ are reduced connected curves. For b ∈ B(k), the corresponding curve C_b is smooth if and only if Δ(b) ≠ 0. * The central fibre φ^-1(0) has a unique singular point which is a simple singularity of type A_n,D_n,E_n, coinciding with the type of H. * We can choose coordinates p_d_1,…,p_d_r in B, with p_d_i being homogeneous of degree d_i, and coordinates (x,y,p_d_1,…,p_d_r) on C such that C → B is given by Table <ref>. See <cit.>. Our choice of pinning in Section <ref> determines a natural choice of a regular nilpotent element, namely E = ∑_α∈ S_H X_α∈ V(). Let (E,H,F) be its associated normal _2-triple by Theorem <ref>. We define the affine linear subspace κ_E := (E + _(F)) ∩ V as the Kostant section associated to E. Whenever E is understood, we will just denote the Kostant section by κ. The composition κ↪ V → B is an isomorphism, and every element of κ is regular. See <cit.>. Let k/ be a field and let v ∈ V(k). We say v is k-reducible if Δ(v) = 0 or if v is G(k)-conjugate to some Kostant section, and k-irreducible otherwise. We will typically refer to -(ir)reducible elements simply as (ir)reducible. We note that if k is algebraically closed, then all elements of V are reducible, see <cit.>. §.§ Integral structures So far, we have considered our objects of interest over , but for our purposes it will be crucial to define integral structures for G and V. The structure of G over comes from the general classification of split reductive groups over any non-empty scheme S: namely, every root datum is isomorphic to the root datum of a split reductive S-group (see <cit.>). By considering the root datum Φ(G,T^θ) studied in Section <ref> and the scheme S =, we get a split reductive group G defined over , such that its base change to coincides with G. By <cit.>, we know that T^θ,P^θ are a maximal split torus and a Borel subgroup of G, respectively. We also get integral structures for T^θ and P^θ inside of G. G and P^θ have class number 1: G(𝔸^∞) = G()G() and P^θ(𝔸^∞) = P^θ()P^θ(). Note that cl(G) ≤cl(P^θ) ≤cl(T^θ) by <cit.>. We see that T has class number 1 using <cit.>, since G contains a -split torus consisting of diagonal matrices in (V) and has class number 1. To obtain the -structure for V, we consider as a semisimple G-module over via the restriction of the adjoint representation. This G-module splits into a sum of simple G-modules: = (⊕_i=1^r V_i ) ⊕(⊕_i=1^s _i ), where ⊕ V_i = V and ⊕_i =, since both subspaces are G-invariant. For each of these irreducible representations, we can choose highest weight vectors v_i ∈ V_i and w_i ∈_i, and we then consider V_i := Dist(G)v_i, _i := Dist(G)w_i, where Dist(G) the algebra of distributions of G (see <cit.>). By the results in <cit.>, we have that V_i = ⊗_V_i, _i = ⊗__i and that V := ⊕V_i is a G-stable lattice inside V. By scaling the highest weight vectors if necessary, we will assume that E ∈V(). We can also consider an integral structure B on B. We can take the polynomials p_d_1,…,p_d_r∈[V]^G determined in Section <ref> and rescale them using the _m-action t · p_d_i = t^d_ip_d_i to make them lie in [V]^G. We let B := [p_d_1,…,p_d_r] and write πV→B for the corresponding morphism. We may additionally assume that the discriminant Δ defined in Section <ref> lies in [V]^G, where the coefficients of Δ in [p_d_1,…,p_d_r] may be assumed to not have a common divisor. A crucial step in our argument will be to make our constructions in _p for all p and then glue it all together using the class number one property in Proposition <ref>. For this, we will need the following lemma, which records the existence of orbits in V(_p) (cf. <cit.>): There exists an integer N_0 ≥ 1 such that for all primes p and for all b ∈B(_p) we have N_0·κ_b ∈V(_p). Our arguments in Section <ref> will implicitly rely on integral geometric properties of the representation (G,V). In there, we will need to avoid finitely many primes, or more precisely to work over S = [1/N] for a suitable N ≥ 1. By combining the previous lemma and the spreading out properties in <cit.>, we get: There exists a positive integer N ≥ 1 such that: * For every b ∈B(), the corresponding Kostant section κ_b is G()-conjugate to an element in 1/NV(). * N is admissible in the sense of <cit.>. In particular, we will always assume that N is even. We fix the integer N in Proposition <ref> throughout the rest of the paper. We will also drop the underline notation for the objects defined over , and just refer to G,V… as G,V… by abuse of notation. To end this section, we consider some further integral properties of the Kostant section. In Section <ref>, we considered κ defined over , and now we will consider some of its properties over _p. Consider the decomposition = ⊕_j ∈_j according to the height of the roots. If P^- is the negative Borel subgroup of H, N^- is its unipotent radical and 𝔭^- and 𝔫^- are their respective Lie algebras, we have 𝔭^- = ⊕_j ≤ 0_j, 𝔫^- = ⊕_j < 0_j and [E,_j] ⊂_j+1. Let R be a ring in which N is invertible. Then: * [E,𝔫_R^-] has a complement in 𝔭_R^- of rank rk_R 𝔭_R^- - rk_R 𝔫_R^-; call it Ξ. * The action map N^- × (E + Ξ) → E + 𝔭^- is an isomorphism over R. * Both maps in the composition E + Ξ→ (E + 𝔭^-) N^- → H are isomorphisms over R. See <cit.>. If R is a field of characteristic not dividing N, then Ξ can be taken to be _(F) and E + Ξ is the same as the Kostant section considered in Section <ref>. We will abuse notation by referring to both the Kostant section defined in Section <ref> and the section in Theorem <ref> by κ. § CONSTRUCTING ORBITS Given an element b ∈ B() with discriminant weakly divisible by p^2 for a large prime p, we will show how to construct a special g ∈ G([1/p]) ∖ G() such that g κ_b ∈1/NV() in a way that “remembers p”. We start by defining the distinguished subspace W_0 ⊂ V as W_0 := ⊕_a ∈Φ/ϑ ht(a) ≤ 1 V_a, where the notation is as in Section <ref>. We write an element v ∈ W_0() as v = ∑_ht(α) = 1 v_αX_α + ∑_ht(β) ≤ 0 v_β X_β, where each X_α,X_β generates each root space V_α,V_β and v_α,v_β∈. Then, we can define the Q-invariant of v ∈ W_0 as Q(v) = |∏_ht(α) = 1 v_α|. Now, define: W_M := {v ∈1/NV() | v = gκ_b for some prime p>M, g ∈ G([1/p])∖ G(), b ∈ B()}. The main result of the section is the following: Let b ∈ B(), and assume that _G()κ_b = {e}. * If p^2 weakly divides Δ(b) for some prime p > M, then W_M ∩π^-1(b) is non-empty. * If v ∈ W_M ∩ W_0, then Q(v) > M. The proof of Proposition <ref> will rely on a reduction to _2, inspired by the techniques in the proofs of <cit.> and <cit.>, which we now explain. Assume we have a connected reductive group L over a field k, together with an involution ξ. As in Section <ref>, the Lie algebra 𝔩 decomposes as 𝔩 = 𝔩(0) ⊕𝔩(1), according to the ± 1 eigenspaces of dξ. We also write L_0 for the connected component of the fixed group L^ξ. Let k be algebraically closed. We say a vector v ∈𝔩(1) is stable if the L_0-orbit of v is closed and its stabiliser Z_L_0(v) is finite. We say (L_0,𝔩(1)) is stable if it contains stable vectors. If k is not necessarily algebraically closed, we say (L_0,𝔩(1)) if (L_0,k^s,𝔩(1)_k^s) is. By <cit.>, the θ defined in Section <ref> is a stable involution, i.e. (G,V) is stable. We now prove the analogue of <cit.>: the proof is very similar and is reproduced for convenience. Let S be a [1/N]-scheme. Let (L,ξ), (L',ξ') be two pairs, each consisting of a reductive group over S whose geometric fibres are adjoint semisimple of type A_1, together with a stable involution. Then for any s ∈ S there exists an étale morphism S' → S with image containing s and an isomorphism L_S'→ L_S intertwining ξ_S' and ξ_S''. We are working étale locally on S, so we can assume that L = L' and that they are both split reductive groups. Let T denote the scheme of elements l ∈ L such that (l) ∘ξ = ξ': by <cit.>, T is a closed subscheme of L that is smooth over S. Since a surjective smooth morphism has sections étale locally, it is sufficient to show that T → S is surjective. Moreover, we can assume that S = k for an algebraically closed field k, since the formation of T commutes with base change. Let A,A' ⊂ H be maximal tori on which ξ,ξ' act as an automorphism of order 2. By the conjugacy of maximal tori, we can assume that A = A' and that ξ,ξ' define the (unique) element of order 2 in the Weyl group. Write ξ = a ξ' for some a ∈ A(k). Writing a = b^2 for some b ∈ A(k), we have ξ = b · b ·ξ' = b ·ξ' · b^-1. The conclusion is that ξ and ξ' are H(k)-conjugate (in fact, A(k)-conjugate), which completes the proof. The following lemma is the key technical part in our proof. We remark the the first part was already implicitly proven in the proof of <cit.>. Let p > N be a prime. * Let b ∈ B(_p) be an element with _pΔ(b) = 1, where _p _p^*→ is the usual normalized valuation. Let v ∈ V(_p) with π(v) = b. Then, the reduction mod p of v in V(_p) is regular. * Let b ∈ B(_p) be an element with discriminant weakly divisible by p^2, and let v ∈ V(_p) ∩π^-1(b). Then, there exists g_v,p∈ G(_p) ∖ G(_p) such that g_v,p· v ∈ V(_p). Let v__p = x_s+x_n be the Jordan decomposition of the reduction of v in _p. Then, we have a decomposition __p = _0,_p⊕_1,_p, where _0,_p = _(x_s) and _1,_p = image((x_s)). By Hensel's lemma, this decomposition lifts to __p = _0,_p⊕_1,_p, with (v) acting topologically nilpotently in _0,_p and invertibly in _1,_p. As explained in the proof of <cit.>, there is a unique closed subgroup L ⊂ H__p which is smooth over _p with connected fibres and with Lie algebra _0,_p. For the first part of the lemma, we are free to replace _p for a complete discrete valuation ring R with uniformiser p, containing _p and with algebraically closed residue field k. In this case, the spreading out properties in <cit.> guarantee that the derived group of L is of type A_1. Since the restriction of θ restricts to a stable involution in L by <cit.>, Lemma <ref> guarantees that there exists an isomorphism _0,R^der≅_2,R intertwining the action of θ on _0,R^der with the action of ξ = ((1,-1)) on _2,R. To show that v_k is regular is equivalent to showing that the nilpotent part x_n is regular in _0,k^der. The elements v_k and x_n have the same projection in _0,k^der, and given that v∈_0,R^der,dθ = -1, its image in _2,R is of the form [ 0 a; b 0 ], with _R(ab) = 1 by the spreading out properties in <cit.>. In particular, exactly one of a,b is non-zero when reduced to k, and hence x_n is regular in _0,k^der, as wanted. For the second part, we return to the case R = _p. If v ∈ V(_p) has discriminant weakly divisible by p^2, there exists w ∈ V(_p) such that _pΔ(v+pw) = 1. By the first part of the lemma, v+pw is regular mod p, and so v is regular mod p. In particular, this means that the nilpotent part x_n is a regular nilpotent in _0,_p^der. We now claim that we have an isomorphism _0,_p^der≅_2,_p intertwining θ and the previously defined ξ and sending the regular nilpotent x_n to the matrix e = [ 0 1; 0 0 ] of _2,_p. Indeed, consider the _p-scheme X = ((L/Z(L),θ),(_2,ξ)), consisting of isomorphisms between L/Z(L) and _2 that intertwine the θ and ξ-actions. Using Lemma <ref>, we see that étale-locally, X is isomorphic to (_2,ξ); in particular, it is a smooth scheme over _p. By Hensel's lemma <cit.>, to show that X has a _p-point it is sufficient to show that it has an _p-point. Now, consider the _p-scheme Y = ((L/Z(L)__p,θ,x_n),(_2,ξ,e)) of isomorphisms preserving the θ and ξ-actions which send x_n to e: it is a subscheme of X__p. Again by Lemma <ref>, Y is étale locally of the form (_2,ξ,e), since _2^ξ acts transitively on the regular nilpotents of _2^dξ = -1 for any field of characteristic p > N. In particular, we see that Y is an (_2,ξ,e)-torsor. In this situation, to see that Y(_p) is non-empty and hence that X(_p) is non-empty, it will suffice to see that (_2,ξ,e) = _p. This follows from the elementary computation of the stabiliser of e under _2^ξ, which can be seen to be trivial over any field. In conclusion, X(_p) is non-empty, meaning that there is an isomorphism _0,_p^der≅_2,_p respecting θ and ξ, and we can make it so that the projection of v in _2,_p is an element of the form [ 0 a; bp^2 0 ], with a,b ∈_p and a ∈ 1+p_p. Moreover, there exists a morphism φ_2 → L^der__p inducing the given isomorphism _0,_p^der≅_2,_p, since _2 is simply connected. The morphism φ necessarily respects the grading, and induces a map _2(_p) → L^der(_p) on the _p-points. Consider the matrix g_v,p=φ((p,p^-1)): it satisfies the conditions of the lemma, and so we are done. We start by proving the first item. Since G has class number 1 by Proposition <ref>, the natural map G() \ G([1/p]) → G(_p)\ G(_p) is a bijection. Therefore, the g_v,p constructed in Lemma <ref> corresponds to an element g_v ∈ G([1/p]) ∖ G(). By construction, g_v · v belongs to V(_p) ∩ V([1/p]) = V(). We now prove the second item. Specifically, if v ∈ W_M ∩ W_0 is given by gκ_b for some g ∈ G([1/p])∖ G(), we will prove that p | Q(v). Since the group H is adjoint, there exists a t ∈ T() that makes all the height-one coefficients of tκ_b be equal to one, and in this case we see that t ∈ T^θ(). By Theorem <ref>, there exists a unique γ∈ N^-() such that γ t κ_b = v; by taking θ-invariants in the isomorphisms of Theorem <ref>. we see that γ∈ N^-,θ(). Since the stabiliser is trivial, we see that g = γ t, or in other words that g ∈ P^-,θ([1/p]) ∖ P^-,θ(). Assume that Q(v) is invertible in _p, so that all the height-one coefficients of v are invertible. Then, there exists a t' ∈ T(_p) making all the height-one coefficients of t'v be equal to one, and by Theorem <ref>, there exists at most one element γ' in N^-(_p) such that γ' t'κ_b = v. Consequently, g ∈ P^-,θ(_p) ∩ P^-,θ([1/p]) = P^-,θ(), a contradiction. Our construction is inspired by the construction in <cit.> for the case A_n. In that case, C → B corresponds to the family of hyperelliptic curves y^2 = f(x), where f(x) has degree n+1 (there is a slight difference between this paper and <cit.>, in that we consider f(x) without an x^n term while they consider polynomials with a possibly non-zero linear term; we ignore this difference for now). The main goal of <cit.> is to construct an embedding σ_m _2^(m)→1/4W_0() ⊂1/4V() , where σ_m(f) has characteristic polynomial f and Q(σ_m(f)) = m. By taking the usual pinning in _n+1, we see that our space W_0 corresponds to the space of symmetric matrices in _n+1 where the entries above the superdiagonal are zero, and the height-one entries are precisely those in the superdiagonal. An explicit section of B can be taken to lie in 1/4W_0(): namely, if n is odd, the matrix B(b_1,…,b_n+1) = [ 0 1; 0 ⋱; 1; 0 1; -b_2/2 -b_1 1; ⋰ -b_3 -b_2/2 0 1; -b_n-2/2 ⋰ ⋰ ⋱; -b_n/2 -b_n-1 -b_n-2/2 0 1; -b_n+1 -b_n/2 0 ] can be seen to have characteristic polynomial f(x) = x^n+1 + b_1x^n +…+b_nx +b_n+1. (if n is even, a similar matrix can be given). The main observation in this case is that if m^2 weakly divides Δ(f), then there exists an l∈ such that f(x+l) = x^n+1 + p_1x^n +…+mp_nx +m^2p_n+1 (cf. <cit.>). Then, if D = (m,1,…,1,m^-1), we observe that the matrix D(B(p_1,…,p_n-1,mp_n,m^2p_n+1)+lI_n+1)D^-1 is integral, has characteristic polynomial f(x) and the entries in the superdiagonal are (m,1,…,1,m). Thus, this matrix has Q-invariant m, as desired. Our Q-invariant is slightly different to the Q-invariant defined in <cit.>, which is defined in a slightly more general subspace of V. When restricting to W_0(), their Q-invariant turns out to be a product of powers of the elements of the superdiagonal, whereas in our case we simply take the product of these elements. This difference does not affect the proof of Theorem <ref>, and we can also see that for both definitions the Q-invariant in the previous example is m. § COUNTING ORBITS In light of the results in Section <ref>, to bound families of curves with non-squarefree discriminant it is sufficient to estimate the size of the G()-invariant set W_M, which we defined as W_M = {v ∈1/NV() | v = gκ_b for some prime p>M, g ∈ G([1/p])∖ G(), b ∈ B()}. Let N(M,X) be the number of G()-orbits of W_M having height at most X. We will prove the following: There exists a real constant γ > 0 such that N(M,X) = O_(1/MX^ V + ) + O(X^ V - γ). Proving Theorem <ref> then amounts to proving Theorem <ref>. After proving both these theorems, we will deduce Theorem <ref> using a squarefree sieve. §.§ Heights Recall that B = [p_d_1,…,p_d_r]. For any b ∈ B(), we define the height of b to be ht(b) = sup_i=1,…,r |p_d_i(b)|^1/d_i. Similarly, for every v ∈ V() we define ht(v) := ht(π(v)). We record the following fact from <cit.>, which in particular means that the number of elements of B()_<X := {b ∈ B() |(b) < X} is of order X^ V: We have d_1+…+d_r = _V. §.§ Measures on G Let Φ_G = Φ(G,T^θ) be the set of roots of G. The Borel subgroup P^θ of G determines a root basis S_G and a set of positive/negative roots Φ_G^±, compatible with the choice of positive roots in H determined by the pinning of Section <ref>. Let N be the unipotent radical of the negative Borel subgroup P^-,θ. Then, there exists a maximal compact subgroup K ⊂ G() such that N() × T^θ()^∘× K → G() given by (n,t,k) ↦ ntk is a diffeomorphism; see <cit.>. The following result is a well-known property of the Iwasawa decomposition: Let dn,dt,dk be Haar measures on N(),T^θ()^∘,K, respectively. Then, the assignment f ↦∫_n ∈N()∫_t ∈ T^θ()^∘∫_k ∈ K f(ntk) δ_G(t)^-1 dn dt dk defines a Haar measure on G(). Here, δ_G(t) = ∏_β∈Φ_G^-β(t) = (t)|_N(). We now fix the Haar measures dn,dt,dk. We get the measure on T^θ()^∘ by pulling it back from the isomorphism ∏_β∈ S_Gβ T^θ()^∘→_>0^# S_G, where _>0 is given the standard Haar measure d^×λ = dλ/λ. We give K its probability Haar measure. Finally, we choose dn as the unique measure in N() such that the Haar measure on G() given by Lemma <ref> coincides with dg. §.§ Fundamental sets In this section, we construct a fundamental domain for the action of G() on V()^red, the subset of reducible elements of V() (cf. Definition <ref>). We achieve this by first constructing a fundamental set for the action of G() on G(). For any c > 0, define T_c = {t ∈ T^θ()^∘|∀β∈ S_G, β(t) ≤ c}. We define a Siegel set to be a set of the form _ω,c := ω· T_c · K, where ω⊂N() is a compact subset, c is a positive real constant and K is the compact subset fixed in Section <ref>. * For every ω⊂N() and c > 0, the set {γ∈ G() |γ·_ω,c∩_ω,c≠∅} is finite. * There exist ω⊂N() and c > 0 such that G() _ω,c = G(). The first part follows from <cit.>. Using <cit.>, we can reduce the second part to showing that G() = P()G(), which follows from <cit.>. We fix a fundamental domain for the G()-action on G() by setting = _ω,c for suitable ω,c satisfying the conclusions of the second part of Proposition <ref>. Now, the construction of a fundamental domain for the G()-action on V()^red readily follows: The multiset ·κ is a finite cover of a fundamental domain for the G()-action on V(). Moreover, the degree of the cover is absolutely bounded. The proposition follows from an analogous argument to that of <cit.>. We will denote this cover of a fundamental domain by := ·κ, and we will denote by _X those elements of having height less than X. §.§ Averaging and cutting off the cusp We are now in a position to carry out the main steps of the proof of Theorem <ref>. Fix a compact left K-invariant set G_0 ⊂ G() which is the closure of non-empty open set. An averaging argument just as in <cit.>) yields N(M,X) ≪∫_g ∈#{v ∈ ((gG_0) ·_X) ∩ W_M } dg. Since all elements t ∈ T_c satisfy β(t) ≤ c for all β∈ S_G, it follows that there is a compact subset ω of N() containing t^-1ω t for all t ∈ T_c. Then, by averaging over elements of ω and K, and using Lemma <ref>, we get: N(M,X) ≪∫_t ∈ T_c#{v ∈ (t ·_X) ∩ W_M }δ^-1(t) dt. Here we have _X = ωG_0_X, which is a bounded set. The following results will simplify our analysis: Let v_0 be the coefficient of the highest weight in V. There exists a constant γ > 0 such that ∫_t ∈ T_c#{v ∈ (t ·_X) ∩1/N(V() ∖ W_0()) | v_0 = 0 }δ(t)^-1 dt = O(X^ V - γ) ∫_t ∈ T_c#{v ∈ (t ·_X) ∩ W_M | v_0 ≠ 0 }δ(t)^-1 dt = O(X^ V - γ/5) ∫_t ∈ T_c#{v ∈ (t ·_X) ∩ W_M |#_G() v > 1 }δ(t)^-1 dt = O(X^ V - γ/5) The first bound follows from the cutting off the cusp arguments in <cit.>. The second and third bound follow from <cit.> and <cit.> respectively, and an application of the Selberg sieve as in <cit.>. Therefore, combining Proposition <ref> and Proposition <ref>, it is sufficient to estimate the integral ∫_t ∈ T_c#{v ∈ (t)·_X ∩ W_0() | Q(v) > M}δ(t)^-1 dt. This will be done in Section <ref> with a case-by-case analysis, analogously to the computations in <cit.>. The first step is to note the following. Let v ∈ W_0(). If Q(v) = 0, then Δ(v) = 0. Let {α_1,…,α_k} be the height-one weights, and assume that the coefficient of α_i of v is zero. Let λ_i_m → G_ be the one-parameter subgroup such that (α_j ∘λ_i)(t) = t^δ_ij. Then, v has no positive weights with respect to λ, and so by Proposition <ref> we get the result. In view of the lemma, if {α_1,…,α_k} are the height-one weights, it will be sufficient to integrate in the region where Xα_i(t) ≫ 1. Furthermore, we can estimate the number of lattice points in (t)·_X ∩ W_0() to be ≪∏_α∈ W_0 (Xα(t)). Finally, the condition on the Q-invariant will translate to asking that ∏_i = 1^k (Xα_i(t)) ≫ M. Combining it all together will yield ∫_t ∈ T_c#{v ∈ (t)·_X ∩ W_0() | Q(v) > M}δ(t)^-1 dt = O_( 1/M X^ V + ) for all > 0, which concludes the proof of Theorem <ref> and hence of Theorem <ref>. §.§ A squarefree sieve Theorem <ref> follows from Theorem <ref> by performing a squarefree sieve, following the methods in <cit.>. In fact, we will prove a slightly more general result about counting elements in B() imposing infinitely many congruence conditions. Let κ be a positive integer. We say a subset ⊂ B() is κ-acceptable if = B() ∩⋂_p _p, where _p ⊂ B(_p) satisfy the following: * _p is defined by congruence conditions modulo p^κ. * For all sufficiently large primes p, the set _p contains all b ∈ B(_p) such that p^2 ∤Δ(b). For any subset A ⊂ B(), denote by N(A,X) the number of elements of A having height less than X. For any prime p and any subset A_p ⊂ B(_p), we denote by ρ(A_p) the density of elements of A_p inside B(_p). Let κ be a positive integer, and let ⊂ B() be a κ-acceptable subset. Then, there exists a constant δ > 0, independent of , such that N(,X) = (∏_pρ(_p) )N(B(),X) + O_(X^ V - δ + ). Recall that B = [p_d_1,…,p_d_k]. For an element b ∈ B() of height at most X, it holds that |p_d_i(b)| < X^d_i, where by Table <ref> we see that d_i ≥ 2 for all i. For any real number M, denote by _M the big family determined by _p for p ≤ M and by B(_p) for p>M. By the Chinese Remainder Theorem we can write N(_X^2/κ,X) = (∏_p < X^2/κρ(_p) )N(B(),X) + O_(X^ V - k + ). By Theorem <ref> applied to M = X^2/κ, we have N(_X^2/κ∖,X) = O_(X^ V - δ + ) for some constant δ = min{γ,2/κ} > 0. We have ρ(_p) ≫ 1-1/p^2 for all sufficiently large p by <cit.>. An elementary computation then shows that ∏_p < X^2κρ(_p) - ∏_pρ(_p) = O(X^-2/κ). The result then follows by combining (<ref>), (<ref>) and (<ref>). §.§ Case-by-case analysis In this section, we finish the proof of Theorem <ref> by performing a case-by-case analysis. More precisely, we will emulate the computations done in <cit.> for the A_n case in the D_n,E_n cases, which involve computing: * The volume of W_0(), corresponding to the product ∏_α∈ S (Xw(α)). * The modular function δ_G(t) = ∏_β∈Φ_G^-β(t) = (t)|_N(). * The Q-invariant condition ∏_i = 1^k (Xα_i(t)) ≫ M. In all cases, we will obtain (<ref>). §.§.§ D_2n+1 The exposition in the D_n cases is inspired by <cit.> and <cit.>. We start by describing explicitly the representation (G,V) of D_2n+1 in the form given by Table <ref>. Let n ≥ 2 be an integer. Let U_1 be a -vector space with basis {e_1,…,e_n,u_1,e_n^*,…,e_1^*}, endowed with the symmetric bilinear form b_1 satisfying b_1(e_i,e_j) = b_1(e_i,u_1) = b_1(e_i^*,e_j^*) = b_1(e_i^*,u_1) = 0, b_1(e_i,e_j^*) = δ_ij and b_1(u_1,u_1) = 1 for all 1 ≤ i,j ≤ n. In this case, given a linear map A U → U we can define its adjoint as the unique map A^∗ U → U satisfying b_1(Av,w) = b_1(v,A^∗w) for all v,w ∈ U. In terms of matrices, A^* corresponds to taking the reflection of A along its antidiagonal when working with the fixed basis. We can define (U_1,b_1) := { g ∈(U_1) | gg^* = 𝕀}, with a Lie algebra that can be identified with {A ∈(U) | A = -A^* }. Let U_2 be a -vector space with basis {f_1,…,f_n,u_2,f_n^∗,…,f_1^∗}, with a similarly defined bilinear form b_2. Let (U,b) = (U_1,b_1) ⊕ (U_2,b_2). Let H = (U,b), and consider := H. With respect to the basis {e_1,…,e_n,u_1,e_n^∗,…,e_1^*,f_1,…,f_n,u_2,f_n^∗,…,f_1^∗}, the adjoint of a block matrix according to the bilinear form b is given by [ A B; C D ]^* = [ A^* C^*; B^* D^* ], where A^*,B^*,C^*,D^* denote reflection by the antidiagonal. An element of is given by {[ B A; -A^* C ] | B = -B^*, C = -C^*}. The stable involution θ is given by conjugation by (1,…,1,-1,…,-1), where the first 2n+1 entries are 1 and the last 2n+1 entries are given by -1. Under this description, we see that V = {[ 0 A; -A^* 0 ] | A ∈Mat_(2n+1)×(2n+1)}. Moreover, G = (H^θ)^∘ is isomorphic to (U_1) ×(U_2). We will use the map [ 0 A; -A^* 0 ]↦ A to establish a bijection between V and (U_2,U_1), where (g,h) ∈(U_1) ×(U_2) acts on A ∈ V as (g,h)· A = gAh^-1. Let T be the maximal torus (t_1,…,t_n,1,t_n^-1,…,t_1^-1,s_1,…,s_n,1,s_n^-1,…,s_1^-1) of G. A basis of simple roots for G is S_G = {t_1-t_2,…,t_n-1-t_n}∪{s_1-s_2,…,s_n-1-s_n}. A positive root basis for V can be taken to be S_V = {t_1-s_1,s_1-t_2,…,t_n-s_n,s_n}. For convenience, we now switch to multiplicative notation for the roots. We make the change of variables α_i = t_i/t_i+1 for i = 1,…,n-1 and α_n = t_n; similarly β_i = s_i/s_i+1 for i = 1,…,n-1 and β_n = s_n. The estimate for the volume of W_0() becomes: ∏_ω∈ W_0 Xω(t) = X^2n^2+4n+1∏_i=1^n α_i^-2in+(i-1)^2(t)β_i^-2in+i^2(t). The modular function in our case is δ^-1(t) = ∏_i=1^n α_i^2in-i^2(t)β_i^2in-i^2(t). In our fundamental domain, we have α_i(t),β_i(t) ≪ 1 (see Section <ref>). The height-one coefficients also give Xt_i/s_i ≫ 1, for i = 1,…,n, and also Xs_n ≫ 1 and Xs_i/t_i+1≫ 1 for i = 1,…,n-1. Changing variables, these integration conditions become 1 ≫α_i(t) ≫ X^-2 for all i = 1,…,n; 1 ≫β_i(t) ≫ X^-2 for i = 1,…,n-1 and 1 ≫β_n(t)≫ X^-1. Call this integration domain Ω. The Q-invariant condition in this case is X^2nα_1…α_n ≫ M; we denote by Ω_M the integration domain Ω together with this extra restiction. Then, it holds that N(M,X) ≪∫_Ω_M X^2n^2+4n+1∏_i=1^n α_i^-2i+1 d^×α d^×β ≪X^2n^2+6n+1/M∫_Ω∏_i=1^n α_i^-2i+2 d^×α d^×β = O_(1/MX^4n^2+4n+1+), as desired. Note that the X^ term comes from the integration of α_1. §.§.§ D_2n The analysis in this case is very similar to the D_2n+1 case. Now, we consider the -vector space U_1 with basis {e_1,…,e_n,e_n^*,…,e_1^*}, endowed with a symmetric bilinear form b_1(e_i,e_j) = b_1(e_i^*,e_j^*) = 0, b_1(e_i,e_j^*) = δ_ij. We also consider a -vector space U_2 with basis {f_1,…,f_n,f_n^*,…,f_1^*}, with an analogous symmetric bilinear form b_2. Let (U,b) = (U_1,b_1) ⊕ (U_2,b_2), let H' = (U,b) and define H to be the quotient of H' by its centre of order 2. Under the basis {e_1,…,e_n,e_n^*,…,e_1^*,f_1,…,f_n,f_n^*,…,f_1^*}, the stable involution is given by conjugation with (1,…,1,-1,…,-1). Similarly to the D_2n+1 case, we have V = {[ 0 A; -A^* 0 ] | A ∈Mat_2n × 2n}, where A^* denotes reflection by the antidiagonal. In this case, the group G = (H^θ)^∘ is isomorphic to (U_1) ×(U_2)/Δ(μ_2), where Δ(μ_2) denotes the diagonal inclusion of μ_2 into the centre μ_2 ×μ_2 of (U_1) ×(U_2). As before, we can identify V with the space of 2n×2n matrices using the map [ 0 A; -A^* 0 ]↦ A, where (g,h) ∈ G acts by (g,h)· A = gAh^-1. We consider the maximal torus T of H given by (t_1,…,t_n,t_n^-1,…,t_1^-1,s_1,…,s_n,s_n^-1,…,s_1^-1). A basis of simple roots for H and G are given by S_H = {t_1-s_1,s_1-t_2,…,s_n-1-t_n,t_n-s_n,s_n+t_n}, S_G = {t_1-t_2,…,t_n-1-t_n,t_n-1+t_n}∪{s_1-s_2,…,s_n-1-s_n,s_n-1+s_n}. Let α_i = t_i/t_i+1 and β_i = s_i/s_i+1 for i = 1,…,n, and let α_n = t_n-1t_n and β_n = s_n-1s_n. Under this change of variables, the volume of W_0() is: ∏_ω∈ W_0Xω(t) = X^2n(n+1)(∏_i=1^n-2α_i^-2in+i^2-i+1α_n-1^(-n^2-n+4)/2α_n^(-n^2-n+2)/2∏_i=1^n-2β_i^-2in+i^2+i(β_n-1β_n)^(-n^2+n)/2)(t). The modular function is δ^-1(t) = ∏_i=1^n-2 (α_iβ_i)^i^2-2in+i(t) (α_n-1β_n-1α_nβ_n)^-(n-1)n/2(t). The Q-invariant condition is given by X^2nα_1…α_n-2α_n ≫ M. A computation shows that the domain of integration is defined by 1 ≫α_i ≫ X^-2 for i = 1,…,n-1, 1 ≫α_n ≫ X^-4 and 1 ≫β_i ≫ X^-2 for i = 1,…,n. The integral to compute becomes N(M,X) ≪∫_Ω_M X^2n(n+1)∏_i=1^n-2α_i^-2i+1α_n-1^2-nα_n^1-nd^×α d^×β ≪X^2n(n+1)+2n/M∫_Ω∏_i=1^n-2α_i^-2i+2α_n-1^2-nα_n^2-nd^×α d^×β = O_(1/MX^4n^2+), as wanted. §.§.§ E_6 For the E_6 case, we use the conventions and computations in <cit.>. Let S_H = {α_1,…,α_6}, where the Dynkin diagram of H is: [scale=0.6] [circle, draw,label=α_1] (A1) at (0,0) ; [circle, draw,label=α_3] (A3) at (2,0) ; [circle, draw,label=α_4] (A4) at (4,0) ; [circle, draw,label=α_5] (A5) at (6,0) ; [circle, draw,label=α_6] (A6) at (8,0) ; [circle, draw,label=below:α_2] (A2) at (4,-2) ; [-] (A1) – (A3) – (A4) – (A5) – (A6); [-] (A2) – (A4); The pinned automorphism ϑ consists of a reflection around the vertical axis. We can define a root basis S_G = {β_1,β_2,β_3,β_4} of G as β_1 = α_3+α_4, β_2 = α_1, β_3 = α_3 and β_4 = α_2 + α_4. The volume of W_0() in this basis is ∏_ω∈ W_0Xω(t) = X^26 (β_1^-12β_2^-17β_3^-21β_4^-11)(t). The modular function is δ^-1(t) = (β_1^8β_2^14β_3^18β_4^10)(t). A basis for the positive roots in V is given by S_V = {β_2,-β_1+β_3+β_4,β_3,β_1-β_3}. Hence, the Q-invariant condition is X^4 β_2β_3β_4(t) ≫ M. The domain of integration can be checked to be 1 ≫β_1(t),β_4(t) ≫ X^-2, 1 ≫β_2(t),β_3(t) ≫ X^-1. Then, the integral to compute becomes: N(M,X) ≪∫_Ω_M X^26β_1^-4β_2^-3β_3^-3β_4^-1 d^×β≪X^30/M∫_Ωβ_1^-4β_2^-2β_3^-2 d^×β = O_( 1/MX^42+), as wanted. §.§.§ E_7 For the E_7 and E_8 cases, we follow the conventions in <cit.>. Let S_H = {α_1,…,α_7}, where the Dynkin diagram of H is: [scale=0.6] [circle, draw,label=α_1] (A1) at (0,0) ; [circle, draw,label=α_3] (A3) at (2,0) ; [circle, draw,label=α_4] (A4) at (4,0) ; [circle, draw,label=α_5] (A5) at (6,0) ; [circle, draw,label=α_6] (A6) at (8,0) ; [circle, draw,label=α_7] (A7) at (10,0) ; [circle, draw,label=below:α_2] (A2) at (4,-2) ; [-] (A1) – (A3) – (A4) – (A5) – (A6) – (A7); [-] (A2) – (A4); The root basis S_G = {β_1,…,β_7} can be described as β_1 = α_3 + α_4 β_2 = α_5 + α_6 β_3 = α_2 + α_4 β_4 = α_1 + α_3 β_5 = α_4 + α_5 β_6 = α_6 + α_7 β_7 = α_2 + α_3 + α_4 + α_5 The volume of W_0() can be computed to be ∏_ω∈ W_0 Xω(t) = X^42 (β_1^-8β_2^-13β_3^-16β_4^-17β_5^-15β_6^-14β_7^-10)(t). The modular function for G can be computed to be δ_G^-1(t) = (β_1^7β_2^12β_3^15β_4^16β_5^15β_6^12β_7^7)(t). The domain of integration Ω is given by Xα_i(t) ≫ 1, or equivalently 1 ≫β_1,…,β_6 ≫ X^-2 and 1 ≫β_7 ≫ X^-4. The Q-invariant condition is given by X^7 (β_1^-1/2β_3^1/2β_4β_5^1/2β_6β_7^1/2)(t) ≫ M The resulting integral to compute is N(M,X) ≪∫_Ω_MX^42β_1^-1β_2^-1β_3^-1β_4^-1β_5^-2β_6^-2β_7^-3 d^×β ≪X^49/M∫_Ωβ_1^-3/2β_2^-1β_3^-1/2β_5^-3/2β_6^-1β_7^-5/2 d^×β = O_(1/MX^70+), as desired. §.§.§ E_8 Let S_H = {α_1,…,α_8}, where the Dynkin diagram of H is: [scale=0.6] [circle, draw,label=α_1] (A1) at (0,0) ; [circle, draw,label=α_3] (A3) at (2,0) ; [circle, draw,label=α_4] (A4) at (4,0) ; [circle, draw,label=α_5] (A5) at (6,0) ; [circle, draw,label=α_6] (A6) at (8,0) ; [circle, draw,label=α_7] (A7) at (10,0) ; [circle, draw,label=α_8] (A8) at (12,0) ; [circle, draw,label=below:α_2] (A2) at (4,-2) ; [-] (A1) – (A3) – (A4) – (A5) – (A6) – (A7) – (A8); [-] (A2) – (A4); The root basis S_G = {β_1,…,β_8} can be described as β_1 = α_2 + α_3 + α_4 + α_5 β_2 = α_6 + α_5 β_3 = α_4 + α_5 β_4 = α_1 + α_3 β_5 = α_2 + α_4 β_6 = α_5 + α_6 β_7 = α_7 + α_8 β_8 = α_3 + α_4 The volume of W_0() can be computed to be ∏_ω∈ W_0 Xω(t) = X^72 (β_1^-18β_2^-30β_3^-40β_4^-47β_5^-53β_6^-57β_7^-29β_8^-30)(t). The modular function for G can be computed to be δ_G^-1(t) = (β_1^14β_2^26β_3^36β_4^44β_5^50β_6^54β_7^28β_8^28)(t). The domain of integration Ω is given by Xα_i(t) ≫ 1, or equivalently 1 ≫β_2(t),…,β_8(t) ≫ X^-2 and 1 ≫β_1(t) ≫ X^-4. The Q-invariant condition is given by X^8 (β_4β_5β_6β_7)(t) ≫ M The resulting integral to compute is N(M,X) ≪∫_Ω_M X^72β_1^-4β_2^-4β_3^-4β_4^-3β_5^-3β_6^-3β_7^-1β_8^-2 d^×β ≪X^80/M∫_Ωβ_1^-4β_2^-4β_3^-4β_4^-2β_5^-2β_6^-2β_8^-2 d^×β = O_(1/MX^128+), as desired. § COUNTING REDUCIBLE ORBITS We can apply our results, more specifically Theorem <ref> and Theorem <ref>, to prove precise asymptotics for the number of reducible orbits of the action of G() in V(). We will do so in a manner that is completely analogous to Method II in <cit.>. We start with the analogue of <cit.>. Let b ∈ B() be an element such that π^-1(b) ∩ W_0() ≠∅. For each prime p, choose v_p ∈ W_0(_p) such that π(v_p) = b. Then, there exists v ∈ W_0(), unique up to the action of P^-,θ(), such that v is P^-,θ(_p)-conjugate to v_p for all p. The proof follows from the fact the P^-,θ has class number one, by Proposition <ref>. To prove Theorem <ref>, we will need a certain bound in the number of P^-,θ(_p)-orbits on W_0(_p) having bounded Q-invariant. This bound is inspired by <cit.>. Fix a prime p. There exist real positive constants C_1,C_2 such that for every b ∈ B(_p), the number of P^-,θ(_p)-orbits in W_0(_p) ∩π^-1(b) having Q-invariant with normalised valuation at most k is bounded above by C_1(p^k)^C_2( V)^2. Throughout the proof, we will interpret G as lying inside (V), with P^-,θ = T^θN being contained inside the subgroup of lower triangular matrices. The torus T^θ will consist of diagonal matrices and N will consist of unipotent lower triangular matrices. We start by considering the case k=0. In that case, given v ∈ W_0(_p) we can act by an appropriate element t ∈ T^θ(_p) so that t· v ∈ E+𝔭^-. We note that for all primes p not dividing the N fixed in Section <ref>, Theorem <ref> implies that we can take C_1 = 1 in the statement. For a prime p | N, we will prove that any two elements of E+𝔭^- that are N(_p)-conjugate are actually conjugate by an element of N(1/p^r_p) for some fixed r. First, we note that the image of the Kostant section over _p falls inside the subspace 1/p^e_p W_0(_p) for a certain fixed exponent e_p. Let b ∈ B(_p) and let v_1,v_2 ∈ (E+𝔭^-)(_p) ∩π^-1(b). Recall that for _p we have the isomorphism N×κ→ E+𝔭^- by taking θ-invariants in Theorem <ref>. Therefore, an element n ∈N(_p) such that n · v_1 = v_2 can be obtained by setting n = n_1 n_2, where n_1 · v_1 = κ(b) and n_2 ·κ(b) = v_2 can be obtained through the previous isomorphism. Using the matrix description of N inside of G, it is an elementary computation to see that n_1,n_2 ∈N(1/p^e_p_p), which implies our claim for r = 2e_p. Take n ∈N(1/p^r_p). We can multiply this matrix on the left by suitable elements of N(_p) to modify its lower triangular entries by _p. Therefore, there are finitely many orbits of the left action of N(_p) on N(1/p^r_p): namely, the number of orbits is bounded by p^r times the number of lower triangular matrix entries. Combining this with the previous claim, we prove the lemma for k=0. Now, assume k ≥ 1. By acting with an element of T^θ(1/p^k_p), we can take any element of W_0(_p) with Q-invariant having valuation at most k to an element of (E + 𝔭^-) (_p). Let b ∈ B(_p), and let v_1,v_2 ∈ W_0(_p) ∩π^-1(b) be two elements having Q-invariant bounded by p^k. Then, an analogous argument to the above shows that v_1,v_2 are conjugate by an element of P^-,θ(1/p^kγ_p) for some constant γ. The same argument as before also shows that the P^-,θ(1/p^γ k_p)-orbits on W_0(_p) split into finitely many orbits, and this number of orbits is at most C_1(p^k)^C_2( V)^2 for some constants C_1,C_2 > 0, as wanted. We call a subset S ⊂ W_0() a big family in W_0 if S = W_0() ∩⋂_p S_p, where the sets S_p ⊂ W_0(_p) satisfy the following conditions: * For all p, the set S_p is P^-,θ(_p)-invariant and is the preimage under reduction modulo p^j of a nonempty subset of W_0(/p^j) for some j > 0. * For all sufficiently large p, the set S_p contains all the elements W_0(_p) with discriminant not divisible by p^2. We obtain the following result as a consquence of Theorem <ref> and Lemma <ref>. The proof is inspired by the proof of <cit.>. Let S ⊂ W_0() be a big family. The number of P^-,θ()-orbits of S of height at most X is ( ∏_p∫_b ∈ B(_p)#(π^-1(b)∩ S_p/P^-,θ(_p)) db)N(X) + o(X^ V). Here, db is the Euclidean measure in B(_p), normalized so that B(_p) has total volume 1; and N(X) := #{b ∈ B() |(b) < X}. The main idea of the proof is to slice the P^-,θ()-orbits in W_0() according to their Q-invariant. Fix an integer m ≥ 1, and define S(m) := {v ∈ S | Q(v) = m}; this is a big family, defined by the set S_p(m) := {v ∈ S_p | |Q(v)|_p = |m|_p}. Let b ∈ B(), Then, Lemma <ref> implies that #(π^-1(b)∩ S(m)/P^-,θ()) = ∏_p #(π^-1(b)∩ S_p(m)/P^-,θ(_p)). In particular, if p does not divide m or the N fixed in Section <ref>, there is only one P^-,θ(_p)-orbit in π^-1(b)∩ S_p(m) by Theorem <ref>. Partition B(_p) = ∪_j = 1^m_p B_p,j, where B_p,j are level sets for the function sending b ∈ B(_p) to #(π^-1(b)∩ S_p(m)/P^-,θ(_p)). Choose a tuple (j_p)_p | Nm∈∏_p | Nm{1,…,m_p}, and choose any b^* ∈π(S(m)) ∩⋂_p B_p,j_p. Then, we have ∑_b ∈ B() ∩⋂_p B_p,j_p (b)<X#(π^-1(b)∩ S(m)/P^-,θ()) = #(π^-1(b^*)∩ S(m)/P^-,θ()) ∑_b ∈ B() ∩⋂_p B_p,j_p (b)<X 1. Since S is a big family, we see that π(S_p) contains every b ∈ B(_p) with p^2 ∤Δ(b) for all sufficiently large primes p. Therefore, we can use the squarefree sieve in Theorem <ref> to obtain ∑_b ∈ B() ∩⋂_p B_p,j_p (b)<X 1 = N(X) ∏_p | Nm∫_b ∈ U_p,j_p db ·∏_p ∤ Nm∫_b ∈π(S_p) db + O_(X^ V - δ + ). Combining equations (<ref>), (<ref>) and (<ref>), summing over all tuples (j_p)_p | Nm∈∏_p | Nm{1,…,m_p} and taking into account Lemma <ref>, we obtain ∑_b ∈ B() (b) < X#(π^-1(b) ∩ S(m)/P^-,θ()) = N(X) ∏_p ∫_b ∈ B(_p)#(π^-1(b) ∩ S(m)_p/P^-,θ(_p)) db + O_(m^C_2( V)^2X^ V - δ + ). We now prove the equality in the theorem by proving both inequalities. First, we prove that the number of P^-,θ()-orbits in W_0() with height at most X is greater or equal than the expression in the statement. Let S(<M) := ∪_m < M S(m). Summing (<ref>) over all m < M yields ∑_b ∈ B() (b) < X#(π^-1(b) ∩ S(<M)/P^-,θ()) = N(X) ∑_m < M∏_p ∫_b ∈ B(_p)#(π^-1(b) ∩ S(m)_p/P^-,θ(_p)) db + O_(M^2C_2( V)^2X^ V - δ + ). Dividing this expression by N(X) and letting X →∞ gives lim inf_X →∞∑_b ∈ B() (b) < X#(π^-1(b) ∩ S/P^-,θ())/N(X)≥∑_m < M∏_p ∫_b ∈ B(_p)#(π^-1(b) ∩ S(m)_p/P^-,θ(_p)) db. Letting M →∞ on the right-hand side and factoring into an Euler product, we obtain: ∑_m = 1^∞∏_p ∫_b ∈ B(_p)#(π^-1(b)∩ S(m)_p/P^-,θ(_p)) db = ∏_p∑_e=0^∞∫_b ∈ B(_p)#(π^-1(b)∩ S(p^e)_p/P^-,θ(_p)) db = ∏_p∫_b ∈ B(_p)#(π^-1(b)∩ S_p/P^-,θ(_p)) db This proves one inequality of the theorem. To prove the reverse inequality, we let S(≥ M) := S ∖ S(<M). Theorem <ref> shows that ∑_b ∈ B() (b) < X( π^-1(b) ∩ S(≥ M)/P^-,θ()) = O_(1/MX^ V + ) + O_(X^ V - δ + ). On the other hand, from equation (<ref>) we see that ∑_b ∈ B() (b) < X( π^-1(b) ∩ S(< M)/P^-,θ()) ≤ N(X) ∏_p ∫_b ∈ B(_p)#( π^-1(b)∩ S_p/P^-,θ(_p)) db + O_(M^2C_2( V)^2X^ V - δ + ). Taking M = X^δ/4C_2( V)^2 and combining (<ref>) and (<ref>) concludes the proof. The key observation now is that the asymptotics for the number of P^-,θ()-orbits on W_0() are the same as the asymptotics for the number of G()-orbits on V(). This follows from these two observations: * If v_1,v_2 ∈ W_0() are G()-equivalent but not P^-,θ()-equivalent, then they have a non-trivial stablizer in G(), but the amount of G()-orbits with non-trivial stabilizer is negligible by Proposition <ref>. * Also by Proposition <ref>, all but negligibly many reducible G()-orbits on V() intersect W_0(). Thus, we can extend Theorem <ref> to a result that counts G()-orbits on V() satisfying infinitely many congruence conditions. More precisely, we call a subset S ⊂ V() a big family in V if S = V() ∩⋂_p S_p, where: * For all p, the set S_p is G(_p)-invariant and is the preimage under reduction modulo p^j of a nonempty subset of V(/p^j) for some j > 0. * For all sufficiently large p, the set S_p contains all the elements W_0(_p) with discriminant not divisible by p^2. Let S ⊂ V() be a big family. The number of G()-orbits of S of height at most X is ( ∏_p∫_b ∈ B(_p)#(π^-1(b)∩ S_p ∩ W_0(_p)/P^-,θ(_p))db)N(X) + o(X^ V). The constant appearing in Theorems <ref> and <ref> can be made more precise using a Jacobian change-of-variables formula, which we now introduce. Let S_V = {α_1,…,α_k} denote the height-one weights of W_0. In an analogous way to the case-by-case computations of Section <ref>, we compute ∏_ω∈ W_0ω^-1∏_β∈Φ_G^-β and we write them in the basis S_V, giving: ∏_ω∈ W_0ω^-1∏_β∈Φ_G^-β = α_1^r_1…α_k^r_k. For a given v ∈ W_0 with height-one coefficients (v_α_1,…,v_α_k), we define the function λ on W_0 to be λ(v) := v_α_1^r_1… v_α_k^r_k. We then obtain the analogue to <cit.>, which has an identical proof: Let R = or _p for some prime p. There exists a rational constant ∈^× such that for every measurable function ϕ W_0(R) → we have ∫_v ∈ W_0(R)ϕ(v)|λ(v)| dv = || ∫_b ∈ B() Δ(b) ≠ 0( ∑_f ∈π^-1(b)∩ W_0(R)/P^-,θ(R)∫_h ∈ P^-,θ(R)ϕ(h· f) dh) db. Here, dv and db are Euclidean measures, and dh is a right Haar measure on P^-,θ(R). Applying this change of variables to Theorem <ref>, we obtain: Let S be a big family in V(). Then, the number of reducible G()-orbits on S of height at most X is (B()_<1) || ( ∏_p(1-1/p)^-# S_V∫_v ∈ S_p ∩ W_0 |λ(v)|_p dv )X^ V + o(X^ V). In particular, when the big family S is the whole of V(), we obtain: The number of reducible G()-orbits on V() of height at most X is (B()_<1) || ∏_i=1^# S_Vζ(r_i+1) X^ V + o(X^ V). We now have everything we need to prove Theorem <ref>. Indeed, the asymptotics for the irreducible orbits can be read off <cit.>, where we see directly that the leading term is of order X^ V. The constant in the irreducible case is related to the computation of the volume of G()\ G() with respect to a suitably normalised Haar measure, and can be done following <cit.> and <cit.>, for instance. Surprisingly, the exact same ζ(r_i+1) coefficients appear! As stated in the introduction, we wonder if there is a natural explanation as to why these the reducible and irreducible constants should coincide.
http://arxiv.org/abs/2306.07329v2
20230612180003
Symplectic cuts and open/closed strings I
[ "Luca Cassia", "Pietro Longhi", "Maxim Zabzine" ]
hep-th
[ "hep-th", "math-ph", "math.AG", "math.MP", "math.SG" ]
=1 stmryboldUstmrymn references.bib margin=2.5cm,bottom=2.5cm equationsection
http://arxiv.org/abs/2306.02127v1
20230603145047
Finite-range effect in the two-dimensional density-induced BCS-BEC crossover
[ "Hikaru Sakakibara", "Hiroyuki Tajima", "Haozhao Liang" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "cond-mat.supr-con" ]
1,2]Hikaru Sakakibara 1,3]Hiroyuki Tajima 1,2]Haozhao Liang [1]Department of Physics, Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan [2]Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako, Saitama 351-0198, Japan [3]RIKEN Nishina Center, Wako, Saitama 351-0198, Japan We theoretically investigate the Bardeen-Cooper-Schrieffer (BCS) to Bose-Einstein condensation (BEC) crossover in a two-dimensional Fermi gas with the finite-range interaction by using the Hartree-Fock-Bogoliubov theory. Expanding the scattering phase shift in terms of the scattering length and effective range, we discuss the effect of the finite-range interaction on the pairing and thermodynamic properties. By solving the gap equation and the number equation self-consistently, we numerically calculate the effective-range dependence of the pairing gap, chemical potential, and pair size throughout the BCS-BEC crossover. Our results would be useful for further understanding of low-dimensional many-body problems. xxxx, xxx Finite-range effect in the two-dimensional density-induced BCS-BEC crossover [ July 31, 2023 ============================================================================ § INTRODUCTION Strongly correlated quantum systems are essential in various contexts of modern physics. In a fermion system with a weak attractive interaction, it is known that the Bardeen-Cooper-Schrieffer (BCS) state is realized by the formation of Cooper pairs. If the strength of the attractive interaction becomes stronger, the BCS state turns into the Bose-Einstein condensate (BEC) of tightly bound molecules without any phase transitions <cit.>. This crossover phenomenon, nowadays called the BCS-BEC crossover, has been proposed originally in electron-hole systems <cit.>. After three decades, the BCS-BEC crossover is realized by cold atomic experiments of ^40K and ^6Li <cit.>. Such cold atom systems, in which the interaction strength can be arbitrarily tuned near the Feshbach resonance <cit.>, have attracted tremendous attention as ideal simulators for other quantum many-body systems, such as superconductors and nuclear matter <cit.>. Since the absolute value of the s-wave scattering length a can dramatically be enlarged near the Feshbach resonance, the interparticle interaction can be regarded as a contact-type (i.e., zero-range interaction) and characterized by one parameter a. Recently, the BCS-BEC crossover has been observed not only in cold atom systems but also in condensed matter systems <cit.>. In superconducting systems, by tuning the carrier density instead of the strength of interaction, the density-induced BCS-BEC crossover occurs <cit.>. This can be understood as the change of the interaction parameters through the Fermi momentum k_ F <cit.> [i.e., the dimensionless coupling measure (k_ Fa)^-1 in three dimensions]. Such a crossover in condensed matter systems is in contrast with that observed in cold atom systems, where a is tuned instead of k_ F. We note that the density-induced BCS-BEC crossover has been examined in lattice two-color quantum chromodynamics simulation <cit.>. Its three-body analogue has also been discussed <cit.>. However, in general the two-body interaction in condensed matter systems, such as superconductors and semiconductors, inevitably involves a finite effective range R in the s-wave channel. It is necessary to discuss how the finite-range interaction affects physical quantities in contrast to cold atom systems. It is reported that, in the superconducting BCS-BEC crossover, the pairing gap may show a peak structure in the carrier-density dependence, which is not found in cold atom systems <cit.>. The role of the finite-range interaction for the superconducting dome, that is, the peak structure of the superconducting critical temperature T_ c in the carrier-density dependence, has also been pointed out in the context of unconventional superconductors <cit.>. Moreover, in addition to the effective-range correction, the BCS-BEC-crossover superconductors are observed in a two-dimensional (2D) material <cit.>. In 2D systems, stronger correlations can be found compared to the 3D systems because of the reduction of kinetic degrees of freedom, as unconventional superconductors are more easily found in 2D materials than 3D ones. Remarkably, a two-body bound state can be formed even for an infinitesimally small attraction in 2D <cit.>. Such a bound-state formation plays a crucial role in the density-induced BCS-BEC crossover. In the previous works, the finite-range effects in 2D systems have not been explored systematically yet. While the quantum Monte Carlo simulation has been performed with the finite-range interaction, the finite-range dependence has been examined in only the small-range regimes (0≤ k_ FR≤ 0.11) <cit.>. The effect of the negative effective range has also been examined theoretically <cit.>. Furthermore, in Ref. <cit.>, the finite-range effect in the 2D superconductor system is considered by fitting to the experimental data but the Hartree-Fock (HF) self-energy contribution, which can be significant in the case with the finite-range interaction <cit.>, has been neglected. The present authors also studied the finite-range effect in the 2D BCS-BEC crossover by using the Brueckner G-matrix approach <cit.>. However, the effect of the pairing gap has not been taken into account in Ref. <cit.>. Systematical studies of finite-range effects will also be accessible in future cold atom experiments. By incorporating the additional process to excited states in the Feshbach resonance mechanism, the two-field optical method has been proposed to arbitrarily tune not only the scattering length but also the effective range <cit.>. Furthermore, a similar experiment for controlling the interaction spatially has been performed based on the above proposal <cit.>. In this paper, we theoretically investigate the effects of the positive effective range in an attractively interacting 2D Fermi gas system by using the Hartree-Fock-Bogoliubov (HFB) theory <cit.>. The HFB theory is useful to incorporate the finite-range effect and the presence of pairing gap self-consistently with relatively small numerical costs <cit.>. For the validity of the HFB theory, at least, the mean-field theory should be justified in the weak-coupling ground state corresponding to the BCS region <cit.>. Moreover, it is known that the mean-field theory can qualitatively describe the BCS-BEC crossover physics at zero temperature, as the information of the two-body bound state is correctly incorporated in the gap equation <cit.>. While the G-matrix study for the finite-range correction <cit.> does not involve the pairing gap, both the HF self-energy and the pairing gap can be determined self-consistently in the HFB theory. To understand the finite-range effect on the density-induced BCS-BEC crossover, we numerically calculate the pairing gap and chemical potential, which are directly affected by the effective range through the HF self-energy. To see the microscopic pairing properties, we also examine the pair-correlation length. This paper is organized as follows. In Sec. <ref>, we present the theoretical model for the BCS-BEC crossover with an attractive finite-range interaction in 2D. In Sec. <ref>, we show the numerical results and discuss how the finite-range correction affects the physical quantities such as pairing gap, chemical potential, and pair-correlation length. In Sec. <ref>, we summarize this paper. § MODEL In this section, we introduce the model for the 2D BCS-BEC crossover with the finite-range attractive interaction. A two-component 2D Fermi gas with the finite-range interaction is considered, where the Hamiltonian is given by H =∑_k,σξ_kc_kσ^† c_kσ +∑_k,k',P V(k,k') c_k+P/2,↑^† c_-k+P/2,↓^† c_-k'+P/2,↓ c_k'+P/2,↑. In Eq. (<ref>), ξ_k=k^2/(2m)-μ is the kinetic energy of a fermion with mass m measured from the chemical potential μ, and c_kσ^(†) is the annihilation (creation) operator of a fermion with the momentum k and the spin σ=↑,↓. We consider the finite-range separable s-wave interaction given by V(k,k')=GΓ_kΓ_k', where G is the coupling constant and Γ_k=1/√(1+(k/Λ)^2) is the form factor, which reproduces the relative momentum dependence of the scattering phase shift δ_k up to O(k^2) <cit.>. Since we are interested in the attractive interaction, the negative coupling constant G<0 is considered. The momentum scale Λ plays a role of the momentum cutoff. In the following, we show how to relate the model parameters (i.e., G and Λ) to the 2D scattering length and effective range via the analysis of the two-body T-matrix. In 2D systems, it is known that the two-body bound state always exists with arbitrary small attractive zero-range interaction. In the case of finite-range interaction, the T-matrix is written by T( k, k';ω) =GΓ_kΓ_k'[1-G∑_pΓ_p^2/ω_+-p^2/m]^-1, where ω_+ = ω + iδ is the two-body energy with an infinitesimally small number δ. The two-body binding energy E_ b is obtained from a pole of T-matrix as 0 = 4π/mG+log(mE_ b/Λ^2)/1-mE_ b/Λ^2. Also, the scattering length a is given by a=1/Λexp(-2π/mG). The ratio between the effective range R and a is given by R/a = √(-4π/mGexp(4π/mG)). For convenience, we measure the interaction strength and the effective range by using the dimensionless parameters log(k_ Fa) and R/a, where k_ F=√(2πρ) is the Fermi momentum. In the sense of cold atomic physics, log(k_ Fa) can be tuned by changing a near the Feshbach resonance. In the density-induced BCS-BEC crossover, k_ F and log(k_ Fa) are changed with the number density ρ. Qualitatively, the dilute BEC (strong-coupling) and dense BCS (weak-coupling) regimes are characterized as log(k_ Fa)≲ 1 and log(k_ Fa) 1, respectively. Next, the HFB theory is introduced to consider the many-body ground state in the presence of the nonzero effective range. To this end, two kinds of the mean-field expectation values are introduced: the pairing gap Δ(k)=-∑_k'V(k,k')⟨ c_-k',↓ c_k',↑⟩ and the HF self-energy Σ_σ(k)=∑_k'V(k-k'/2,k-k'/2)⟨ c_k',σ̅^† c_k',σ̅⟩, where σ̅ denotes the opposite spin with respect to σ. Since we are interested in the spin-balanced case, we suppress the spin index as Σ_↑(k)=Σ_↓(k)≡Σ(k). The resulting mean-field Hamiltonian reads H_ HFB = ∑_kΨ_k^†[ξ_kτ_3+Σ(k)τ_3-Δ(k)τ_1] Ψ_k +∑_kΔ(k) ⟨ c_k,↑^† c_-k,↓^†⟩ -∑_pΣ(p) ⟨ c_p,↑^† c_p,↑⟩ +∑_k[ξ_k+Σ(k)], where τ_i is the Pauli matrix acting on the Nambu spinor Ψ_k=(c_k,↑ c_-k,↓^†)^ T. After the Bogoiubov transformation, the ground-state energy is written by E_ GS = ∑_kΔ(k) ⟨ c_k,↑^† c_-k,↓^†⟩ -∑_pΣ(p) n_p+∑_k[ξ_k+Σ(k)-E_k], where n_p=1/2(1-ξ_p/E_p) is the momentum distribution and E_k=√({ξ_k+Σ(k)}^2+|Δ(k)|^2) is the quasiparticle dispersion. Moreover, the separable interaction leads to the convinient form of pairing gap as Δ(k)=-Γ_k G∑_k'Γ_k'⟨ c_-k',↓ c_k',↑⟩≡ΔΓ_k, where the superfluid order parameter Δ≡ - G∑_k'Γ_k'⟨ c_-k',↓ c_k',↑⟩ characterizes the magnitude of the pairing gap. Also, the HF self-energy reads Σ(k) =G∑_k'Γ_|k-k'|/2^2 n_k'. We note that the momentum dependence of Σ(k) is different from Ref. <cit.>. While the interaction Hamiltonian is directly replaced by the effective interaction in Ref <cit.>, here Eq. (<ref>) is derived microscopically under the mean-field approximation. However, this difference would not change the results qualitatively. In this way, E_ GS is further rewritten as E_ GS =∑_k[ξ_k+Σ(k)-E_k]-|Δ|^2/G -∑_pΣ(p) n_p. Minimizing E_ GS with respect to Δ, we obtain the gap equation 1=-G∑_kΓ_k^2/2E_k. To determine Δ and μ for a given number density ρ self-consistently, the gap equation (<ref>) should be solved with the number-density equation ρ =∑_k[1-ξ_k+Σ(k)/E_k]. In the end of this section, to be self-contained we review the finite-range effect on a two-body problem in the present model. Note that the behavior of E_ b in the present model has already been reported in Ref. <cit.>. Figure <ref> shows the solution of the two-body binding energy E_ b as a function of the ratio between the effective range R and the scattering length a. One can check that the zero-range result E_ b,0=1/ma^2 can be obtained in the limit of R/a→ 0. In this model, E_ b has two solutions for each R/a. We focus on the solution of smaller E_ b (solid line in Fig <ref>) because the other solution is unphysically large to discuss the low-energy properties of the present system. If one increases the effective range R, E_ b is enlarged up to R/a=e^-1/2≃0.607. § RESULTS AND DISCUSSION In this section, we present the numerical results of the HFB theory for the 2D BCS-BEC crossover with the finite-range interaction. The results are obtained by solving Eqs. (<ref>), (<ref>), and (<ref>) self-consistently. Figure <ref> shows the pairing order parameter Δ, which characterizes the superfluid order in this system, as a function of the dimensionless coupling parameter log(k_ F a) in the 2D BCS-BEC crossover. The blue dashed lines in Fig. <ref> are the results of contact-type interaction given by <cit.> Δ(R/a→ 0) = √(2E_ FE_ b,0), where E_ F=k_ F^2/2m is the Fermi energy. Δ/E_ F is plotted in panels (a), (b), and (c) with different R/a, while Δ/E_ b is plotted in panels (d), (e), and (f). The black solid lines represent the results of the HFB theory with the finite-range interaction. For comparison, we also show the results with the finite-range interaction without the HF self-energy Σ(k) (the red dashed lines). This calculation is similar to that used in the previous work for a layered superconductor in the BCS-BEC crossover regime at T=0 <cit.>. In the dilute BEC region (log(k_ Fa)≲ 1), the pairing gap is enlarged by the finite-range effect. This is because the binding energy is enhanced by this effect <cit.> as shown in Fig. <ref>. In this regard, fermions can form Cooper pairs more easily than in the case with the contact-type interaction. On the other hand, in the dense BCS region (log(k_ Fa) 1), the formation of Cooper pair is suppressed by the finite-range effect. Since introducing the finite-range effect is equivalent to introducing the high-momentum cutoff (i.e., Λ), the pairing order originating from the pairing near the Fermi surface is suppressed when k_ F is comparable with Λ. Indeed, the ratio between k_ F and Λ is given by k_ F/Λ=k_ FR√(m|G|/4π). In this regard, when R/a becomes larger, the suppression of BCS pairing becomes more remarkable. In the lower panels of Fig. <ref>, the plotted Δ is normalized by E_ b, which is independent of ρ, to clarify the density dependence of Δ. In Fig. <ref>(d), the finite-range results are similar to the contact-type result, since the parameter R/a is sufficiently small. Δ/E_ b,0 with the contact-type coupling increases monotonically as Δ(R/a→ 0)/E_ b,0 = k_ Fa. However, at larger R shown in Figs. <ref>(e) and (f), the peak structure of Δ/E_ b can be found with the density dependence [namely, the log(k_ Fa) dependence] in the finite-range calculations. This difference clearly manifests the suppression of the BCS-type pairing due to the finite-range correction. For more details, in Fig. <ref>(e), when R/a increases, the peak structure of the finite-range results can be found at log(k_ Fa) ∼ 1.5. In Fig. <ref>(f), for larger R/a such a peak structure is more pronounced and shifted toward log(k_ Fa) ∼ 1.0. In the experiment of the layered superconductor system <cit.>, a similar peak structure has been found as the comparison with the theoretical results has been reported <cit.>. Such a peak structure is unique to the finite-range interaction and not found in systems with the contact-type interaction. Therefore, in order to simulate these superconductor systems by using ultracold atom systems, it is necessary to tune the effective range in addition to the scattering length by using e.g., the optical field method <cit.>. Figure <ref> shows the position of the peaks with respect to the ratio R/a. These were obtained by applying the Lagrange interpolation to the numerical data to pick up the maximum value of Δ/E_ b. In the limit of R/a→0, the peak position can be at an infinitely large log(k_ Fa) as the peak structure does not exist in the system with the contact-type interaction. The peak position log(k_ Fa)_ peak tends to decrease monotonically when R/a increases. This indicates that for larger R/a the peak can be found at lower densities. This result would be useful to determine the parameters of interaction from the experiment results with the finite-range interaction, when one tries to qualitatively examine the finite-range properties in condensed matter systems as well as in future cold atom experiments. To see more detailed properties of the density-induced BCS-BEC crossover with the finite range interaction, in Fig. <ref> we show the results of the chemical potential μ at different effective ranges: (a) R/a=0.12, (b) R/a=0.27, and (c) R/a=0.52. The blue dashed lines are the results with the contact-type interaction given by μ = E_ F-E_ b,0/2. While μ≃ E_ F is found in the BCS side, μ≃ -E_ b,0/2 in the BEC side as μ represents the change of the energy when a single particle is added to the system. In this regard, μ is regarded as a thermodynamic quantity well characterizing the BCS-BEC crossover <cit.>. One can see that the finite-range effect suppresses μ in the whole crossover regime. To understand this suppression of μ in more detail, we also show the results with the finite-range interaction but without Σ(k) (the red dashed line) for comparison. It is found that μ is lowered when E_ b is enlarged by the finite-range effect. The reduction of E_ b is remarkably important in the BEC side as we discussed for the behavior of Δ. It is also found that Σ(k) (the black solid line) further suppresses μ compared to the case without Σ(k) in the entire crossover region. The effect of Σ(k) is found to be large with increasing R/a but sufficiently small in the dilute BEC side. In the dense BCS side, generally Σ(k) gives a significant shift of μ. This shift of μ is directly related to Σ(k≃0) in the quasiparticle dispersion E_k=√({k^2/2m -μ+Σ(k)}^2+|Δ(k)|^2). Figure <ref> shows Σ(k) at R/a=0.52, where log(k_ Fa)=1.0 and 0.0 are considered. Indeed, the shift of μ induced by Σ(k), given by Σ(k≃0)≃ -E_ F at log(k_ Fa)=0.0 and Σ(k≃0)≃ -0.9E_ F at log(k_ Fa)=1.0, are close to the differences between the results with and without Σ(k) in Fig. <ref>(c). We note that Σ(|k|≃ k_ F) is also similar to Σ(k≃ 0) in this regime. This result indicates that the momentum-independent Hartree shift used in Refs. <cit.> gives a reasonable approximation. The momentum dependence of Σ(k) in Fig. <ref> is characterized by Γ_k and hence Λ. At low energy, the momentum dependence of Σ(k) may lead to the effective mass correction <cit.>. The zero-crossing point of μ, which indicates the interaction strength where the underlying Fermi surface is depleted by the pair formation, has conveniently been regarded as the crossover boundary between the BCS and BEC sides <cit.> whereas there are no distinct phase boundaries between them. While the zero-crossing point of μ can be found for arbitrary R/a as shown in Fig. <ref>, such a point is quantitatively shifted by the finite-range correction through Σ(k). In contrast to the zero-range case, where the HF self-energy is trivially zero and μ is mainly reduced by the pair formation, we need to consider two possibilities of the reduced μ in the case with the finite-range interaction, that is, the pair formation and the HF self-energy shift. In other words, at large R/a, μ can be strongly suppressed by Σ(k) even with the small pairing effect. In this regard, one needs to carefully examine μ when trying to use μ the measure of the BCS-BEC crossover with the finite-range interaction. Finally, to further investigate microscopic properties of Cooper pairs in the present system, in Fig. <ref> the result of the pair-correlation length ξ_ pair is plotted, where ξ_ pair is defined by <cit.> ξ_ pair^2=∑_k|∇_kϕ(k)|^2/∑_k|ϕ(k)|^2. The pair-correlation length is also regarded as a useful quantity to characterize the BCS-BEC crossover <cit.>. In the dilute BEC regime, fermions form tightly bound bosonic molecules and hence ξ_ pair becomes smaller. In the dense BCS regime, loosely-bound Cooper pairs are formed and their sizes are typically larger than the mean interparticle distance given by k_ F^-1. This behavior is not changed significantly by the finite effective range correction. Entirely, the finite-range effect enlarge ξ_ pair in the crossover regime. In particular, in the dense BCS side (log(k_ Fa) 1), ξ_ pair is dramatically enlarged by the effective range correction, indicating the suppression of the BCS-type pairing by the finite-range effect. While μ is suppressed by the finite-range effect through two mechanisms, that is, the Cooper pairing and the HF self-energy shift, ξ_ pair monotonically increases with R/a and is more directly related to the Cooper pairing effect. One may expect that the enhanced ξ_ pair is also related to the tremendous suppression of Δ and the resulting peak structure in the density dependence of Δ/E_ b in the dense BCS regime as shown in Fig. <ref>. In this regard, the finite-range effect can be found in a different way through the different physical quantities, and ξ_ pair would be more convenient than μ to characterize the density-induced BCS-BEC crossover with the finite-range interaction. In addition, k_ Fξ_ pair=1 can be used as a crossover boundary between the BCS and BEC regimes. In such a viewpoint, the density with k_ Fξ_ pair=1 is shifted toward the lower densities when R/a increases. This result again indicates the suppression of the Cooper pairing near the Fermi surface. This is in contrast to the shift of zero-crossing point of μ toward higher densities with increasing R/a. We note that in the HFB framework the effective Fermi surface locates at the shifted chemical potential μ^*=μ-Σ(|k|≃ k_ F) <cit.> and therefore negative μ does not immediately mean the disappearance of the Fermi surface as we find that the zero-range result and the finite-range result without Σ(k) in Fig. <ref> are close to each other in the dense BCS regime. In this way, one can understand that k_ Fξ_ pair=1 is a more appropriate indicator of the BCS-BEC crossover than μ=0. We also remark that the cluster size (corresponding to ξ_ pair in this paper) is highly important to understand the microscopic properties of the density-induced hadron-quark crossover <cit.>. Indeed, the overlapped three-body state, which is larger than the interparticle distance, can be anticipated in the dense regime <cit.>. Our study on the role of the finite effective range for the pair size would be useful for further extensions to other crossover phenomena. § SUMMARY AND PERSPECTIVES In this paper, we have theoretically investigated the finite-range effect in the 2D Fermi gas system throughout the BCS-BEC crossover by using the Hartree-Fock-Bogoliubov theory. Using the finite-range separable interaction, we have numerically solved the particle number equation and the gap equation self-consistently. The momentum-dependent HF self-energy, which were ignored in previous studies, has been considered in the numerical calculation. The finite-range effects for the pairing gap, the chemical potential, and the pair size are studied systematically. In particular, the finite-range effect works on the pairing gap in different ways in the BCS and BEC sides, respectively. While the pairing gap is enhanced by the finite-range effect (through the enhancement of the two-body binding energy) in the BEC side, it is suppressed by the finite-range effect in the BCS side because the effective pairing interaction near the Fermi surface is suppressed by the cutoff associated with the effective range. Furthermore, the maximal behavior of Δ normalized by the density-independent scale is identified as density dependent and the peak density is plotted as a function of the effective range. For the suppression of chemical potential μ by the finite-range effect, there are two mechanisms, that is, the enhanced pairing correlations and the HF self-energy shift. In this regard, one needs to carefully examine the effective-range correction to understand the behavior of μ in the density-induced BCS-BEC crossover. Finally, we have examined the finite-range effect on the pair correlation length throughout the density-induced BCS-BEC crossover. The pair size is found to be monotonically enlarged by the finite effective range in the whole crossover region and gives a useful measure for the BCS-BEC crossover from a microscopic viewpoint. For future perspectives, for further understanding of the connection between clean cold atom systems and other condensed matter systems, it is important to generalize our approach to more realistic interaction model, such as the Rytova-Keldysh potential <cit.>. To obtain more quantitative results, the HF self-energy can be further renormalized by using the Brueckner G-matrix <cit.>, where the repeated scattering process is effectively included. Also, while we have focused on the ground-state properties at zero temperature, it is an interesting future work to examine how the Berezinskii-Kosterlitz-Thouless transition is modified by the finite-range effect and the associated HF self-energy shift <cit.>. § ACKNOWLEDGEMENTS H.S. was supported by RIKEN Junior Research Associate Program. H.T. acknowledges the JSPS Grants-in-Aid for Scientific Research under Grant Nos. 18H05406, 22K13981, and 22H01158. H.L. acknowledges the JSPS Grant-in-Aid for Early-Career Scientists under Grant No. 18K13549, the JSPS Grant-in-Aid for Scientific Research (S) under Grant No. 20H05648, and the RIKEN Pioneering Project: Evolution of Matter in the Universe. ptephy
http://arxiv.org/abs/2306.03845v1
20230606163200
$ω$Test: WebView-Oriented Testing for Android Applications
[ "Jiajun Hu", "Lili Wei", "Yepang Liu", "Shing-Chi Cheung" ]
cs.SE
[ "cs.SE", "D.2.5" ]
[email protected] The Hong Kong University of Science and Technology Hong Kong China [email protected] McGill University Montreal Canada Yepang Liu is affiliated with both the Department of Computer Science and Engineering and the Research Institute of Trustworthy Autonoumous Systems, Southern University of Science and Technology. [email protected] Southern University of Science and Technology Shenzhen China Corresponding Author [email protected] The Hong Kong University of Science and Technology Hong Kong China WebView is a UI widget that helps integrate web applications into the native context of Android apps. It provides powerful mechanisms for bi-directional interactions between the native-end (Java) and the web-end (JavaScript) of an Android app. However, these interaction mechanisms are complicated and have induced various types of bugs. To mitigate the problem, various techniques have been proposed to detect WebView-induced bugs via dynamic analysis, which heavily relies on executing tests to explore WebView behaviors. Unfortunately, these techniques either require manual effort or adopt random test generation approaches, which are not able to effectively explore diverse WebView behaviors. In this paper, we study the problem of test generation for WebViews in Android apps. Effective test generation for WebViews requires identifying the essential program properties to be covered by the generated tests. To this end, we propose WebView-specific properties to characterize WebView behaviors, and devise a cross-language dynamic analysis method to identify these properties. We develop , a test generation technique that searches for event sequences covering the identified WebView-specific properties. An evaluation on real-world open-/closed-source Android apps shows that can cover diverse WebView behaviors and detect WebView-induced bugs effectively. detected previously-unknown bugs. From the bugs that we have reported to the app developers, bugs were confirmed, of which were fixed. <ccs2012> <concept> <concept_id>10011007.10011074.10011099.10011102.10011103</concept_id> <concept_desc>Software and its engineering Software testing and debugging</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Software and its engineering Software testing and debugging : WebView-Oriented Testing for Android Applications Shing-Chi Cheung =================================================== § INTRODUCTION WebViews are user interface (UI) widgets of the Android framework to display web pages in Android apps <cit.>. They are instances of the class. WebViews not only help integrate web applications developed in HTML/JavaScript into Android apps but also provide mechanisms to support interactions between the web-end (JavaScript) and the native-end (Java) of an Android app. WebViews are widely used in real-world Android apps. An earlier study <cit.>, which analyzed over one million apps from Google Play store, showed that 85% apps use WebViews in some fashion. A recent study <cit.> that randomly crawled 6000 popular apps from two leading app stores demonstrated similar results (i.e, 90.6% of them use WebViews). A dataset containing 6400 most-popular Google Play apps <cit.> recently crawled by us also showed that 83.74% apps use WebViews in their code. Furthermore, a new trending ecosystem called app-in-app (e.g., WeChat Mini-Programs <cit.>) in which a host-app uses WebViews to embed sub-apps is adopted by many high-profile apps <cit.>. In reality, millions of active users are interacting with WebViews everyday <cit.>. Despite the popularity, WebView programming is error-prone due to its complicated cross-language interaction mechanisms (e.g., bridge communication <cit.> and WebView callbacks <cit.>). Existing studies have investigated various types of bugs induced by the misuses of WebViews (e.g., security issues <cit.>). They also proposed different bug detection techniques based on static/dynamic analysis. A fundamental limitation of static analysis techniques <cit.> is their inability of analyzing dynamically loaded web data. For example, some JavaScript code may not be available for analysis until they are loaded at runtime. In comparison, dynamic analysis techniques <cit.> exercise WebViews through testing, and therefore do not suffer from such a limitation. Nonetheless, the effectiveness of dynamic analysis techniques heavily relies on the quality of the generated tests. For example, manifesting a WebView-induced bug may require the tests to trigger some specific events on the buggy WebViews. Yet, most of the existing dynamic analysis techniques either adopt manual <cit.> or random approaches for test generation <cit.>. Their generated tests cannot effectively explore diverse WebView behaviors to expose hidden issues. Conventional general-purpose test generation techniques for Android apps are also ineffective for WebView testing. A typical category of these techniques is model-based graphical user interface (GUI) testing <cit.>, whose objective is to generate tests to visit more GUI states of an Android app. The GUI state of an app, which is modeled by the hierarchy of the rendered UI elements, is an abstraction of app behaviors. Intuitively, visiting more GUI states that “look different” means that more app behaviors are explored. However, GUI states may not be a good abstraction of WebView behaviors. Loading a new page in a WebView does not necessarily mean that a new WebView behavior is explored. For example, opening two different websites in a WebView-wrapped browser app may exercise the same page-loading process of WebViews. Another typical category of conventional techniques is to guide test generation by the coverage of some program properties <cit.> that reflect the test objectives. Program statements and branches are two commonly adopted properties. For example, Sapienz <cit.>, an Android test generation technique, regards the maximization of statement coverage as a main objective in test generation. However, the program properties adopted by existing work are not suitable for testing WebViews since none of them are specifically designed to model WebView behaviors. For example, not all statements in an app are used for implementing WebView functions. Leveraging these properties cannot guide the generation of tests to systematically examine WebView behaviors. Therefore, an effective test generation technique targeting WebViews in Android apps is needed. To effectively test WebViews, it is desirable to define specific test objectives and propose properties accordingly to guide test generation. A straightforward solution is to take WebView API[WebView APIs include APIs of classes under packages and , and bridge methods <cit.>.] call sites as the properties for tests to cover. However, such a design of properties is inadequate: it only captures the interaction sites between the web-end and native-end but ignores the involved data exchanges. As we will illustrate in Section <ref>, effective WebView testing should also examine how the data sent from the web-end are manipulated at the native-end and vice versa. Based on this observation, we propose a novel design of WebView-specific properties that considers both WebView API call sites and the data exchanges involved in the web-native interactions. We also devise a cross-language dynamic analysis method to identify these properties. We further propose a WebView-oriented test generation technique . is powered by a novel fitness function, which awards those tests that can cover diverse WebView-specific properties. We evaluated on open-source and closed-source Android apps and compared it with 5 baseline methods in terms of property coverage and the number of detected WebView-induced bugs. Our evaluation results show that achieved the highest coverage and detected the most number of bugs. detected previously-unknown bugs. From the bugs that we have reported to the corresponding app developers, bugs were confirmed and of them have been fixed. To summarize, this paper makes the following major contributions: * WebView-specific properties: We propose the first design of WebView-specific properties to formally characterize WebView behaviors in Android apps. * : We propose a test generation technique , which leverages the proposed properties to generate tests that are able to explore diverse WebView behaviors. * Implementation and evaluation: We implemented and evaluated its performance on real-world Android apps. It detected previously-unknown bugs in various apps. We make the tool and experiment dataset available to facilitate future research <cit.>. § MOTIVATION AND PROPERTY DESIGN Program properties (e.g., program statements) are entities that characterize the program behaviors of interest. The coverage of these properties provides an effective measure of test adequacy for the interested behaviors. We consider both WebView APIs call sites and the data exchanges in the web-native interactions as WebView-specific properties. We demonstrate the need of doing so using two web-native interaction scenarios in Wikipedia <cit.>. §.§ Two Interaction Scenarios Figure <ref> gives the code of the two interaction scenarios. Boxes and arrows marked in blue and yellow represent Scenario #1 and #2, respectively. Boxes marked in grey represent the functions shared by both scenarios. Both of the scenarios (1) are initiated at the native-end, (2) invoke a JavaScript function to pass data to the web-end via the WebView API (line 14), and (3) send the results generated at the web-end back to the native-end via the WebView callback (line 37). The two scenarios share the same interfaces that handle cross-language data transmissions (grey boxes). The handling is different according to the interaction type. Scenario #2 contains a reported bug <cit.>. Scenario #1 depicts the process of loading an article section. Its interaction type is “section”. Figure <ref> shows a section introducing cranberry. In this scenario, the native-end constructs a JSON-format containing a requested (line 3) and an (line 4). It calls to pass and the interaction ( specified at line 5) as arguments to the web-end through a dynamically constructed JavaScript code (line 14). The web-end retrieves the data in and dispatches to the handler for type (line 17). The handler (lines 19–25) then fetches the resources through an HTTP request based on the information in (lines 20–21). If an error occurs, the error status will be sent back to the native-end (line 23). The error message is sent by prompting a dialog (line 35) and thereby triggering the native-end WebView callback (lines 37–42). The native-end receives the error as the callback parameter (line 37) and handles it at line 46. Scenario #2 depicts the process of handling a click on a link. Its interaction type is “onClick”. This scenario is initiated at the native-end (lines 7–12) by packing the link and the of the DOM element that consumes the click event into (lines 9–10). It calls (line 14) and passes the to the handler (line 26) for at the web-end. If the DOM element contains an image (line 28), a message of type is sent back to the corresponding event handler at the native-end (line 49). This scenario contains a bug that results in a crash when a is thrown at line 52 <cit.>. The exception occurs because the Java code is looking for a attribute from a JSON-typed object (line 52) prepared by the JavaScript code at line 29, where the attribute is not set. §.§ Limitation of WebView API Call Sites We can see from the example that it is inadequate to characterize different web-native interaction scenarios by considering only WebView API call sites ( at line 14 and at line 37) as properties for test coverage. While the two scenarios explore different WebView behaviors, a test enacting either one of them can cover all these call sites. A test generation technique solely driven by WebView API call sites may consider the test exploring Scenario #2 redundant after generating the test for Scenario #1, thus missing the bug residing in Scenario #2. This motivates us to propose new properties that can better characterize WebView behaviors. §.§ New Design of WebView-Specific Properties From the example, we can make a key observation. Besides the call sites of WebView APIs, it is necessary to consider the data exchanges across the language boundaries to characterize WebView behaviors. In other words, program variables that carry the transmitted data between the web-end and the native-end should be captured by WebView-specific properties. We call such variables s. A higher coverage of these properties is likely to explore more WebView behaviors. For example, a test that covers s , , and exercises Scenario #1, while a test that covers s , , and exercises Scenario #2. With s, two scenarios can be distinguished from each other. However, it is challenging to identify s. Unlike conventional program properties such as program statements or branches, s cannot be simply identified from program structures. Their identification involves two challenges: Challenge 1: JavaScript code can be dynamically constructed at runtime. If it cannot be precisely determined, we may miss s at the web-end. For example, line 14 in Figure <ref> builds a string of JavaScript code. When a section is loaded, the string can be javascript:handleMessage(“section”, {url: “...”, offset: ...});. The and field of the object expression (the second argument of ) should be identified as s since their values are set by the Java code. Without recognizing the JavaScript code, these two s can be missed, further affecting the identification of more s that depend on them (e.g., at line 20). Challenge 2: Not all variables are s. It is difficult to statically identify s precisely due to language or Android framework features (e.g., dynamic data structures, intent, etc.) that involve dynamic decisions. For example, static analysis often adopts an over-approximation strategy when analyzing collections. It may consider all the elements in an array to be s even though only some of them carry transmitted data. If these s cannot be precisely identified, the generated tests are ineffective. To address the challenges, we devise a set of dynamic tagging rules to identify WebView-specific properties on-the-fly during testing. We present the method in the next section. § DYNAMICALLY IDENTIFYING WEBVIEW-SPECIFIC PROPERTIES This section presents a dynamic analysis method to identify WebView-specific properties in an Android app. Figure <ref> gives an overview of the method. The idea is to first initialize a set of s that are directly used by WebView APIs (Tag Initialization). Then, other variables depending on existing s and vice versa are iteratively identified and added to the set of s using Forward and Backward Propagation, respectively. Figure <ref> abstracts the dependencies between variables in Scenario #1 in Figure <ref>. In this example, when is executed by a test, its argument is the first variable to be tagged as an . Then the variables that depends on are iteratively tagged as s by traversing the variable dependencies built during test execution according to a set of backward propagation rules. For example, since is built from and , both of them are tagged as s. Variable further depends on the return value of [We instrument an app to inject the propagation logic at instruction level. There will be a temporary variable to hold the return of req.url() and this variable will be tagged.] and , the corresponding variables are also tagged as s. Next, will execute the JavaScript code. We analyze the code and identify the JavaScript variables whose values are defined by Java code. In this example, the string argument , the and field of the object expression (the second argument) of are tagged as s. We further use a set of forward propagation rules to tag new s depending on existing s at the web-end. In Figure <ref>, is tagged since it depends on two existing s, and . Table <ref> shows the tagging rules to identify s. We use a function ω(x) to indicate if a variable x is an : if x is an , the value of ω(x) is 1; otherwise 0. The code that implements the tagging logic is injected into an app using program instrumentation. Therefore, it is independent from any test generation approaches. In the following, we present the three steps to identify s: Tag Initialization, Backward Propagation, and Forward Propagation. §.§ Tag Initialization In the first step, we identify an initial set of s based on tag initialization rules, which consider the following five circumstances according to the semantics of WebView APIs <cit.>. (1) Parameters and return variables of WebView callback methods are tagged as s. These native-end parameters carry the information of events occurring at the web-end. For example, the parameter of the callback method at line 37 in Figure <ref> carries the prompted message sent by at line 35. The callback return also affects WebView behaviors. For instance, WebViews determine whether to continue loading a URL according to the return of when a user clicks a URL. (2) Parameters and return variables of bridge methods <cit.> are tagged as s. Bridge methods are Java methods that can be directly invoked by JavaScript <cit.>. The parameters of bridge methods are passed from the web-end to the native-end. The result computed at the native-end is passed back to the JavaScript code by the return values of bridge methods. (3) Arguments passed to WebView APIs and variables saving return values of WebView API invocations are tagged as s. These APIs are invoked to control the behaviors of WebViews (e.g., the display of a web page) through their arguments or extract information from the web-end (e.g., the current URL) through their return values. (4) Arguments passed to bridge method invocations and the variables saving the return values of bridge method invocations are tagged as s. Different from rule (2), this rule captures the variables involved with bridge methods at the web-end. (5) The literals in the JavaScript code that is dynamically constructed and loaded by WebView APIs that can execute JavaScript (e.g., ) are tagged as s. The reason for tagging only the literals is that once a string representing JavaScript code is formatted, only the values of literals are given by the Java code. For example, the constructed JavaScript code at line 14 of Figure <ref> can be javascript:handleMessage(“section”, {url: “https://...”, offset: 3}) when sections are loaded. The literals “section”, “https://...”, and 3 are tagged as s as their values are given by three Java variables, respectively (i.e., , , ). We implement this by intercepting the constructed JavaScript code at runtime. We instrument the JavaScript code to inject the tagging logic into it before the code is loaded by WebView APIs. §.§ Backward Propagation Backward propagation is used to iteratively identify new s that existing s depend on. Backward propagation based on static analysis may misclassify many variables as s. Figure <ref> depicts one scenario. At the beginning, two string variables and are put into (i=7) and (j=2), respectively. Then at another place (k=7) is assigned to , which will be tagged as an when it is used by . In this scenario, it is difficult to statically infer that only is an . An over-approximation strategy based on static analysis can incorrectly classify all the elements in the array as s and subsequently tag properties irrelevant to WebView behavior. In this case, both and will be tagged as s, which will further lead to the tagging of , (which depends on) and , (which depends on). However, only is used by . As such, we address the problem using a dynamic variable tracking strategy. §.§.§ Variable Tracking We adapt the idea of Phosphor <cit.> and BridgeTaint <cit.> to track variables in Java and JavaScript respectively at runtime. The idea is to leverage a unique ID assigned to each variable at its initialization to precisely identify created variables at runtime. During an app's execution, variable dependencies specified by the backward propagation rules are memorized using the ID assigned to each variable. When a WebView API is invoked, we will use the recorded dependencies to propagate. Take Figure <ref> as an example. Suppose that the IDs of , , are 3, 1, 2, respectively. We can construct a data dependency that is 1, 2 ⇒ 3 when executing . When is executed, we will look at the ID of , which is 3 because is a reference of , and tag it as an . Then the backward propagation will be triggered to search for IDs that 3 depends on, which are 1 and 2, and tag the corresponding variables as s (i.e., and ). The over-approximation problem can be much alleviated this way. §.§.§ Backward Propagation Rules The backward propagation rules are presented in the Backward Propagation section of Table <ref>. The five rules identify new s that provide values to existing s in assignment expressions. They are used to identify new s that existing s depend on using the following five rules: (1) Assignment expression whose right hand side (RHS) is a unary expression: for x = op y, the tag of y is determined by the tag of x because y depends x. (2) Assignment expression whose RHS is a binary expression: for x = y op z, the tag of y and the tag of z are determined by the tag of x because y and z depend x. (3) Assignment expression whose RHS is a library method invocation: for x = lib(y_1, y_2, ..., y_n), the tags of the arguments y_1, y_2, ..., y_n are determined by the tag of x. Library methods in an Android app includes Android system APIs and APIs in third-party libraries. We rely on existing tools <cit.> to identify third-party libraries. We assume the return of a library method invocation is dependent on its arguments. (4) Assignment expression whose left hand side (LHS) is an object field reference: for x.y = z, the tag of z is determined by the tag of x because when an object as a whole is tagged as an , the variables that are assigned to its fields should be tagged as s. x = y.z is not included into the backward propagation rules because the reverse logic ω(x) →ω(y) (i.e., the tag of an object is determined by the tag of its fields) may identify a lot of false s. (5) Assignment expression whose LHS is an array element reference: for arr[i] = x, the tag of x is determined by the tag of arr because when an array as a whole is tagged as an , the variables that become its elements should be tagged as s. x = arr[i] is not included into the backward propagation rules because the reverse logic ω(x) →ω(arr) (i.e., the tag of an array is determined by the tags of its elements) may identify a lot of false s. §.§ Forward Propagation The five forward propagation rules identify new s that depend on existing s in assignment expressions. using the following five forward propagation rules: (1) Assignment expression whose RHS is a unary expression: for x = op y, the tag of x is determined by the tag of y. (2) Assignment expression whose RHS is a binary expression: for x = y op z, the tag of x is determined by the tag of y or the tag of z. (3) Assignment expression whose RHS is a library method invocation: for x = lib(y_1, y_2, ..., y_n), the tag of x is determined by the tag of any one of the arguments of the invocation. (4) Assignment expression whose RHS is an object field reference: for x = y.z, the tag of x is determined by the tag of y or the tag of y.z. For example, if the enclosing object of x (i.e., y) is tagged as an , any one of its fields (e.g., x) should also be tagged as an when it is used. (5) Assignment expression whose RHS is an array element reference: for x = arr[i], the tag of x is determined by the tag of arr or the tag of arr[i]. For example, if an array is tagged as an , its elements should also be tagged as s. Unlike backward propagation which relies on dependencies memorized during app execution, forward propagation is immediately built whenever a program statement that matches one of the rules is executed. For example, when x = y op z is executed and one of y or z is an , x will be directly tagged as a new according to rule No.12. We also make forward propagation field-sensitive, i.e., when a newly created is a reference to an object/array/collection, its fields/elements will be recursively tagged as s. §.§ WebView-Specific Properties Finally, we summarize the set of identified WebView-specific properties. At runtime, all variables including s are uniquely represented by their assigned IDs. It is a key step to alleviate the over-classification problem of s. However, the assigned IDs cannot be directly treated as the properties that tests should cover. It is because IDs are assigned in a non-deterministic way at runtime (i.e., it is increased by 1 whenever a new variable is initialized) so that even two identical tests may generate two different sets of IDs. Therefore, tests cannot be measured by the covered ID set. To generate a deterministic property set, we propose to take the def locations of s (Def_) found by backward propagation and the use locations of s (Use_) found by forward propagation as the property set. The def location of an is the place in a program where the ID of that is assigned or the place where a variable dependency on that is built. The use location of an is the place in a program where that is used (e.g., it is used as an argument of a method invocation). These locations are also uniquely indexed. To summarize, the WebView-specific property set that a test covers (P) is the union of the covered WebView API call sites (ω API), Def_, and Use_: P = ω API ∪ Def_∪ Use_ §.§ Implementation We implement the tagging/propagation logic by app instrumentation. We use Soot <cit.> and Esprima <cit.> to instrument all the Java and JavaScript statements that match the operation semantics in Table <ref>, respectively. We only instrument the JavaScript code located inside the asset folder of an app (which can be accessed by decoding an apk file via Apktool <cit.>) and the JavaScript code dynamically loaded by WebView APIs (e.g., the one loaded by ). Other JavaScipt code (e.g., code in online websites) are not instrumented as they usually do not interact with the native-end of an app. The properties covered in the JavaScript code are sent to the native-end via bridge communication <cit.>. Together with the properties covered in the Java code, they are stored in the memory and will be saved to a file in the hard disk of an Android emulator or a real device every 500 ms. The file can be read by any test generation tools to retrieve coverage information. To also measure traditional coverage such as statement/method coverage, we also implement the logic of JaCoCo <cit.> into our instrumentation tool. The tool supports statement/method coverage on both Java and JavaScript code. To ease the variable tracking, we require the instrumented app to run on a customized Android OS. The tool and the OS are publicly available online <cit.>. § TEST GENERATION This section presents how we utilize the identified WebView-specific properties to guide test generation for WebViews. We implement the test generation procedure as a tool called , which aims to generate tests that maximize the coverage of WebView-specific properties of an Android app. §.§ Overview is a search-based test generation technique that generates a sequence of events to optimize a fitness function designed to meet the objective above. Intuitively, it keeps appending events (e.g., click a widget) to exercise app UI components [UI components includes activities <cit.> and fragments <cit.>.] if new properties are frequently discovered. It leaves a UI component if existing properties are repeatedly covered or no properties are found after enough events are tried. Algorithm <ref> depicts our proposed test generation procedure. It takes an app under test (AUT) that has been instrumented according to Section <ref> and a time budget as inputs, and outputs a set of covered WebView-specific properties (P) and a set of discovered WebView-induced bugs (s). iteratively generates events to explore the AUT within the given time budget (lines 4–16). It triggers an event in each iteration and monitors the manifested s (lines 5–6) as well as the covered properties (lines 7–8). decides whether to continually append new events (lines 10–14) or to leave the current activity A (line 16) by utilizing a fitness function defined in continue() (line 9). When the foreground activity has a good fitness value, will similarly compute a fitness value for the current fragments-state A_F in A (an activity can render multiple fragments on the screen, so the set of fragments currently displayed forms a fragments-state). When the fitness is good or there is zero or one fragments-state (i.e., |A| ⩽ 1, which means there is no more fragments-state to switch) in A, will randomly pick an available event (e.g., pressing a button, scrolling a list, inputting text, rotating screens, etc.) according to the current UI hierarchy (line 11) and execute it (line 12). Otherwise, an event that can switch to another fragments-state is triggered (line 14). reports s according to pre-defined test oracles (lines 5–6) <cit.>. In the following, we explain how (1) decides whether to continue exploration from the current activity and fragments-state, and (2) identifies bugs with pre-defined test oracles. §.§ Continual Appending of Events determines whether to continue exploration according to a fitness value calculated to evaluate how good a state S is. The state S can be either an activity A or a fragments-state (A_F) in A. In this section, we will (1) define the fitness function, and (2) explain how continue() makes decision based on the fitness value. §.§.§ Fitness function aims to cover as many unique properties as possible while minimizing the number of times that a property is covered more than once in order to diversify the properties covered during test generation. Test resources could be wasted if the generated tests keep visiting properties that have already been covered many times. In addition, if a state's function is adequately explored by enough events but no WebView-specific properties are found, fewer test resources should be spent on that state. Considering these factors and given a state S, the fitness function f(S) is defined as: f(S) = w × f1(P_S, t_S) + (1-w) × f2(N_S, c_S) where P_S is the set of properties covered when exploring S, t_S is the number of times that new properties are found in S, N_S is the number of events spent on S, c_S is the number of times that code coverage increases in S, and w is a weight that balances two sub-fitness functions f1() and f2(). f1(P_S, t_S) is inversely proportional to the frequency of the properties in P_S. Suppose P_S is represented as {p_1, p_2, ..., p_m} and the number of times that the properties in P_S are covered is represented as {n_1, n_2, ..., n_m}. With this representation, f1(P_S, t_S) is defined as: f1(P_S, t_S) = ∑_i=1^mmax(0, 1-(n_i / t_S/α)^β)/|P_S| f1() ranges between 0 and 1 and a higher value indicates a better fitness. The value of f1() is high when new properties are frequently covered because in this case, the number of unique properties (|P_S|) and the number of times that property coverage increases (t_S) will be higher, and the frequency of each covered properties (n_i) will be relatively lower. The value of f1() decreases when the covered properties are repeatedly covered (thus leading to higher n_is). Two parameters α and β control the decrease rate of f1(), which are set as e and 2, respectively. To effectively cover diverse WebView-specific properties in a limited time budget, we use f2() to limit the number of events spent on a state to search for uncovered properties. We rely on the code coverage information to compute f2(). Intuitively, if no more code coverage increase is observed after enough events are spent on a state, that state should not waste any more test resources afterwards. With this design, f2(N_S, c_S) is defined as: f2(N_S, c_S) = max(0, 1-(N_S / (c_S)^r(c_S)/ϵ)^θ) Similarly, f2() also ranges between 0 and 1. A higher value of f2() indicates better fitness. When the number of times that code coverage increase observed in S is high (i.e., a higher c_S), more events N_S can be allocated to S before f2() returns a relatively small value. r(c_S) is a function that returns a value slightly smaller than 1. It is defined as r(c_S) = 1-0.001*c_S to prevent the test from being stuck in a state whose code coverage is increased too many times. Two parameters ϵ and θ also control the decrease rate of f2(), which are set to 8 and 2, respectively. When no properties are covered in a state, we only use f2() to guide testing. When WebView-specific properties are covered in a state S, we make the fitness of S (i.e., f(S)) biased to f1() by setting the weight w in Equation <ref> as 0.7. §.§.§ Continue condition The function continue() (line 9 & line 10) makes decision based on the fitness value. It returns true if the following condition is satisfied: Rand(0,1) < min(0.9, f(S)) where Rand(0,1) returns a random number between 0 (inclusive) and 1 (exclusive). Intuitively, f(S) can be treated as the probability that Condition <ref> is satisfied. A higher f(S) will make continue() more likely to return true. Therefore, will have a higher probability to continue exploration from a state with good fitness. §.§ Oracle Test oracles (line 5) are designed to detect WebView-induced bugs based on WebView-induced crashes and the lifecycle misalignment criteria proposed by  <cit.>. We identify a crash as WebView-induced if the crash results from an event executed on an element in the websites loaded by WebViews. The lifecycle misalignment criteria were designed to detect UI inconsistencies of a WebView before and after executing certain lifecycle events that make an activity restart (e.g., rotate screen). The assumption is that the displayed web page should be consistent before and after an activity restarts. For instance, the entered information of a form on a web page should not be lost after a device orientation change. Otherwise, the end-user has to re-enter the information in the form. §.§ Implementation is built on top of Appium <cit.>, a test automation framework for mobile apps. obtains the UI hierarchy using the UiAutomator2 driver <cit.> facilitated by Appium. retrieves the covered properties by reading the files that record coverage information dumped by the instrumented AUT (Section <ref>) through Android's debugging tool <cit.>. Following previous work  <cit.>, we set a 200 ms delay between events. To switch to other fragment-states (line 14 in Algorithm <ref>), remembers the events that can lead to the switching of fragment-states during testing. One of them (including not-yet-triggered events, events that can open menus/navigation-drawers, etc.) will be picked if decides to switch again. is available online <cit.>. § EVALUATION In this section, we apply to test real-world Android apps. We evaluate it by studying three research questions: * RQ1: Can effectively explore WebView behaviors? Compared with baseline methods, can achieve higher WebView-specific property coverage? * RQ2: Can effectively detect WebView-induced bugs? * RQ3: Compared with other coverage criteria, what is the bug-exposing capability of the WebView-specific property coverage criteria? Is covering more WebView-specific properties helpful to detect more WebView-induced bugs? §.§ Evaluation Subjects Our evaluation subjects contain open-source Android apps and closed-source Android apps. They are listed on our website <cit.>. Collecting open-source apps: We collected open-source apps from F-Droid <cit.>, a popular open-source app hosting site, and the apps used in  <cit.>. We filtered the apps following the selection criteria below. An app was excluded if it satisfies one of the following conditions: (1) the app does not use WebViews in its application code; (2) the app does not have any commits within the past three months at the time when we conducted experiments; (3) the app's main functions cannot be reached in a fully automated way (e.g., requires login, remote devices or resources); (4) the app cannot be installed on Android emulators; (5) the app only uses WebViews to display simple pages such as About/License/Advertisement; (6) the app is a toy app (e.g., proof-of-concept apps) or a duplicate of the others. These criteria enable us to select well-maintained apps with non-trivial use of WebViews. Following the criteria and excluding the apps that cannot run after instrumentation, apps were selected as our experiment subjects. Among them, 32 apps can be found on Google Play. These apps are large-scale (avg over 100 Kloc), well-maintained (avg over 3k revisions), highly rated (avg 4/5), and diverse (covering 13 categories). Collecting closed-source apps: We collected closed-source apps using a Google Play crawler Raccoon <cit.>. We downloaded the most-popular apps according to the rankings given by AppBrain <cit.>. We followed criteria (1), (3), (4), (5) adopted in the previous paragraph when collecting the subjects. We stopped collecting until we successfully instrumented 30 apps. These 30 apps cover 15 categories and have over 2.84 billion downloads in total. §.§ Experiment Setup §.§.§ Baselines To study the RQs, we selected baseline methods whose tools are publicly available and can run on Android 10. We compared with the following four baselines, including one state-of-the-art WebView test generation technique and three state-of-the-art general-purpose Android test generation techniques. * :  <cit.> is a Monkey-based random test generation technique that uses specially designed oracles to detect WebView-induced bugs. The oracles are also adopted by (Section <ref>). * Q-Testing: Q-Testing <cit.> is a reinforcement learning-based general-purpose Android test generation technique. During app exploration, Q-Testing is more likely to pick an event that is expected to obtain higher accumulative rewards (i.e., discover new UI states). Whether two UI states are similar or different is determined by a trained neural network. * ComboDroid: ComboDroid <cit.> is a general-purpose Android test generation technique. Its core idea is to generate a long event sequence (a combo) by combining multiple short event sequences (use cases). ComboDroid tends to combine two use cases where the latter one uses the data written by the former one so as to maximize data-flow diversity. ComboDroid provides a fully-automated variant and a semi-automated variant, and we use the former one. We set the modeling time, a parameter required by ComboDroid, to 30 minutes by following their recommendations. * Fastbot2: Fastbot2 <cit.> is a general-purpose model-based test generation product from ByteDance <cit.>. It builds a probabilistic activity-event transition model and uses reinforcement learning to assist event selection. Fastbot2 uses APE <cit.> and has been deployed at ByteDance for two years. §.§.§ Ablation Study To evaluate the necessity of including s in guiding test generation, we additionally make one more baseline. * : differs from in that it only considers WebView API call sites in the fitness function. This baseline evaluates whether using WebView API call sites alone is adequate to guide WebView testing. §.§.§ Coverage Calculation To measure the effectiveness of exploring WebView behaviors, we measure the property coverage achieved by and the baselines. In traditional test generation studies, a coverage percentage (the covered properties over the total number of available properties) is usually used. However, unlike program statements/methods, the complete set of properties is difficult to obtain in our problem since it requires cross-language data flow analysis that is both sound and complete. For fair comparisons, we define the total property set P_all for an app as the union of the properties covered by and the baselines when finishing testing. Then the coverage of a method for an app is defined as: Cov_method = |P_method|/|P_all| §.§.§ Experiment Environment We ran experiments on Android emulators running Android 10. We choose Android 10 to balance the OS popularity (it is the second most popular Android OS version when we were conducting experiments) and the number of available baselines (e.g., ComboDroid can support up to Android 10). We followed recent works <cit.> and allocated one hour to test each app. Since the executions of and all the baselines are subject to some randomness, we repeated the experiments five times to mitigate randomness in the results. Under these settings, the complete property set of an app was further enlarged to the union of P_alls over the five rounds. The experiments on closed-source apps were conducted on a machine running CentOS Stream 8, powered by AMD Ryzen Threadripper PRO 3995WX 64-Cores and 512GB memory. The experiments on open-source apps were conducted on a machine running CentOS Stream 8, powered by AMD Ryzen Threadripper 3970X 32-Core Processor and 256GB memory. We ran 16 emulators in parallel on each machine. §.§ Results for RQ1 & RQ2 In this section, we present the results of WebView-specific property coverage and the detected WebView-induced bugs achieved by each methods. When reporting the coverage results, we will exclude an app for a baseline if it cannot successfully test the app. ComboDroid requires to instrument an app before testing. It fails to instrument 7 open-source apps and 9 closed-source apps in our dataset. Q-Testing cannot successfully test 4 open-source apps and 3 closed-source apps because of multiple engineering issues (e.g., UiAutomator error, app launching failure, etc.). also fails to run on 1 open-source app and 1 closed-source app. We excluded those apps when reporting the coverage results of ComboDroid, Q-Testing, and . When reporting the results on the detected bugs, ComboDroid, Q-Testing, and Fastbot2 are not included because they are not equipped with the oracles to detect WebView-induced bugs, and therefore no such bugs can be detected. Figure <ref> and  <ref> illustrate the WebView-specific property coverage distributions and its average progressive improvements of different methods using box plots and line charts. In figure <ref>, each app's coverage achieved by a method is averaged over five rounds. In figure <ref>, the properties covered by a method on each app is accumulated from five rounds of experiments. The data behind these figures can be found on our website <cit.>. From the figures, we can see that and can significantly outperform the baseline methods on both open-/closed-source apps. In particular, can achieve 16%-36% higher coverage on average on open-source apps and 9%-33% higher coverage on average on closed-source apps according to Figure <ref>. also outperforms , which eliminates s and simply considers WebView API call sites in the fitness function. Compared with , can increase the coverage by 4%(on average)/6%(on median) and 3%(on average)/8%(on median) on open-source apps and closed-source apps, respectively, according to Figure <ref>. Such results indicate that using WebView API call sites alone is inadequate to achieve higher property coverage. Table <ref> shows the number of bugs detected by , , and over the five rounds of experiments (ComboDroid, Q-Testing, and Fastbot2 do not have the oracles to detected WebView-induced bugs). In total, these 3 methods detected 27 bugs in 19 open-source apps and 13 bugs in 8 closed-source apps in these 5 runs. On average, detected more number of bugs than and . failed to find three bugs that were detected by in open-source apps because specific event types are not supported by currently. failed to find one bug that was detected by and in a closed-source app because we found the fitness value drops quickly on the activity that contains the buggy WebView in that app. Therefore decides to spend fewer events on that activity, leaving the bug undetected. We reported all the bugs detected in open-source apps to their corresponding GitHub issue trackers and provide the issue links on our website <cit.>. To comply with each app's contributing guide and license, we thoroughly tested each app and submitted a bug report in a proper format if the bug can be consistently reproduced. If a GitHub repository is archived or the detected bug has already been fixed in newer versions of the app, we provide the reproducing steps on our website <cit.>. We also provide the reproducing steps for bugs detected in closed-source apps on our website <cit.>. Among the 22 submitted bug reports whose bugs were detected by , bugs have been confirmed and of them have been fixed. During bug reproduction, we observed that covering s is helpful in driving to explore diverse WebView behaviors, thus discovering more hidden bugs in different usage scenarios of WebViews. For example, consistently detected bugs in Notepad <cit.>, which is a notes edit app that has over 10 million downloads. The app uses a WebView to display Frequently Ask Questions (FAQs). discovered that a user's reading progress would get lost after device rotation because the WebView refreshes the page after rotation. In another scenario, Notepad uses a WebView to display Privacy Policies (PPs). Exploring FAQs and PPs will cover the same set of WebView API call sites but different sets of s. , which is solely driven by WebView API call sites, may consider these scenarios being adequately explored after a few events, thus missing the bug hidden in these scenarios. In summary, can effectively explore WebView behaviors of Android apps. It can achieve higher WebView-specific property coverage and detect more number of WebView-induced bugs. §.§ Results for RQ3 To demonstrate the bug-exposing capability of the WebView-specific property coverage criteria, we compare it with WebView API call site coverage criteria and code coverage criteria. Figure <ref>-<ref> show the coverage distributions and the progressive improvements of WebView API call cite coverage and code coverage, respectively. In addition to the number of bugs detected at the end of testing shown in Table <ref>, we plot the progressive improvements of the number of bugs detected by each method in Figure <ref>. The time that a bug is found by a method is the first time when it is detected. The time is averaged by a number between 1 to 5, depending on how many times that the bug is detected in the five rounds of experiments. Figures <ref> and <ref> suggest that the coverage criterion based on WebView API call sites has a weak correlation with the number of detected bugs. The coverage quickly converges at the initial stage of the testing (before 5 minutes). However, many bugs are detected after 5 minutes. In addition, Figure <ref> shows the code coverage achieved by and are similar on both open-source apps (avg 33.5% vs 32.9%) and closed-source apps (avg 15.9% vs 15.7%). However, Table <ref> shows that detects far more bugs than , which sugguests that code coverage criterion also has a weak correlation with the number of detected bugs. To quantitatively measure the correlations, we follow existing work <cit.> to compute Kendall correlations <cit.> between the coverage computed base on a criterion and the number of detected bugs. Kendall correlation measures the correlations between two sets of data. A value between 0 to 1 indicates they are positively correlated (0-0.4 means a low correlation, 0.4-0.7 means a moderate correlation, 0.7-1 means a strong correlation). We compute the correlations between the average coverage on the buggy apps and the number of dectected bugs achieved by each method in each round at 5,10,15,...,60 minutes. The results for WebView-specific property coverage criteria, WebView API call site coverage criteria, and code coverage criteria on open-source apps are 0.7, 0.49, and 0.67, respectively. The results for WebView-specific property coverage criteria and WebView API call site coverage criteria on closed-source apps are 0.59 and 0.53, respectively. The result on code coverage is not reliable becasue closed-source apps are highly obfuscated. The code coverage are severely affected by third-party/system libraries, which cannot be effectively distinguished from application code. The results suggest that WebView-specific property coverage criteria has a moderate to strong correlations with the number of detected bugs. In summary, the WebView-specific property coverage criteria have a stronger bug-exposing capability than the WebView API call site coverage criterion and the code coverage criterion. Covering more WebView-specific properties is helpful in detecting more WebView-induced bugs. §.§ Limitations We observed a limitation of when analyzing the experiment results. We found is not effective in reaching difficult-to-reach WebViews in an Android app. Some WebViews are deeply hidden in an app (e.g., a long activity stack is observed when the WebView is reached) so that may decide to leave a component too early, but actually more events deserve to be tried. Furthermore, we also observed that some WebViews cannot be reached until specific conditions are satisfied. For example, a weather app called RadarWeather <cit.> uses a WebView to display a weather map in an activity. The map is only available until users enters a city name in another activity. The fitness function adopted by is not able to model such information. In future work, we plan to extend with static analysis, which aims to identify the activities/fragments that are helpful to reach WebViews. §.§ Threats to Validity Our property coverage (Formula <ref>) may not reflect the “real” coverage since P_all can be different from the complete set of WebView-specific properties (there can be properties that were not discovered in our experiments). Obtaining the complete set in our problem is difficult because it requires cross-language data flow analysis that is both sound and complete. It also requires a sound and complete string analysis to predict the possible JavaScript code that is dynamically constructed at runtime. We mitigated this problem by approximating the complete set with the union of the properties identified by the six methods over five runs. Although our coverage results can be different from that computed using the complete property set, it provides a reliable evaluation of the relative performance achieved by different methods. This meets our evaluation goals and it is adopted by existing work <cit.> in which the complete set is hard to obtain. The conclusions drawn from our evaluation are affected by the representativeness of the selected subjects. To mitigate the threat, we selected real-world open-source Android apps that are large-scale, well-maintained, and diverse in categories. We also included closed-source apps that are selected from the most popular Android apps on the Google Play store. They have more than 2.8 billion downloads in total. Randomness can affect our evaluation results as the algorithms of and all the baselines involve randomness. To mitigate the issue, we follow existing works <cit.> to repeat our experiments five times. §.§ Discussion The WebView-specific properties are computed via a set of propagation rules based on explicit data flows. Like existing work <cit.>, we choose not to propagate s through implicit operations (e.g., control flows) to reduce noise. Except s, coverage could be measured based on other types of properties such as call chains that involve WebView API calls, which seems sufficient for revealing the bugs. However, we choose not to use such call chains because (1) determining calls relevant to WebViews is difficult. Appending all calls around WebView API calls can include irrelevant methods. In comparison, the analysis defined in Section <ref> can effectively determine the part of an app that is relevant to WebViews. (2) Determining the length of call chains is hard. Shorter chains may have a weak bug detection capability. For example, the covered calls for cs-bug1 and cs-bug2 <cit.> in Notepad <cit.> are the same if the length of the chain is smaller than 100 (50 calls before and after WebView API calls). Longer chains may increase the bug-detection capability, but can include many methods irrelevant to WebViews, complicating the analysis and increasing test cost. § RELATED WORK §.§ WebView Study WebView has attracted immense interest from research communities over the past ten years mainly because of the new security threats it brings to Android apps. Many attack models and mitigation solutions were proposed. For example, Bai et al. proposed BridgeTaint <cit.>, a dynamic taint tracking technique targeting the WebView's bridge communication <cit.> to detect privacy leaks and cross-language code injection attacks. More recently, Yang et al. proposed EOEDroid <cit.>, OSV-Hunter <cit.>, and DCV-Hunter <cit.> that detect three kinds of new vulnerabilities resulting from WebView's event handlers, postMessage mechanism, and iframes/popups, respectively. Hu et al. proposed  <cit.> to detect WebView-induced lifecycle misalignment bugs. Our work complements existing work because we focus on effective test generation to examine WebView behaviors in an Android app. §.§ Android Testing A large number of test generation techniques have been proposed to test Android apps <cit.>. They can be classified into two major categories according to their test objectives. One is model-based test generation <cit.> whose objective is to discover diverse GUI states of an Android app. Intuitively, more discovered GUI states that “look different” means more app behaviors are explored. Another major category looks for program properties (e.g., program statements and branches) in an app's program structure and takes them as the test objectives <cit.>. Although these existing program properties may be suitable for general-purpose testing, none of them are designed for testing WebViews in Android apps. In our paper, we design a novel coverage criterion based on WebView-specific properties to guide the test generation to effectively examine WebView behaviors in Android apps. § CONCLUSION In this paper, we proposed a novel design of WebView-specific properties that can abstract WebView behaviors in Android apps. The property can be utilized to guide test generation to explore diverse WebView behaviors. Based on this idea, we devised , a test generation technique that maximizes the number of covered WebView-specific properties. Our evaluation results show that can effectively generate tests exercising diverse WebView behaviors and detect WebView-induced bugs. now only adopts crashing and lifecycle misalignment oracles. In the future, we plan to extend the oracles and leverage to detect more types of WebView-induced bugs. We will also study how to effectively reach difficult-to-reach WebViews in our future work. § DATA AVAILABILITY We make all our data publicly available at <https://richardhooooo.github.io/wTest/>. The website includes (1) open-source and closed-source apps used in experiments, (2) coverage results achieved by and the baselines, (2) the links to the submitted bug reports and reproduction steps if the report is not able to be submitted, (3) the customized Android OS, (4) the tool and its guidance. The authors thank ISSTA 2023 reviewers. This work is supported by National Natural Science Foundation of China (Grant No. 61932021), Hong Kong Research Grant Council/General Research Fund (Grant No. 16211919), Hong Kong Research Grant Council/Research Impact Fund (Grant No. R503418), NSERC Discovery Grant RGPIN-2022-03744 DGECR-2022-00378, and Guangdong Basic and Applied Basic Research Fund (Grant No. 2021A1515011562). ACM-Reference-Format
http://arxiv.org/abs/2306.08386v1
20230614092148
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
[ "Hong Sun", "Ziqiang Li", "Pengfei Xia", "Heng Li", "Beihao Xia", "Yi Wu", "Bin Li" ]
cs.CR
[ "cs.CR", "cs.CV" ]
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios Hong Sun^1,∗ Ziqiang Li^1,∗ Pengfei Xia^1 Heng Li^2 Beihao Xia^2 Yi Wu^1 Bin Li^1 ^1 Big Data and Decision Lab, University of Science and Technology of China. ^2 Huazhong University of Science and Technology {hsun777,iceli,xpengfei}@mail.ustc.edu.cn, {liheng,xbh_hust}@hust.edu.cn [email protected], [email protected] ============================================================================================================================================================================================================================================================================================================================================================= [1]Equal Contribution. Recent deep neural networks (DNNs) have come to rely on vast amounts of training data, providing an opportunity for malicious attackers to exploit and contaminate the data to carry out backdoor attacks. These attacks significantly undermine the reliability of DNNs. However, existing backdoor attack methods make unrealistic assumptions, assuming that all training data comes from a single source and that attackers have full access to the training data. In this paper, we address this limitation by introducing a more realistic attack scenario where victims collect data from multiple sources, and attackers cannot access the complete training data. We refer to this scenario as data-constrained backdoor attacks. In such cases, previous attack methods suffer from severe efficiency degradation due to the entanglement between benign and poisoning features during the backdoor injection process. To tackle this problem, we propose a novel approach that leverages the pre-trained Contrastive Language-Image Pre-Training (CLIP) model. We introduce three CLIP-based technologies from two distinct streams: Clean Feature Suppression, which aims to suppress the influence of clean features to enhance the prominence of poisoning features, and Poisoning Feature Augmentation, which focuses on augmenting the presence and impact of poisoning features to effectively manipulate the model's behavior. To evaluate the effectiveness, harmlessness to benign accuracy, and stealthiness of our method, we conduct extensive experiments on 3 target models, 3 datasets, and over 15 different settings. The results demonstrate remarkable improvements, with some settings achieving over 100% improvement compared to existing attacks in data-constrained scenarios. Our research contributes to addressing the limitations of existing methods and provides a practical and effective solution for data-constrained backdoor attacks. § INTRODUCTION Deep neural networks (DNNs) are widely utilized and powerful machine learning algorithms inspired by the structure and functioning of the human brain. They excel at learning intricate patterns in data, making them invaluable for various applications such as image recognition <cit.>, natural language processing <cit.>, image generation <cit.>, and anomaly detection <cit.>. However, the effectiveness of DNNs heavily relies on the quantity and quality of the training data. For instance, Stable Diffusion <cit.>, a generative model with 983 million parameters, owes its success in image generation tasks to pre-training on 5 billion image-text pairs. Similarly, GPT-3 <cit.>, a language model with 175 billion parameters, owes its efficacy in diverse language processing tasks to pre-training on an extensive corpus of 45 TB of text data. As the demand for data continues to rise, many users and businesses resort to third-party sources or online collections as a convenient means of acquiring the necessary data. However, recent studies <cit.> have demonstrated that such practices can be maliciously exploited by attackers to contaminate the training data, significantly impairing the functionality and reliability of trained models. The growing adoption of neural networks across different domains has made them an attractive target for malicious attacks. One particular attack technique gaining attention is the backdoor attack <cit.>. In backdoor attacks, a neural network is deliberately injected with a hidden trigger by introducing a small number of poisoning samples into the benign training set during the training. Once the model is deployed, the attacker can activate the backdoor by providing specific inputs containing the hidden trigger, causing the model to produce incorrect results. Backdoor attacks continue to present a significant and pervasive threat across multiple sectors, including image classification <cit.>, natural language processing <cit.>, speaker verification <cit.>, malware detection <cit.>, and video recognition <cit.>. In this paper, we focus on the widely studied field of image classification. However, it is important to note that all of previous backdoor attacks are based on an assumption that may be too broad in practice. They assume that all training data has been collected from a single source, and the collected source has been poisoned by attacker (As depicted in the O2O data collection mode in Fig. <ref>). In this case, the attacker can full access to the entire training data. While this assumption allows attackers to easily poison the training data, it does not accurately reflect real-world attack scenarios. To illustrate this point, let's consider a scenario where victims possess a private dataset with limited samples. To compensate for the limited data, victims may need to augment their dataset by collecting additional data from multiple sources on the internet (referred to as the public dataset) and combine it with their private dataset to form the training set. As shown in the M2O data collection mode in Fig. <ref>, part of the sources may be poisoned by attackers, secretly[This scenario is common in the context of large models training. For instance, Stable Diffusion <cit.>, a popular generative model, pre-trains on 5 billion image-text pairs. Similarly, GPT-3 <cit.>, a language model with 175 billion parameters, pre-trains on an extensive corpus of 45 TB of text data. Both of them need to crawl large amounts of data from different websites to complete complex training tasks.]. In this case, the attackers cannot access the private dataset and can only manipulate a portion of the public dataset to carry out the poisoning process. Consequently, a discrepancy arises between the distribution of the poisoning data and the distribution of the training data, which differs from the previous pipeline of poisoning attacks. In this paper, we address a more realistic backdoor attack scenario called data-constrained backdoor attacks, where the attackers do not have access to the entire training set. To be more precise, we classify data-constrained backdoor attacks into three types based on different types of data sources: number-constrained backdoor attacks, class-constrained backdoor attacks, and domain-constrained backdoor attacks[These three backdoor attacks are more realistic attack scenario, where the data provided by each data source is independently and identically distribution in number-constrained backdoor attacks, each data source provides data belonging to different categories in class-constrained backdoor attacks, and each data source provides data from different domains in domain-constrained backdoor attacks. More description can be found in Sec. <ref>.]. Upon investigation, we have discovered that existing attack methods exhibit significant performance degradation when dealing with these data-constrained backdoor attacks. We propose that the entanglement between benign and poisoning features is a crucial factor contributing to this phenomenon. Entanglement refers to the neural networks utilizing both benign and poisoning features to make decisions for poisoning samples. However, this approach is not efficient for backdoor attacks. Ideally, an efficient backdoor attack should solely rely on the poison feature generated by the trigger to make decisions, irrespective of how the benign feature is expressed. To address the aforementioned challenge and enhance the efficiency of poisoning attacks in data-constrained backdoor scenarios, we introduce two streams: Clean Feature Suppression and Poisoning Feature Augmentation. In the Clean Feature Suppression stream, we focus on reducing the influence of clean features during the poisoning process. On the other hand, in the Poisoning Feature Augmentation stream, we aim to amplify the expression of poisoning features. To achieve these goals, we propose three techniques utilizing the pre-trained Contrastive Language-Image Pre-Training (CLIP) <cit.> model. i) CLIP-based Clean Feature Erasing (CLIP-CFE) is designed to suppress the expression of clean features. It leverages the capabilities of the CLIP model to identify and minimize the impact of clean features in the poisoning process. ii) CLIP-based Universal Adversarial Perturbations (CLIP-UAP) focuses on poisoning feature augmentation. It employs the CLIP model to generate universal adversarial perturbations that effectively enhance the expression of poisoning features in the training data. iii) CLIP-based Contrastive Feature Augmentation (CLIP-CFA) also falls under the poisoning feature augmentation stream. This technique utilizes the CLIP model to perform contrastive feature augmentation, enhancing the power of poisoning features and improving the effectiveness of the backdoor attack. Our main contributions are summarized as follows. * We present a novel and contemporary backdoor attack scenario called data-constrained backdoor attacks to image classification. Data-constrained backdoor attacks assume that attackers lack access to the entire training data, making it a versatile and practical attack with broad applicability. * Through a systematic analysis of previous attack methods, we identify the entanglement between poisoning and benign features as the primary contributing factor to their performance degradation. * To address this issue, we introduce the pre-trained CLIP model into the field of backdoor attacks for the first time. We propose three innovative technologies: CLIP-CFE, CLIP-UAP, and CLIP-CFA. Extensive evaluations conducted on 3 datasets and 3 target models, and over 15 different settings demonstrate the significant superiority of our proposed CLIP-UAP and CLIP-CFA over existing backdoor attacks. Furthermore, CLIP-CFE complements existing attack methods and can be seamlessly integrated with them, resulting in further efficiency improvements. § BACKGROUND Here we first summary the common pipeline of backdoor attacks on neural networks, and then we introduce the Contrastive Language-Image Pre-Training (CLIP) Model that has been adopted in our method. §.§ Backdoor Attacks on Neural Networks §.§.§ General Pipeline of Backdoor Attacks Consider a learning model f(· ; Θ): X → Y, where Θ represents the model's parameters and X (Y) denotes the input (output) space, with given dataset 𝒟⊂ X× Y. Backdoor attacks typically involve three essential steps: poisoning set generation, backdoor injection, and backdoor activation. Poisoning set generation. In this step, attackers employ a pre-defined poison generator 𝒯(x,t) to introduce a trigger t into a clean sample x. Specifically, they select a subset 𝒫'={(x_i,y_i)|i=1,⋯,P} from the clean training set 𝒟={(x_i,y_i)|i=1,⋯,N} (𝒫'⊂𝒟, and P≪ N) and result in the corresponding poisoning set 𝒫={(x'_i,k)|x'_i=𝒯(x_i,t), (x_i,y_i) ∈𝒫^', i=1,⋯,P}. Here, y_i and k represent the true label and the attack-target label of the clean sample x_i and the poisoning sample x'_i, respectively. Backdoor injection. In this step, The attackers mix the poisoning set 𝒫 into the clean training set 𝒟 and release the new dataset. The victims download the poisoning dataset and use it to train their own DNN models <cit.>: Θmin 1/N∑_(x, y) ∈𝒟 L(f(x; Θ), y)+ 1/P∑_(x^', k) ∈𝒫 L(f(x'; Θ), k) , where L is the classification loss such as the commonly used cross entropy loss. In this case, backdoor injection into DNNs has been completed silently. Backdoor activation. In this step, the victims deploy their compromised DNN models on model-sharing platforms and model-selling platforms, such as Model Zoo and AWS Marketplace. The compromised model behaves normally when presented with benign inputs, but attackers can manipulate its predictions to align with their malicious objectives by providing specific samples containing pre-defined triggers. §.§.§ Examples of Backdoor Attacks Here, we present three popular backdoor attack methods that serve as the baseline for our preliminary experiments, providing insight into the motivation discussed in Sec. <ref>. All attacks follow the pipeline described in Sec. <ref>. BadNets <cit.>. BadNets <cit.> is the pioneering backdoor attack in deep learning and is often used as a benchmark for subsequent research. It utilizes a 2× 2 attacker-specified pixel patch as the universal trigger pattern attached to benign samples. Blended <cit.> Chen et al. <cit.> first discuss the requirement for invisibility in backdoor attacks. They propose that the poisoning image should be visually indistinguishable from its benign counterpart to evade human inspection. To meet this requirement, they introduce a blending strategy where poisoning images are created by blending the backdoor trigger with benign images. Formally, the poison generator can be formulated as 𝒯(x,t)=λ· t+(1-λ)· x, where λ represents the blend ratio (we set λ=0.15 for all experiments in this paper), and t is an attacker-specified benign image serving as the universal trigger pattern. Universal Adversarial Perturbations (UAP) <cit.> Inspired by Universal Adversarial Perturbations (UAPs) in adversarial examples, some studies <cit.> propose optimizing a UAP on a pre-trained clean model as the natural trigger, formulated as 𝒯(x,t)=x+t, where t is a pre-defined UAP serving as the universal trigger pattern. It's worth noting that UAP-based backdoor attacks require a clean model pre-trained on the entire training set, which is not suitable for the discussed settings. However, to better explain our motivation that previous technologies exhibit significant performance degradation in data-constrained backdoor attacks, we assume the availability of a clean model pre-trained on the original training dataset in this section. It is important to acknowledge that this assumption does not hold in an actual attack scenario. §.§ Contrastive Language-Image Pre-Training (CLIP) Model Our method introduces the Contrastive Language-Image Pre-Training (CLIP) <cit.> model into backdoor injection and we introduce it here. CLIP is a revolutionary deep learning model developed by OpenAI that is designed to connects texts and images by bringing them closer in a shared latent space, under a contrastive learning manner. The CLIP model is pre-trained on 400 million image-text pairs harvested from the Web, containing two encoder: CLIP text encoder ℰ̂_t(·) and CLIP image encoder ℰ̂_i(·). These encoders project the text and image to the CLIP common embedded feature space. Since natural language is able to express a much wider set of visual concepts, it contains ability to generalize across a wide range of tasks and domains, such as text-driven image manipulation <cit.>, zero-shot classification <cit.>, domain generalization <cit.>. To our best knowledge, our paper is the first study to explore the usage of CLIP model in the security community. § DATA-CONSTRAINED BACKDOOR ATTACKS Here we first show the considered pipeline of data-constrained backdoor attacks, and than illustrate the performance degradation of previous attack methods on proposed data-constrained backdoor attacks. Finally, we attribute the degradation to the entanglement between the benign and poisoning features during the poisoning injection. §.§ Preliminaries Previous methods <cit.> have commonly followed the attack pipeline outlined in Sec. <ref>. However, this widely adopted pipeline relies on an overly loose assumption that all training data is collected from a single source and that the attacker has access to the entire training data, which is often not the case in real-world attack scenarios. In this paper, we focus on a more realistic scenario: Data-constrained Backdoor Attacks, which necessitates the definition of a modified attack pipeline as follows: Pipeline of data-constrained backdoor attacks. Similar to the general pipeline of backdoor attacks on neural networks, the proposed pipeline also consists of three steps: poisoning set generation, backdoor injection, and backdoor activation. The backdoor injection and activation steps remain unchanged from the previous pipeline. However, in the poisoning set generation step, data-constrained backdoor attacks differ from the previous pipeline. Instead of assuming access to the entire dataset 𝒟, data-constrained attacks only assume access to a clean training set 𝒟'={(x_i,y_i)|i=1,⋯,N'}, which follows a different data distribution from 𝒟. To address this, the attacker randomly selects a subset 𝒫'={(x_i,y_i)|i=1,⋯,P} from the accessible dataset 𝒟', and creates the corresponding poisoning set 𝒫={(x'_i,k)|x'_i=𝒯(x_i,t), (x_i,y_i) ∈𝒫^', i=1,⋯,P}. Additionally, based on the different constraints imposed by the accessible training set 𝒟', data-constrained backdoor attacks are further categorized into three types: Number-constrained Backdoor Attacks, Class-constrained Backdoor Attacks, and Domain-constrained Backdoor Attacks. The details can be found in Sec. <ref>, Sec <ref>, and Sec. <ref>, respectively. Experimental settings. To evaluate the performance of three backdoor attack methods (BadNets, Blended, and UAP) under data-constrained scenarios, we conduct experiments on the CIFAR-10 dataset. Specifically, we consider three types of data constraints: number, class, and domain. The settings for the poisoning attacks follow those described in Sec. <ref>. In all attacks, we set the attack-target label k to category 0. For our experiments, we select the VGG-16 model as the victim model and employ SGD as the optimizer with a weight decay of 5e-4 and a momentum of 0.9. The batch size is set to 256, and the initial learning rate is set to 0.01. The learning rate is multiplied by 0.1 at the 35-th and 55-th epochs, and the training is conducted for a total of 70 epochs. §.§ Number-constrained Backdoor Attacks Definition. Let 𝒟' denote the data manipulable by the malicious source, and 𝒟 represent all the data available to the data collector. In the number-constrained scenario, as illustrated in Fig. <ref> (a), the data collector gathers data from multiple sources, including both malicious and benign sources, to form 𝒟. The data provided by each data source is independently and identically distributed. In other words, 𝒟 and 𝒟' belong to the same distribution, but in terms of quantity, N'< N. The setting of number-constrained backdoor attacks is similar to that of data-efficient backdoor attacks discussed in previous studies <cit.>. Both aim to improve the Attack Success Rate (ASR) under a low poisoning rate. However, previous studies assumed that the attacker has access to the entire training set 𝒟, which enables efficient trigger design and sample selection. For example, some studies <cit.> draw inspiration from Universal Adversarial Perturbations (UAPs) in adversarial examples and propose to optimize a UAP on a clean model pre-trained on the training set as the natural trigger. Xia et al. <cit.> enhance the poisoning efficiency in backdoor attacks by selecting poisoning data from the entire training set. Although these methods have achieved remarkable results, they cannot be directly applied to number-constrained backdoor attacks due to the lack of access to the entire training set. Experimental results. In this section, we investigate the performance degradation of previous studies in number-constrained backdoor attacks. As shown in Fig. <ref>, the attack success rate experiences a significant decrease as the number (P) of poisoning samples decreases, particularly for Blended backdoor attacks. It is worth noting that Universal Adversarial Perturbations (UAP) achieves relatively favorable results even with a low poisoning rate. This can be attributed to the utilization of a proxy model that is pre-trained on the entire training set (𝒟). However, in our settings, UAP is not accessible, and we present the results for UAP to effectively demonstrate the performance degradation even when a pre-trained proxy model is available. §.§ Class-constrained Backdoor Attacks Definition. In the class-constrained scenario, let 𝒟' represent the data manipulable by the malicious source, and 𝒟 denote all the data available to the data collector. As depicted in part (b) of Fig. <ref>, the data collector gathers data from multiple sources, including both malicious and benign sources, to form 𝒟. Each data source provides data belonging to different categories, resulting in 𝒟' containing only a subset of categories present in 𝒟. Therefore, 𝒟 and 𝒟' follow distinct distributions. More specifically, the accessible clean training set 𝒟'⊂ X× Y' (𝒟'={(x_i,y_i)|i=1,⋯,N'}) is a subset of the entire training set 𝒟⊂ X× Y (𝒟={(x_i,y_i)|i=1,⋯,N}), where Y'⊂ Y={1,2,⋯,C}. Class-constrained backdoor attacks can be seen as a general setting of clean-label backdoor attacks <cit.>. In clean-label backdoor attacks, the accessible clean training set 𝒟' is defined as 𝒟'⊂ X× Y', where Y'={k} and k represents the attack-target label. Experimental results. In this section, we explore the performance degeneration of previous studies in class-constrained backdoor attacks. As illustrated in Fig. <ref>, attack success rate decreases as the number of class (C') in the poisoning set decreases, which is the similar as experimental results on the number-constrained backdoor attacks. §.§ Domain-constrained Backdoor Attacks Definition. In the domain-constrained scenario, as depicted in part (c) of Fig. <ref>, the data collector gathers data from multiple sources (both malicious and benign) to form 𝒟. Each data source provides data from a different domain, resulting in 𝒟' containing only a subset of the domains present in 𝒟. Consequently, 𝒟 and 𝒟' belong to different distributions. We examine an extreme scenario in domain-constrained backdoor attacks, where the test dataset follows the same distribution as the benign source (𝒟∖𝒟') and is outside the domain of the malicious source 𝒟'⊂ X× Y' (𝒟'={(x_i,y_i)|i=1,⋯,N'}). Experimental results. To simulate the domain-constrained scenario, we conducted experiments with the following settings in this section: we designate the CIFAR-10 dataset as the benign source, the ImageNet dataset as the malicious source, and evaluated the attack performance on the CIFAR-10 dataset. Fig. <ref> illustrates the results, showing a decrease in the attack success rate as the domain rate (the proportion of poisoning sets sampled from 𝒟∖𝒟' and 𝒟') in the poisoning set decreases. This observation aligns with the experimental findings in the number-constrained and class-constrained backdoor attacks. §.§ Entanglement Between Benign and Poisoning Features In our data-constrained backdoor attacks, we have made two significant observations. Firstly, we have noticed a severe performance decline in class-constrained and domain-constrained backdoor attacks. Secondly, we consistently found that BadNets outperforms Blended in terms of attack efficiency. We attribute these observations to the entanglement between benign and poisoning features. Our study is the first to investigate feature entanglement in the context of backdoor attacks, providing new insights into backdoor learning. Ideally, we would expect backdoor models to rely solely on poisoning features to make decisions when encountering poisoning samples, as this would be the most efficient approach for backdoor attacks. However, neural networks tend to be greedy and utilize all features for decision-making <cit.>, leading to activation of both poisoning and benign features during backdoor injection. This results in reduced poisoning efficiency when there is a difference in benign features between the backdoor injection and activation phases, as observed in data-constrained backdoor attacks. We have further investigated our hypothesis and present our findings in Fig. <ref>. As shown in Fig. <ref> (a), the attack efficiency of number-constrained, dirty-label single-class, and clean-label single-class backdoor attacks<ref> decreases in turn under the same poison rate. To understand the reason behind this phenomenon, we provide visualizations of the backdoor injection and activation phases for these three attacks in Fig. <ref> (b). For the number-constrained backdoor attack, the distribution of poisoning samples (consisting of both benign and poisoning features) in the backdoor injection phase is the same as that in the backdoor activation phase. In other words, both benign and poisoning features are activated simultaneously during both phases. However, for the dirty-label single-class backdoor attack, the distribution of poisoning samples (consisting of single-class benign and poisoning features) in the backdoor injection phase is different from that in the backdoor activation phase. During the injection phase, both benign and poisoning features are activated, but during activation phase, only the poisoning feature is activated. This is the reason why previous attack methods on dirty-label single-class backdoor attacks exhibit performance degeneration. The clean-label single-class backdoor attack is similar to the dirty-label single-class backdoor attack in terms of the distribution of poisoning samples. However, during backdoor injection, there is competing activation[In the clean-label single-class backdoor attack, the benign feature of the accessible class (the same as the attack-target class) in both poisoning and clean sets is labeled with the same label (e.g., "Fish" in Fig. <ref>), and the clean set contains more samples of the attack-target class. As a result, the presence of the benign feature in the poisoning set hampers the activation of the poisoning features. In contrary, the benign feature of the accessible class in poisoning and clean sets is labeled with the different label in the dirty-label single-class backdoor attack (e.g., the benign feature in the clean set is labeled as "Frog", while the benign+poisoning feature in the poisoning set is labeled as "Fish"). Consequently, the benign feature in the poisoning set does not impact the activation of the poisoning features.] between benign and poisoning features. Consequently, the poisoning efficiency of clean-label single-class backdoor attacks is lower than that of dirty-label single-class backdoor attacks. In summary, performance degeneration in data-constrained backdoor attacks can be attributed to entanglement between benign and poisoning features. BadNets exhibits simpler triggers compared to Blended, leading to less entanglement and a higher attack success rate. § CLIP-GUIDED BACKDOOR ATTACKS METHOD In this section, we present our approach, which consists of two components: Clean Feature Suppression and Poisoning Feature Augmentation. These components are independent of each other and can be seamlessly combined. Specifically, Clean Feature Suppression can be implemented through CLIP-based Clean Feature Erasing (CLIP-CFE), and Poisoning Feature Augmentation has included two technologies: CLIP-based Universal Adversarial Perturbations (CLIP-UAP) and CLIP-based Contrastive Feature Augmentation (CLIP-CFA). By seamlessly integrating these two orthogonal aspects, our method presents a comprehensive and versatile solution that addresses all three types of data-constrained backdoor attacks. Before proposing the attack method, we first introduce threat model considered in our work. §.§ Threat Model Attack scenario. The proliferation of large-scale artificial intelligence models, such as ChatGPT and Stable Diffusion, necessitates the collection of massive amounts of data from the web. However, the security and trustworthiness of this data cannot always be guaranteed. This data collection pipeline inadvertently introduces vulnerabilities that can be exploited by data-based backdoor attacks. Attackers can strategically inject poisoning data into the training dataset and publish it on the internet, potentially compromising the integrity and performance of these models. Unlike previous attack scenarios where all training data is sourced from a single provider, we consider a more realistic scenario in which victims collect data from multiple sources. In this scenario, attackers only have access to a portion of the training dataset. This situation mirrors the real-world training process of models that utilize diverse public data. By acknowledging the challenges posed by multi-source data collection and limited attacker access, our study provides valuable insights into the security implications of such scenarios. Attack goal. The objective of our paper is aligned with popular backdoor attacks, as seen in previous studies <cit.>. The attackers aim to activate a hidden trigger within the model by providing specific inputs, leading the model to produce incorrect results. Our attack strategy emphasizes three key properties: (i) Minimal side effects: The backdoor attacks should not adversely impact the accuracy of the model on benign inputs. (ii) Effective backdoor: The attack should have a high success rate across various datasets and models, ensuring its efficiency. (iii) Stealthy attack: The backdoor attacks should be inconspicuous and difficult to detect, maintaining their stealthiness. Our research aims to develop invigorative backdoor attacks that strike a balance between effectiveness and preserving the integrity of the model's performance on legitimate inputs. Attackers' prior knowledge. In order to simulate a realistic scenario, we assume that the attackers have no access to the models or training details. They possess only general knowledge about the class labels involved in the task. This assumption reflects a more challenging and practical setting, where attackers have limited information about the target system. Attackers' capabilities. Building upon previous studies <cit.>, we make the assumption that the attackers possess the capability to control the training data. However, we further impose a stricter assumption in this work, stating that the attackers have control over only a portion of the training data. Consequently, we divide the attack scenario into three distinct tasks, each representing different capabilities of the attacker. These tasks include: (i) Number-constrained backdoor attacks, where the attacker has access to only a subset of the training data; (ii) Class-constrained backdoor attacks, where the attacker has access to only a subset of the classes in the training data; and (iii) Domain-constrained backdoor attacks, where the attacker has access to only a subset of the domains within the training data. By considering these various constraints, we provide a comprehensive analysis of backdoor attacks in different data-constrained scenarios. §.§ Clean Feature Suppression As described in Sec. <ref>, the effectiveness of data-constrained backdoor attacks is hindered due to the entanglement of benign and poisoning features during the backdoor injection phase. To address this challenge, we propose a solution called "clean feature suppression" in this section. The primary objective of this approach is to minimize the impact of benign features on the decision-making process, thus amplifying the significance of poisoning features. §.§.§ CLIP-based Clean Feature Erasing To achieve clean feature suppression, we can employ a feature extractor pre-trained on the entire training set (As shown in Clean Feature Erasing Noise). However, since our data-constrained backdoor attacks lack access to the complete training set, an alternative solution is required. Recent studies have shown that pre-trained CLIP <cit.> generates consistent and robust semantic representations across a wide range of (image, text) pairs, enabling impressive zero-shot classification performance comparable to supervised learning accuracy on challenging datasets like ImageNet (As shown in CLIP for Zero-shot Classification). Hence, we can utilize the pre-trained general model CLIP, which replaces[CLIP <cit.> is a general pre-trained model that has been released by OpenAI. It is pre-trained on 400 million image-text pairs harvested from the Web and can express a much wider set of visual concepts.] the feature extractor trained on the entire training set, allowing us to achieve clean feature suppression (As shown in CLIP for Clean Feature Erasing). Clean feature erasing noise. The technique of clean feature suppression aims to eliminate the clean information present in images by introducing optimized noise, denoted as δ, which helps modify the input image to resemble the unbiased class. In accordance with the data-constrained backdoor attack pipeline outlined in Sec. <ref>, we assume that the chosen clean training dataset for generating the poisoning set consists of P clean examples, denoted as 𝒫'⊂ X × Y (where 𝒫' = {(x_i, y_i) | i=1, ⋯, P}). Here, x_i ∈ X represents the inputs, y_i ∈ Y = {1, 2, ⋯, C} represents the labels, and C denotes the total number of classes. We refer to the modified version as 𝒫_e ={(x_e,i, y_i) | i=1, ⋯, P}, where x_e,i = x_i + δ_i represents the erased version of the training example x_i ∈𝒫'. The term δ_i ∈Δ denotes the "invisible" noise applied to achieve the erasing effect. The noise δ_i is subject to the constraint ||δ_i||_p ≤ϵ, where ||·||_p represents the L_p norm, and ϵ is set to a small value to ensure the stealthiness of the backdoor attacks. Our objective in erasing the clean features is to ensure that the pre-trained feature extractor does not extract any meaningful information from the given images x. This is achieved by introducing customized and imperceptible noise, denoted as δ_i. To be more specific, for a clean example x_i, we propose to generate the noise δ_i that erases the features by solving the following optimization problem: δ_i=min_δ_i L(f'(x_i+δ_i),y_m) s.t. ||δ_i||_p≤ϵ, where L represents the mean squared error (MSE) loss, defined as L(a,b)=||a-b||^2. The function f'(·) corresponds to the pre-trained feature extractor employed for noise generation. Additionally, y_m denotes the unbiased label for the classification task, which is defined as y_m=[1/C,1/C,⋯,1/C], where C signifies the total number of classes. While this vanilla method proves effective in erasing clean features, it requires a proxy feature extractor that has been pre-trained on the entire training set. This approach is not suitable for our data-restricted backdoor attacks. CLIP for zero-shot classification. The pre-trained CLIP model <cit.> possesses the ability to express a broader range of visual concepts and has been utilized as a general feature extractor in various tasks. These tasks include text-driven image manipulation <cit.>, zero-shot classification <cit.>, and domain generalization <cit.>. In this section, we introduce the pipeline of CLIP in zero-shot classification, which can serve as inspiration for incorporating it into our clean feature erasing approach. CLIP achieves zero-shot classification by aligning text and image features. Firstly, CLIP employs its text encoder, denoted as ℰ̂_t(·), to embed the input prompts ("a photo of a c_i") into text features T_i ∈ℝ^d, where i={1,2,⋯,C} represents the classes. Subsequently, the image feature I_j ∈ℝ^d of image x_j is embedded using the image encoder, denoted as ℰ̂_i(·). During the inference phase, the classification prediction y_j is computed using the cosine similarity between T_i and I_j. This can be expressed as: y_j=max_i(⟨ I_j,T_i ⟩), i∈{1,2,⋯,C}, where C represents the number of classes, and ⟨·,·⟩ represents the cosine similarity between two vectors. CLIP for Clean Feature Erasing (CLIP-CFE). Taking inspiration from CLIP's approach to zero-shot classification, we leverage a general CLIP model to optimize the feature erasing noise. This allows us to relax the need for a proxy feature extractor pre-trained on the entire training set. We consider C prompts, "a photo of a c_i," corresponding to different classes c_i in the dataset, where i=1,⋯,C. The CLIP-based feature erasing noise, denoted as δ_i, is proposed for the input x_i by solving the following optimization problem: δ_i=min_δ_i L(f_CLIP(x_i+δ_i,ℙ),y_m) s.t. ||δ_i||_p≤ϵ, where L represents the mean squared error (MSE) loss, y_m denotes the unbiased label for the classification task defined as y_m=[1/C,1/C,⋯,1/C], ℙ represents the set of prompts corresponding to different classes in the dataset, and f_CLIP denotes the CLIP-based model used to obtain the label of the input image. Specifically, ℙ= {p_1,p_2,⋯,p_C}= {"a photo of a c_i"| i=1,2,⋯,C}, f_CLIP(x_i+δ_i,ℙ)= [⟨ℰ̂_i(x_i+δ_i), ℰ̂_t(p_1)⟩/∑_i=1^C⟨ℰ̂_i(x_i+δ_i), ℰ̂_t(p_i)⟩,⋯,⟨ℰ̂_i(x_i+δ_i), ℰ̂_t(p_C)⟩/∑_i=1^C⟨ℰ̂_i(x_i+δ_i), ℰ̂_t(p_i)⟩]. To solve the constrained minimization problem illustrated in Eq. <ref>, we utilize the first-order optimization method known as Projected Gradient Descent (PGD) <cit.>. The PGD method enables us to find a solution by iteratively updating the noise as follows: δ^t+1_i=∏_ϵ(δ^t_i-α·sign(∇_δL(f_CLIP(x_i+δ^t_i,ℙ),y_m))), where t represents the current perturbation step, with a total of T=50 steps. ∇_δL(f_CLIP(x_i+δ^t_i,ℙ),y_m) denotes the gradient of the loss with respect to the input. The projection function ∏ is applied to restrict the noise δ within the ϵ-ball (with ϵ=8/255 in our paper) around the original example x, ensuring it does not exceed this boundary. The step size α determines the magnitude of the noise update at each iteration. The resulting erasing examples are then obtained as follows: 𝒫_e={(x_e,i,y_i)|i=1,⋯,P}, where x_e,i=x_i+δ^T_i. §.§ Poisoning Feature Augmentation In addition to eradicating clean features in images to tackle the entanglement between benign and poisoning features, enhancing the expression of poisoning features is another effective approach. In this section, we present two parallel triggers aimed at augmenting the poisoning features. §.§.§ CLIP-based Universal Adversarial Perturbations In this section, we also employ the widely-used pre-trained CLIP model <cit.> to generate universal adversarial perturbations as the backdoor trigger. Xia et al. <cit.> argue that deep models inherently possess flaws, and it is easier to exploit and enhance an existing flaw to serve as a backdoor rather than implanting a new one from scratch (BadNets <cit.> and Blended <cit.>). Universal Adversarial Perturbations (UAP) <cit.> utilize these inherent flaws in models as triggers, providing a straightforward method for augmenting the poisoning feature. However, this approach typically requires a feature extractor that has been pre-trained on the entire training set, which is not practical in data-constrained backdoor attacks. To address this limitation, we propose a CLIP-based<ref> Universal Adversarial Perturbations (CLIP-UAP) method. Specifically, given an accessible clean training set 𝒟'={(x_i,y_i)|i=1,⋯,N'} and an attack-target label k, the defined trigger can be formulated as follows: δ_uap=min_||δ_uap||_p≤ϵ∑_(x, y) ∈𝒟' L(f_CLIP(x+δ_uap,ℙ),k), where ℙ and f_CLIP are defined as shown in Eq. <ref> and Eq. <ref>, respectively. Similar to Eq. <ref>, we utilize the first-order optimization method known as Projected Gradient Descent (PGD) <cit.> to solve the constrained minimization problem. The optimization process can be expressed as follows: δ^t+1_uap=∏_ϵ(δ^t_uap-α·sign(∇_δ_uapL(f_CLIP(x+δ^t_uap,ℙ),k))), where t, ∇_δ_uapL(f_CLIP(x+δ^t_uap,ℙ),k), and ∏ hold the same meaning as in Eq. <ref>. Unlike the sample-wise clean feature erasing noise, the CLIP-UAP serves as a universal trigger for the entire training set. Therefore, it follows the optimization formulation presented in Eq. <ref> to generate δ^t+1_uap at each step t. The optimization process is performed on all samples in the accessible clean training set 𝒟'. Consequently, the CLIP-UAP for the set 𝒟' can be represented as δ_uap=δ^T_uap, and the poison generator is formulated as 𝒯(x,δ_uap)=x+δ_uap. §.§.§ CLIP-based Contrastive Feature Augmentation While the CLIP-UAP method has shown impressive results in terms of poisoning efficiency, it requires customization for different attack-target labels. In this section, we propose a more versatile trigger design that is independent of the attack-target label, enhancing the poisoning feature. Drawing inspiration from the entanglement between benign and poisoning features discussed in Sec. <ref>, we utilize contrastive optimization to augment the poisoning feature. Our expectation is that the poisoning feature extracted from the designed trigger will be more expressive compared to the clean feature extracted from the clean samples. Specifically, given a trigger δ^t+1_con to be optimized, two random views (query: x+δ_con and key: x_1+δ_con) are created by different clean samples (x and x_1). Positive pair is defined as such query-key pair, between different poisoning samples. Negative pairs are defined as pairs between poisoning example and its corresponding clean example, i.e. between x+δ_con and x. All views are passed through the pre-trained image encoder ℰ̂_i(·) of the CLIP to acquire the representation v: v_q=ℰ̂_i(x+δ_con), v_+=ℰ̂_i(x_1+δ_con), v_-=ℰ̂_i(x). CLIP-Contrastive feature augmentation (CLIP-CFA) focuses on optimizing the general trigger by maximizing the similarity between positive pairs while ensuring dissimilarity between negative pairs. To achieve this, we design a loss function as follows: L_con(x,x_1,δ_con)=-⟨ v_q,v_+ ⟩/⟨ v_q,v_- ⟩, where ⟨·,·⟩ represents the cosine similarity between two vectors and the optimization of δ_con is designed as: δ_con=min_||δ_con||_p≤ϵ∑_(x, y) ∈𝒟' L_con(x,x_1,δ_con). Similar to Eq. <ref>, we also adopt the first-order optimization method PGD <cit.> to solve the constrained minimization problem as follows: δ^t+1_con=∏_ϵ(δ^t_con-α·sign(∇_δ_conL_con(x,x_1,δ_con))). Therefore, the optimization should also be accumulated on all samples in the accessible clean training set 𝒟'. Finally, the CLIP-CFA of set 𝒟' can be formulated as δ_con=δ^T_con, and the poison generator is formulated as 𝒯(x,δ_con)=x+δ_con. §.§ Attack Summary In Sec. <ref>, we present two independent trigger design methods: CLIP-based Universal Adversarial Perturbations (CLIP-UAP) and CLIP-based Contrastive Feature Augmentation (CLIP-CFA). These triggers are aimed at enhancing the expression of poisoning features and can replace previous trigger design approaches, leading to improved performance in data-constrained backdoor attacks. Additionally, in Sec. <ref>, we introduce a CLIP-based Clean Feature Erasing (CLIP-CFE) method. This approach minimizes the influence of clean features during the poisoning process and can be integrated into any of the aforementioned trigger design methods. By combining trigger design and clean feature erasing, our final approach achieves state-of-the-art performance in all three types of data-constrained backdoor attacks. § EXPERIMENTS We provides an overview of the experimental settings in this paper, covering various aspects such as datasets, model architecture, evaluation metrics, baselines, and implementations (Appendix <ref>). Subsequently, we perform comprehensive experiments to assess the effectiveness of our proposed methods through answering the following research questions: RQ1: Are proposed technologies effective on three backdoor attacks? (Sec. <ref>) RQ2: Are proposed technologies harmless for Benign Accuracy? (Sec. <ref>) RQ3: Are proposed technologies stealthy for victims? (Sec. <ref>) RQ4: Are proposed technologies effective for different poisoning settings? (Sec. <ref>) In this section, we present the results specifically for CIFAR-100 datasets. Experimental outcomes for CIFAR-10 and ImageNet-50 are provided in Appendix <ref> and Appendix <ref> respectively. Additionally, for further discussions, please refer to Appendix <ref>. §.§ RQ1: Are proposed technologies effective on three backdoor attacks? To assess the effectiveness of our proposed technologies, we conduct attacks on various target models and datasets, evaluating the Attack Success Rate (ASR) for each target model. In order to establish a basis for comparison, we introduce two baseline attack methods: BadNets <cit.> and Blended <cit.>, as discussed in Sec. <ref>. Fig. <ref> illustrates the performance of the following types of backdoor attacks on the CIFAR-100 dataset: (a) number-constrained, (b) clean-label single-class (class-constrained), (c) dirty-label single-class (class-constrained), and (d) out-of-the-domain (domain-constrained)[Both the clean-label single-class and dirty-label single-class backdoor attacks represent extreme scenarios of the class-constrained backdoor attack. In the clean-label single-class attack, the targeted class category is set to Y'=k, while in the dirty-label single-class attack, it is set to Y'=c where c≠ k. Similarly, the out-of-the-domain backdoor attack is an extreme scenario of the domain-constrained backdoor attack, with a domain rate of 0. For further details, please refer to Appendix <ref>.]. CLIP-based poisoning feature augmentation is more effective than previous attack methods. Our proposed methods, CLIP-UAP and CLIP-CFA, outperform the baseline techniques (BadNets <cit.> and Blended <cit.>[While there are several more effective techniques for poisoning attacks, they typically necessitate access to the entire training data, rendering them unsuitable for our data-limited backdoor attacks.]) in terms of consistency across different attacks and target models. Specifically, we achieved an ASR of 0.878, 0.825, 0.984, and 0.988 for BadNets, Blended, CLIP-UAP, and CLIP-CFA, respectively, in the number-constrained backdoor attack on the VGG-16 dataset. These results provide evidence that our proposed poisoning feature augmentation generates more effective triggers compared to other methods. CLIP-based Clean Feature Suppression is useful for different attack methods. Our proposed method, CLIP-CFE, has shown significant improvements in effectiveness compared to the baseline method without CLIP-CFE. In various cases, CLIP-CFE has enhanced the poisoning efficiency significantly. For instance, in the clean-label single-class backdoor attack on the VGG-16 dataset, we observed remarkable improvements of 187%, 150%, 110%, and 229% for BadNets, Blended, CLIP-UAP, and CLIP-CFA, respectively. However, it is worth noting that in the results of the domain-constrained backdoor attacks on MobileNet-V2 (as depicted in the right part of Fig. <ref> (d)), CLIP-CFA and CLIP-UAP only slightly outperform the corresponding methods with CFE. More discussion. While our technologies have shown significant improvements in poisoning efficiency compared to baseline methods, there are still important discussions that need to be addressed. We aim to provide answers to the following questions in a systematic manner in Appendix <ref>: i) Why do we observe performance degradation in the clean-label single-class attack? ii) Why are domain-constrained backdoor attacks generally easier compared to class-constrained backdoor attacks? §.§ RQ2: Are proposed technologies harmless for Benign Accuracy? As shown in Table <ref>, our proposed methods, CLIP-UAP and CLIP-CFA, exhibit similar or even better average Benign Accuracy (BA) compared to the baseline methods, BadNets <cit.> and Blended <cit.>. Additionally, it is worth noting that our proposed method, CLIP-CFE, does not negatively impact BA. This finding confirms that our technologies are harmless to the benign accuracy compared to baseline methods, even under various settings and different backdoor attacks. §.§ RQ3: Are proposed technologies stealthy for victims? Fig. <ref> showcases examples of poisoning images generated by different attacks on the ImageNet-50[As shown in Appendix <ref>, CIFAR-10 and CIFAR-100 have low resolution, which makes unclear visualizations. Therefore, we show the results on the ImageNet-50 dataset in this section.] dataset. While our CLIP-UAP and CLIP-CFA may not achieve the highest stealthiness in terms of SSIM (as indicated in Table <ref>), the poisoning images generated by our methods appear more natural to human inspection compared to the baseline attacks. Additionally, incorporating CLIP-CFE has minimal impact on both PSNR and the natural appearance of the images, while achieving higher stealthiness in terms of SSIM. §.§ RQ4: Are proposed technologies effective for different poisoning settings? Experiments on different poison rates for number-constrained backdoor attacks. We conduct ablation studies to assess the effectiveness of our proposed methods in reducing the number of poisoning samples (poisoning rates) for number-constrained backdoor attacks. The results depicted in Fig. <ref> demonstrate the following: i) The attack success rate increases with higher poisoning rates for different attacks. ii) Our proposed CLIP-UAP and CLIP-CFA outperform the baseline techniques, BadNets <cit.> and Blended <cit.>. iii) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers. Experiments on different poison classes for class-constrained backdoor attacks. We conduct ablation studies to assess the effectiveness of our proposed methods in increasing the number of poisoning classes for class-constrained backdoor attacks. The results presented in Fig. <ref> demonstrate the following: i) The attack success rate increases with higher poisoning classes for different attacks. ii) The attack success rate of clean-label single-class attack is lower than that of dirty-label single-class attacks. iii) Our proposed methods, CLIP-UAP and CLIP-CFA, outperform the baseline techniques, BadNets <cit.> and Blended <cit.>. iv) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers. Experiments on different domain rates for domain-constrained backdoor attacks. We conduct ablation studies to assess the effectiveness of our methods in increasing the domain rate for domain-constrained backdoor attacks. The results depicted in Fig. <ref> demonstrate the following: i) The ASR increases with higher domain rates for different attacks. ii) Our proposed CLIP-UAP and CLIP-CFA outperform the baseline techniques, BadNets <cit.> and Blended <cit.>. iii) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers. Experiments on different large pre-trained models. We utilize the pre-trained CLIP model as the basis for our technologies. It's worth noting that the community has proposed various CLIP variants. Therefore, an important practical consideration is whether our proposed technologies remain robust when applied different pre-trained CLIP models. To investigate this, we conduct ablation studies on different CLIP models for number-constrained backdoor attacks, as depicted in Fig. <ref>. The results demonstrate that our proposed technologies exhibit robustness across different CLIP models, with ViT-B/32 emerging as a competitive choice for all methods. § LIMITATIONS AND FUTURE WORKS In this paper, we address the challenges of data-constrained backdoor attacks, which occur in more realistic scenarios where victims collect data from multiple sources and attackers cannot access the full training data. To overcome the performance degradation observed in previous methods under data-constrained backdoor attacks, we propose three technologies from two streams that leverage the pre-trained CLIP model to enhance the efficiency of poisoning. Our goal is to inspire the research community to explore these realistic backdoor attack scenarios and raise awareness about the threats posed by such attacks. In the following section, we discuss the limitations of our approach and outline potential future directions for backdoor learning research. Performance degradation in clean-label backdoor attacks. Clean-label backdoor attacks present a significant challenge <cit.>. As shown in Fig. <ref>, previous methods exhibit a poor ASR, and our technologies show limited improvement in clean-label backdoor attacks when the poisoning rate is low. In future research, we will investigate the underlying reasons for this situation and explore more efficient attack methods specifically designed for clean-label backdoor attacks. Application limitations. Our technologies depend on the CLIP model that is pre-trained on natural images, which may limit their applicability to certain domains such as medical images or remote sensing. In such cases, a possible solution is to replace CLIP with a domain-specific pre-trained model, such as MedCLIP <cit.> for medical images or Satellite <cit.> for remote sensing, to adapt our methods to the target domain. Transfer to other domains. The attack scenario we have defined is not limited to a specific domain and can be applied to other important applications, including backdoor attacks for malware detection, deepfake detection, and federated learning. In our future work, we plan to explore the design of realistic attack scenarios and efficient backdoor attacks specifically tailored for these applications. plain § APPENDIX §.§ Experimental Setup §.§.§ Datasets We use the following three popular datasets in image classification: CIFAR-10 <cit.>. CIFAR-10 is a tiny object classification dataset containing 50,000 training images and 10,000 testing images. Each image has a size of 32×32×3 and belongs to one of 10 classes. CIFAR-100 <cit.>. Similar to CIFAR-10, CIFAR-100 is also a tiny object classification dataset containing 50,000 training images and 10,000 testing images. Each image has a size of 32×32×3 and belongs to one of 100 classes. ImageNet-50 <cit.>. ImageNet is the most popular object classification dataset containing 1.3M training images and 50K testing images. Each image has a size of 224×224×3 and belongs to one of 1000 classes. For simplicity, we randomly sampled 50 categories to compose a tiny dataset: ImageNet-50. Our ImageNet-50 dataset contains 60K training images and 2.5K testing images. §.§.§ Model Architecture. We verify the performance on three popular model architectures of image classification: VGG-16 <cit.>, ResNet-18 <cit.>, and MobileNet-V2 <cit.>. All of them are widely used in various areas of artificial intelligence, such as flower classification <cit.>, pulmonary image classification <cit.>, fault diagnosis <cit.>, and Covid-19 screening <cit.>. §.§.§ Baseline and Comparison. Our method contains two aspects: clean feature suppression and poisoning feature augmentation. Among them, poisoning feature augmentation can be accomplished through designing efficient and data-independent triggers, while clean feature suppression is orthogonal to previous trigger designing and can be integrated into any backdoor triggers. Therefore, we compare our two designed triggers CLIP-based universal adversarial perturbations (CLIP-UAP) and CLIP-based contrastive feature augmentation (CLIP-CFA) with two popular triggers: BadNets <cit.>and Blended <cit.>. All of them are independent of the training data therefore can be implemented easily in the introduced data-constrained backdoor attacks. To verify the validity of the clean feature suppression, we integrate the proposed CLIP-based clean feature erasing (CLIP-CFE) onto currently designed triggers: two our designed triggers and two baseline triggers. §.§.§ Implementations. In order to demonstrate the effectiveness of our proposed method, we conduct experiments on three datasets (CIFAR-10, CIFAR-100, and ImageNet-50). For the CIFAR-10 and CIFAR-100 datasets, we choose VGG-16, ResNet-18, and MobileNet-V2 as the victim models. All models use the SGD optimizer with a momentum of 0.9, weight decay of 5e-4, and a learning rate of 0.01 (0.1 for MobileNet-V2), which is multiplied by 0.1 at epoch 35 and 55. For the ImageNet-50 dataset, we use VGG-16 and MobileNet-V2 as the victim models. We use the SGD optimizer with a momentum of 0.9, weight decay of 5e-4, and a learning rate of 0.05 (0.01 for VGG-16), which is multiplied by 0.1 at epoch 35 and 55. The complete training epochs is 70. In the number-constrained scenario, we conducted experiments with poisoning rates of 0.01 (P=500), 0.015 (P=750), and 0.007 (P=453) for the CIFAR10, CIFAR100, and ImageNet-50 threes datasets, respectively. In the class-constrained scenario, experiments with poisoning rates of 0.02 (P=1000), 0.01 (P=500), and 0.02 (P=1296) for three datasets. We choose two extreme scenario in the class-constrained backdoor attacks, denoted as clean-label single-class backdoor attack and dirty-label single-class backdoor attack. Specifically, the accessed class category is set to Y'={k} and Y'={c}, c≠ k for clean-label single-class backdoor attack and dirty-label single-class backdoor attack, respectively. In the domain-constrained scenario, experiments with poisoning rates of 0.02 (P=1000, 1000, and 1296) for three datasets. The out-of-domain samples of all experiments are selected from other ImageNet-1K datasets that are not ImageNet-50. The attack-target class k is set to category 0 for all experiments of above three data-constrained backdoor attacks. §.§.§ Evaluation Metrics. We evaluate the performance of our method in terms of Harmlessness: Benign Accuracy (BA), Effectiveness: Attack Success Rate (ASR), and Stealthiness: Peak Signal-to-noise Ratio (PSNR) <cit.> and Structural Similarity Index (SSIM) <cit.>. Benign Accuracy (BA). BA is the clean accuracy of the testing set 𝒟_t={(x_i,y_i)|i=1,⋯,M} and is applied to evaluate the Harmlessness of the backdoor. When BA of the infected model is similar to the accuracy of the clean model, we believe that the current attack technique is harmless. Attack Success Rate (ASR). ASR is applied to evaluate the effectiveness of the backdoor attack, which is the fraction of testing images with specific trigger that are predicted as the target class. Specifically, For M' images in the testing set that do not belong to the attack-target class (k), the ASR is formulated as: ASR=∑_i=1^M'𝕀(f(𝒯(x_i,t); Θ)=k)/M', (x_i,y_i)∈𝒟'_t, where 𝒟'_t is a subset of testing set 𝒟_t (𝒟'_t⊂𝒟_t), containing the images whose label is not the attack-target class k. Peak Signal-to-noise Ratio (PSNR) <cit.>. PSNR is applied to measure the similarity between clean images and the corresponding poisoning images. Give a image x_i ∈𝒟_t and the corresponding poisoning image x'_i=𝒯(x_i,t), the PSNR is formulated as: PSNR=1/M∑_i=1^MPSNR_i(x_i, x'_i), where PSNR_i(x_i, x'_i)=10 log _10(255^2 / MSE(x_i, x'_i)), MSE(f,g)=1/H W∑_i=1^H ∑_j=1^W(f_i j-g_i j)^2, H and W are height and width of the image, respectively. Larger PSNR means larger similarity between clean images and the corresponding poisoning images, therefore larges stealthiness of backdoor attacks. Structural Similarity Index (SSIM) <cit.>. Similar to PSNR, SSIM is another metrics to represent the stealthiness of backdoor attacks, which is formulated as: SSIM=1/M∑_i=1^MSSIM_i(x_i, x'_i), where SSIM_i(x_i, x'_i)=l(x_i, x'_i)· c(x_i, x'_i)· s(x_i, x'_i), {[ l(f, g)=2 μ_f μ_g+C_1/μ_f^2+μ_g^2+C_1; c(f, g)=2 σ_f σ_g+C_2/σ_f^2+σ_g^2+C_2; s(f, g)=σ_f g+C_3/σ_f σ_g+C_3 ],. where μ and σ are mean and variance of image, respectively. Similarly, Larger SSIM means larger similarity between clean images and the corresponding poisoning images, therefore larges stealthiness of backdoor attacks. §.§ Experiments on the CIFAR-10 Dataset §.§.§ RQ1: Are proposed technologies effective on different backdoor attacks. In this section, we utilize our proposed technologies to attack different target models on the CIFAR-10 dataset. Our objective is to verify the effectiveness of the attack and calculate the ASR for each target model. The baseline attack methods, BadNets <cit.> and Blended <cit.> were introduced in Sec. <ref>. The attack performance of the number-constrained, clean-label single-class (class-constrained), dirty-label single-class (class-constrained), and out-of-the-domain (domain-constrained) backdoor attacks on CIFAR-10 dataset are reflected in Fig. <ref>, <ref>, <ref>, and <ref>, respectively. CLIP-based Poisoning Feature Augmentation Is More Effective Than Previous Attack Methods. Our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets <cit.> and Blended <cit.> baseline methods in terms of consistency under different attacks and datasets. This confirms that the proposed poisoning feature augmentation generates more efficient triggers than other methods. CLIP-based Clean Feature Suppression Is Useful For Different Attack Methods. Our proposed CLIP-CFE method improves the poisoning effectiveness on most cases compared to the baseline without CLIP-based Clean Feature Erasing. Only on small cases the BadNets and CLIP-UAP slightly outperform the corresponding methods with CFE. §.§.§ RQ2: Are proposed three technologies harmless for Benign Accuracy. Table <ref> illustrates that our proposed CLIP-UAP and CLIP-CFA methods have similar or even better average Benign Accuracy (BA) compared to the baseline methods BadNets <cit.> and Blended <cit.>. Additionally, our proposed CLIP-CFE method has no negative effect on BA, confirming that our technologies are harmless for benign accuracy under various settings and different backdoor attacks. §.§.§ RQ3: Are proposed technologies effective for different poisoning settings. Ablation of different poison rates on the number-constrained backdoor attacks. We conducted ablation studies to verify the effectiveness of the proposed methods in reducing the number of poisoning samples (poisoning rates) on the number-constrained backdoor attacks. The results in Fig. <ref> illustrate that: i) The attack success rate increases with the increase of poisoning rate for different attacks; ii) Our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets <cit.> and Blended <cit.>; iii) The proposed CLIP-CFE further improves the poisoning effectiveness upon the different triggers. Ablation of different poison classes on the class-constrained backdoor attacks. In this section, we conducted ablation studies to verify the effectiveness of the proposed methods in increasing the number of poisoning classes on the class-constrained backdoor attacks. The results in Fig. <ref> illustrate that: i) The attack success rate increases with the increase of poisoning classes for different attacks; ii) The attack success rate of clean-label single-class attack is lower than that of dirty-label single-class attacks; iii) Our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets <cit.> and Blended <cit.> methods; iv) The proposed CLIP-CFE method further improves the poisoning effectiveness with different triggers. Ablation of different domain rates on the domain-constrained backdoor attacks. In this section, we conducted ablation studies to verify the effectiveness of the proposed methods in increasing the domain rate on the domain-constrained backdoor attacks. The results in Fig. <ref> illustrate that: i) The attack success rate increases with the increase of the domain rates for different attacks; ii) Our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets <cit.> and Blended <cit.> methods; iii) The proposed CLIP-CFE method further improves the poisoning effectiveness with different triggers. §.§ Experiments on the ImageNet-50 Dataset §.§.§ RQ1: Are proposed technologies effective on different backdoor attacks. In this section, we utilized our proposed technologies to attack different target models on the ImageNet-50 dataset and calculate the ASR for each target model to verify the attack effectiveness. The baseline attack methods, BadNets <cit.>and Blended <cit.> were introduced in Sec. <ref>. Fig. <ref>, <ref>, <ref>, and <ref> reflect the attack performance of the number-constrained, clean-label single-class (class-constrained), dirty-label single-class (class-constrained), and out-of-the domain (domain-constrained) backdoor attacks on the ImageNet-50 dataset, respectively. CLIP-based Poisoning Feature Augmentation Is More Effective Than Previous Attack Methods. Our proposed CLIP-UAP and CLIP-CFA methods outperform the BadNets <cit.> and Blended <cit.> baseline methods in terms of consistency under different attacks and datasets. This confirms that the proposed poisoning feature augmentation generates more efficient triggers than other methods. CLIP-based Clean Feature Suppression Is Useful For Different Attack Methods. Our proposed CLIP-CFE method improves the poisoning effectiveness on most cases compared to the baseline without CLIP-based Clean Feature Erasing. Only on the MobileNet-V2 results of the number-constrained backdoor attacks (right part of the Fig. <ref>), CLIP-UAP slightly outperform the corresponding methods with CFE. §.§.§ RQ2: Are proposed three technologies harmless for Benign Accuracy. Table <ref> illustrates that our proposed CLIP-UAP and CLIP-CFA methods have similar or even better average Benign Accuracy (BA) compared to the baseline methods BadNets <cit.> and Blended <cit.>. Additionally, our proposed CLIP-CFE method has no negative effect on BA, confirming that our technologies are harmless for benign accuracy under various settings and different backdoor attacks. §.§ Related Works Backdoor attacks aim to introduce hidden triggers into DNNs, allowing the attacked models to behave correctly on clean samples while exhibiting malicious behavior when triggered by specific inputs. These attacks can occur at various stages of Artificial Intelligence (AI) system development <cit.>. The surface of backdoor attacks has been systematically categorized into six groups: code-based <cit.>, outsourcing, pretrained model-based <cit.>, data collection-based <cit.>, collaborative learning-based <cit.>, and post-deployment attacks <cit.>. Among these categories, poisoning-based backdoor attacks, which involve introducing a backdoor trigger during the training process by mixing a few poisoning samples, are the most straightforward and commonly used method. This study focuses on addressing concerns related to poisoning-based backdoor attacks. Furthermore, backdoor can be embedded through various techniques, such as transfer learning <cit.>, controlling model parameters <cit.>, adding malicious modules <cit.>, and modifying the context of deep learning for source code <cit.>, among others. Previous studies focusing on poisoning-based backdoor attacks typically assume that the attacker has access to the entire training set and primarily concentrate on enhancing poisoning efficiency and stealthiness. §.§.§ Poisoning Efficiency Existing studies aim at improving the poisoning efficiency of backdoor attacks can be categorized into two main areas. Designing Efficient Triggers The design of efficient triggers that are easier for DNNs to learn has garnered significant interest. Researchers have recently drawn inspiration from Universal Adversarial Perturbations (UAPs) <cit.> and optimized UAPs on pre-trained clean models to create effective triggers, which have been widely utilized in various studies <cit.>. However, this approach requires a pre-trained clean model on the training set, which is not practical for data-constrained backdoor attacks. Selecting Efficient Poisoning Samples Efficient sample selection for poisoning attacks is a critical yet under-explored aspect that is distinct from trigger design. Xia et al. <cit.> were among the first to investigate the contribution of different data to backdoor injection. Their research revealed that not all poisoning samples contribute equally, and appropriate sample selection can greatly enhance the efficiency of data in backdoor attacks. §.§.§ Poisoning Stealthiness Existing studies focused on increasing the stealthiness of backdoor attacks can be categorized into two main areas. Designing Invisible Triggers The concept of invisible triggers aims to ensure that poisoning images are visually indistinguishable from clean samples, thus evading detection in both pixel and feature spaces. This perspective is the most straightforward approach to bypass defenses. Chen et al. <cit.> first propose a blended strategy to evade human detection by blending clean samples with the trigger to create poisoning samples. Subsequent studies <cit.> focuse on constraining the norm of the trigger through optimization methods. Moreover, some studies have explored the use of natural patterns such as warping <cit.>, rotation <cit.>, style transfer <cit.>, frequency <cit.>, and reflection <cit.> to create triggers that are more imperceptible to human inspection. In contrast to previous works that employ universal triggers, Li et al. <cit.> employ GAN models to generate sample-specific triggers, which are similar to adversarial examples and extremely imperceptible to humans. Clean-label Attacks. Clean-label attacks refer to backdoor attacks where the target labels of the poisoning samples align with their perception labels. Turner et al. <cit.> is the first to explore clean-label attacks by employing GAN-based and adversarial-based perturbations. Compared to standard backdoor attacks, clean-label attacks are typically less effective due to the model's tendency to associate natural features, rather than backdoor triggers, with the target class. Recent studies have focused on aligning features <cit.> or gradients <cit.> between perturbed inputs from the target class and trigger-inserted inputs from the non-target class through pretraining on the entire training set. Additionally, some study <cit.> has proposed optimizing the backdoor trigger using only the knowledge about the target-class training data. In this approach, the trigger is optimized to point towards the interior of the target class, resulting in improved effectiveness. §.§ Discussion Performance degradation in the clean-label single-class backdoor attack. As depicted in Fig. <ref> (b), Fig. <ref>, and Fig. <ref>, both the baseline and our attack methods exhibit poor Attack Success Rate (ASR) in the clean-label single-class backdoor attack. In this section, we aim to enhance the attack strength of our methods and devise a more efficient attack strategy for the clean-label single-class backdoor attack. In our optimization equations, namely Eq. <ref>, Eq. <ref>, and Eq. <ref>, we impose constraints on the optimized noise, denoted as δ_i, δ_uap, and δ_con, respectively. These constraints are specified as ||δ_i||_p ≤ϵ, ||δ_uap||_p ≤ϵ, and ||δ_con||_p ≤ϵ, where ||·||_p represents the L_p norm, and we set ϵ to 8/255 to ensure the stealthiness of the backdoor attacks, as observed in our previous experiments. To bolster the attack strength and subsequently increase the ASR in the clean-label single-class backdoor attack, we investigate the impact of adjusting the constraint on δ_i. As demonstrated in Fig. <ref> and Fig. <ref>, significant (more than 500%) improvements are observed in the ASR of the clean-label single-class backdoor attack when we set the constraint on δ_i to ||δ_i||_p ≤ 16/255. This finding validates the efficacy of our method in the clean-label single-class backdoor attack, albeit at the expense of compromising stealthiness. This sacrifice, which is common in previous backdoor attack methods <cit.>, is a low-cost trade-off. Domain-constrained backdoor attacks are easier than class-constrained backdoor attacks. Fig. <ref> provides a visualization of the Attack Success Rate (ASR) achieved by different attack methods on the CIFAR-10 dataset in domain-constrained (domain rate set to 0) and dirty-label single-class backdoor attacks. While domain-constrained backdoor attacks impose stricter restrictions (assumptions that attackers have no access to any data in the training set), the ASR in domain-constrained backdoor attacks consistently surpasses that of dirty-label single-class backdoor attacks. This observation leads us to propose that the diversity of samples in the poisoning set is another crucial factor affecting attack efficiency. Consequently, we recommend that attackers fully consider the diversity of poisoning samples during the poisoning set generation phase. § AVAILABILITY All codes and datasets will be publicly available.
http://arxiv.org/abs/2306.17815v1
20230630172649
Bayesian Optimization with Formal Safety Guarantees via Online Conformal Prediction
[ "Yunchuan Zhang", "Sangwoo Park", "Osvaldo Simeone" ]
cs.LG
[ "cs.LG", "cs.IT", "eess.SP", "math.IT" ]
Bayesian Optimization with Formal Safety Guarantees via Online Conformal Prediction Yunchuan Zhang, Student Member, IEEE, Sangwoo Park, Member, IEEE, and Osvaldo Simeone, Fellow, IEEE The authors are with the King’s Communications, Learning and Information Processing (KCLIP) lab, King’s College London, London, WC2R 2LS, UK. (email:{yunchuan.zhang, sangwoo.park, osvaldo.simeone}@kcl.ac.uk). This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (grant agreement No. 725732), by the European Union’s Horizon Europe project CENTRIC (101096379), by an Open Fellowship of the EPSRC (EP/W024101/1), and by Project REASON, a UK Government funded project under the Future Open Networks Research Challenge (FONRC) sponsored by the Department of Science Innovation and Technology (DSIT). July 31, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Black-box zero-th order optimization is a central primitive for applications in fields as diverse as finance, physics, and engineering. In a common formulation of this problem, a designer sequentially attempts candidate solutions, receiving noisy feedback on the value of each attempt from the system. In this paper, we study scenarios in which feedback is also provided on the safety of the attempted solution, and the optimizer is constrained to limit the number of unsafe solutions that are tried throughout the optimization process. Focusing on methods based on Bayesian optimization (BO), prior art has introduced an optimization scheme – referred to as – that is guaranteed not to select any unsafe solution with a controllable probability over feedback noise as long as strict assumptions on the safety constraint function are met. In this paper, a novel BO-based approach is introduced that satisfies safety requirements irrespective of properties of the constraint function. This strong theoretical guarantee is obtained at the cost of allowing for an arbitrary, controllable but non-zero, rate of violation of the safety constraint. The proposed method, referred to as , builds on online conformal prediction (CP) and is specialized to the cases in which feedback on the safety constraint is either noiseless or noisy. Experimental results on synthetic and real-world data validate the advantages and flexibility of the proposed . Bayesian optimization, online conformal prediction, safe exploration. § INTRODUCTION §.§ Context and Scope Problems as diverse as stock portfolio optimization and asset management <cit.>, capacity allocation in energy systems <cit.>, material discovery <cit.>, calibration and optimization of quantum systems <cit.>, and scheduling and optimization of wireless systems <cit.> can all be formulated as black-box zero-th order optimizations. In such problems, the objective to be optimized can only be accessed on individual candidate solutions, and no further information is retrieved apart from the value of the objective. As illustrated in Fig. <ref>, in a common formulation of this problem, a designer sequentially attempts candidate solutions, receiving noisy feedback on the value of each attempt from the system. In this paper, we study scenarios in which feedback is also provided on the safety of the attempted solution, and the optimizer is constrained to limit the number of unsafe solutions that are tried throughout the optimization process <cit.>. As an example, consider the problem of discovering pharmaceuticals for a particular condition (see, e.g., <cit.>). A pharmaceutical company may try different molecules by carrying out costly trials with patients. Such trials would return not only an indication of the effectiveness of the candidate cure, but also an indication of possible side effects. A reasonable goal is that of finding a maximally effective compound, while minimizing the number of molecules that are found to have potential side effects during the optimization process. Typical tools for the solution of black-box zero-th order optimization construct surrogates of the objective function that are updated as information is collected by the optimizer. This can be done using tools from reinforcement learning, such as bandit optimization <cit.>, or Bayesian optimization (BO) <cit.>. Focusing on methods based on BO, prior art has introduced an optimization scheme – referred to as <cit.> – that is guaranteed not to select any unsafe solution with a controllable probability with respect to feedback noise. This theoretical guarantee is, however, only valid if the optimizer has access to information about the constraint function. In particular, reference <cit.> assumes that the constraint function belongs to a reproducible kernel Hilbert space (RKHS), and that it has a known finite RKHS norm. In practice, specifying such information may be difficult, since the constraint function is a priori unknown. In this paper, a novel BO-based approach is introduced that satisfies safety requirements irrespective of properties of the constraint function. This guarantee is obtained at the cost of allowing for an arbitrary, controllable but non-zero, rate of violation of the safety constraint. The proposed method, referred to as , builds on online conformal prediction (CP) <cit.>, and is specialized to the cases in which feedback on the safety constraint is either noiseless or noisy. §.§ Related Work Existing constrained sequential black-box zero-th order optimizers that leverage BO, collectively referred as Safe-BO schemes, target a strict safety requirement whereby no safety violations are allowed. Accordingly, all candidate solutions attempted by the optimizer must be safe <cit.>. As mentioned in the previous subsection, such stringent safety requirements can only be guaranteed by making strong assumptions on the knowledge available regarding the safety constraint function. In particular, all the existing works on Safe-BO, with a notable exception of <cit.>, either assume knowledge of the smoothness properties of the constraint function when dealing with deterministic constraint function <cit.>, or treating the constraint function as a random realization of a Gaussian process with a known kernel when dealing with random constraint function <cit.>. When the mentioned assumptions or the surrogate model on the constraint function are invalid or infeasible, the existing methods cannot provide any formal safety guarantees. In order to mitigate this problem, reference <cit.> proposed to apply meta-learning <cit.> to estimate a suitable surrogate model for the constraint function using additional data that are assumed to be available from other, similar, optimization problems. However, no formal safety guarantees are available for the approach. CP is a general framework for the calibration of statistical models <cit.>. CP methods can be applied to pre-trained machine learning models with the goal of ensuring that the model's outputs provide reliable estimates of their uncertainty. There are two main classes of CP techniques: offline CP, which leverages offline calibration data for this purpose <cit.>; and online CP, which uses feedback on the reliability of past decisions to adjust the post-processing of model's outputs <cit.>. In both cases, CP offers theoretical guarantees on the quality of the uncertainty quantification provided by the decisions of the system. The relevance of online CP for the problem of interest, illustrated in Fig. 1, is that, as the optimizer attempts multiple solutions over time, it needs to maintain an estimate of the constraint function. In order to ensure the safety of the candidate solutions selected by the optimizers, it is important that such estimates come with well-calibrated uncertainty intervals. In this paper, we leverage the theoretical guarantees of online CP in order to define novel BO-based safe optimization strategies. The only existing combination of CP and BO we are aware of are provided by <cit.>, which apply offline CP to BO for the solution of an unconstrained optimization problem. The approach aims at improving the acquisition function while accounting for observation noise that goes beyond the standard homoscedastic Gaussian assumption. These prior works do not address safety requirements. §.§ Main Contributions In this paper, we introduce , a novel BO-based optimization strategy for constrained black-box zero-th order problems with safety constraints. provides assumptions-free guarantees on the safety level of the attempted candidate solutions, while enabling any non-zero target safety violation level. As summarized in Table <ref>, this contrasts with the state-of-the-art papers <cit.> that only target the most stringent safety constraint with no safety violations throughout the optimization process, while relying on strong assumptions on the constraint function <cit.>. To summarize, the main contributions of the paper are as follows: ∙ We introduce the deterministic () algorithm, which assumes noiseless feedback on the constraint function and targets a flexible safety constraint on the average number of candidate solutions that are found to be unsafe. The approach is based on a novel combination of online CP and Safe-BO methods. ∙ For the case in which feedback on the constraint function is noisy, we introduce the probabilistic () algorithm, which targets a flexible safety constraint on the probability that the average number of candidate solutions that are found to be unsafe exceeds a controllable threshold. The method relies on a “caution-increasing” back-off mechanism that compensates for the uncertainty on the safety feedback received from the system. ∙We prove that both and meet their target safety requirements irrespective of the properties of the constraint function. ∙ We validate the performance of all the proposed methods and theorems on a synthetic data set and on real-world applications. The rest of the paper is organized as follows. Sec. <ref> formulates the constrained black-box zero-th order problem with safety constraints. The general framework of Safe-BO, as well as the representative, state-of-the-art, algorithm , are reviewed in Sec. <ref> and Sec. <ref>, respectively. The proposed methods are introduced in the following sections, with presented in Sec. <ref> and described in Sec. <ref>. Experimental results on synthetic dataset are provided in Sec. <ref>, and Sec. <ref> demonstrates results on real-world applications. Finally, Sec. <ref> concludes the paper. § PROBLEM FORMULATION In this section, we describe the constrained black-box zero-th order optimization problems for safety-critical scenarios studied in this work. Then, we introduce the general solution framework of interest in the next section, which is referred to as Safe-BO <cit.>. §.§ Optimization Problem and Safety Constraint We focus on optimization problems constrained of the form max_𝐱∈𝒳 f(𝐱) s.t. q(𝐱)≥ 0, where objective function f(𝐱) and constraint function q(𝐱) are real valued; and 𝒳 is some specified subset of the d-dimensional vector space R^d. Let f^opt denote the maximum value of the problem (<ref>), which we assume to be finite. We also assume that the set of optimal solutions, achieving the optimal value f^opt, is not empty. We write any optimal solution as 𝐱^opt∈𝒳 with f^opt=f(𝐱^opt). Furthermore, we assume that there is a known, non-empty, set 𝒮_0⊂𝒳 of safe solutions, i.e., 𝒮_0 ⊆{𝐱∈𝒳: q(𝐱)≥ 0 }. This subset may be as small as a single safe solution 𝐱_0 with q(𝐱_0) ≥ 0, i.e., 𝒮_0={𝐱_0}. We address the optimization problem (<ref>) under the following conditions. ∙ Zero-th-order black-box access: The real-valued objective function f(𝐱) and constraint function q(𝐱) are a priori unknown, and only accessible as zero-th-order black boxes. This implies that, given a candidate solution 𝐱, the optimizer can evaluate both functions, obtaining the respective values f(𝐱) and q(𝐱). In practice, the evaluations are often noisy, resulting in the observation of noisy values f̃(𝐱) and q̃(𝐱). No other information, such as gradients, is obtained by the optimizer about the functions. ∙ Efficient optimization: The optimizer wishes to minimize the number of accesses to both functions f(𝐱) and q(𝐱), while producing a feasible and close-to-optimal solution 𝐱^*∈𝒳. That is, we wish for the optimizer to output a vector 𝐱^*∈𝒳 that satisfies the constraint q(𝐱^*)≥ 0, with an objective value f(𝐱^*) close to the maximum value f^opt. The performance of the optimizer can be measured by the optimality ratio Δ f(𝐱^*)=f(𝐱^*)/f^opt. ∙ Safety: Interpreting the inequality q(𝐱)≥0 as a safety constraint, we consider choices of the optimization variable 𝐱∈𝒳 that result in a negative value of the constraint function q(𝐱) to be unsafe, unless the number of such violations of the constraint are kept below a threshold. Accordingly, we will require that the number of evaluations of the constraint function q(𝐱) that result in a violation of the constraint in (<ref>) be no larger than a pre-determined value. We will formalize this constraint next by describing the general operation of the optimizer. §.§ Sequential Surrogate-Based Safe Optimization Starting from a given solution 𝐱_0∈𝒮_0 (<ref>), the optimizer sequentially produces candidate solutions 𝐱_1,...,𝐱_T ∈𝒳 across T trials or iterations. At each iteration t, the optimizer receives noisy observations of the objective value f(𝐱_t) as y_t=f(𝐱_t)+ϵ_f,t, as well as a noisy observation of the constraint value q(𝐱_t) as z_t=q(𝐱_t)+ϵ_q,t, where the observation noise terms ϵ_f,t∼𝒩(0,σ_f^2) and ϵ_q,t∼𝒩(0,σ_q^2) are independent zero-mean Gaussian random variables with variance σ_f^2 and σ_q^2, respectively <cit.>. We focus on optimizers that maintain surrogate models of functions f(𝐱) and q(𝐱) in order to select the next iterate. To elaborate, let us write as 𝒪_t the overall history of past iterates (𝐱_0,...,𝐱_t) and past observations (y_0,z_0,...,y_t,z_t) at the end of the t-th iteration, i.e., 𝒪_t=(𝐱_0,...,𝐱_t,y_0,...,y_t,z_0,...,z_t). As we detail in the next section, the optimizer maintains probability distributions p(f|𝒪_t) and p(q|𝒪_t) on the functions f(𝐱) and q(𝐱) across all values 𝐱∈𝒳 based on the available information 𝒪_t. The distributions p(f|𝒪_t) and p(q|𝒪_t) summarize the belief of the optimizer regarding the values of the two functions. At the next iteration t+1, the optimizer leverages the distributions p(f|𝒪_t) and p(q|𝒪_t) to obtain iterate 𝐱_t+1 as follows. ∙ Safe set: Using distribution p(q|𝒪_t), the optimizer identifies a safe set 𝒮_t+1⊆𝒳, containing solutions 𝐱∈𝒳 deemed by the optimizer to be safe, i.e., to satisfy the constraint q(𝐱)≥0. ∙ Acquisition: Using distributions p(f|𝒪_t) and p(q|𝒪_t), the optimizer selects the next iterate 𝐱_t+1∈𝒮_t+1, with the aim of maximizing the likelihood of obtaining a large, i.e., close to 1, optimality ratio (<ref>). §.§ Safety Constraints We now formalize the safety constraint by distinguishing the cases in which the observations (<ref>) of constraint function q(𝐱) are: (i) noiseless, i.e., we have z_t=q(𝐱_t) in (<ref>) with noise power σ_q^2=0; and (ii) noisy, i.e., we have a positive observation noise power σ_q^2 > 0 in (<ref>). §.§.§ Deterministic Safety Constraint Noiseless observations of the constraint function values allow the optimizer to keep track of the number of iterates 𝐱_t that result in violations of the non-negativity constraint in problem (<ref>). Accordingly, with σ^2_q=0, we impose that the non-negativity constraint q(𝐱_t)≥0 be violated no more than a tolerated fraction α∈[0,1] of the T iterations. Specifically, given a target violation rate α∈[0,1], this results in the deterministic safety requirement violation-rate(T) := 1/T∑_t=1^T 1(q(𝐱_t) < 0) ≤α, where 1(·) is the indicator function, i.e., we have 1(true)=1 and 1(false)=0. §.§.§ Probabilistic Safety Constraint In the presence of observation noise on the constraint, i.e., with a positive observation noise power σ_q^2>0, the optimizer cannot guarantee the deterministic constraint (<ref>). Rather, for some target violation rate α∈[0,1], the optimizer can only aim at ensuring that the constraint (<ref>) be satisfied with a probability no smaller than a target reliability level 1-δ, with δ∈(0,1]. This results in the probabilistic safety constraint (violation-rate(T)≤α)≥1-δ, in which the probability is taken with respect to the observation noise variables {ϵ_q,t}_t=1^T for the constraint function q(𝐱) in (<ref>). § SAFE BAYESIAN OPTIMIZATION We adopt BO as the underlying surrogate-based optimization strategy. When deployed to address the problem of safe black-box optimization defined in the previous section, BO-based schemes are referred to collectively as Safe-BO <cit.>. As illustrated in Fig. <ref>, Safe-BO models objective function f(𝐱) and constraint function q(𝐱) by using independent Gaussian processes (GPs) as surrogate models, producing the distributions p(f|𝒪_t) and p(q|𝒪_t) introduced in Sec. <ref>. In this section, we first review background material on GPs in Sec. <ref>. Then, we discuss a general approach to define safe sets 𝒮_t+1 on the basis of the current distribution p(q|𝒪_t) in Sec. <ref>. §.§ Gaussian Process Consider an unknown scalar-valued function g(𝐱) with input 𝐱∈R^d. GP models such a function by assuming that, for any collection (𝐱_1,...,𝐱_N) of inputs, the corresponding outputs (g(𝐱_1),...,g(𝐱_1)) follow a multivariate Gaussian distribution. The Gaussian distribution is characterized by a mean function μ(𝐱) with 𝐱∈R^d, and kernel function κ(𝐱,𝐱') for 𝐱,𝐱'∈R^d <cit.>. An examples of a kernel function is the radial basis function (RBF) kernel κ(𝐱,𝐱')=exp(-h||𝐱-𝐱'||^2), which depends on a bandwidth parameter h>0. Specifically, for given inputs (𝐱_1,...,𝐱_N), collectively denoted as 𝐗, the output vector (g(𝐱_1),...,g(𝐱_1)) follows a Gaussian distribution 𝒩(μ(𝐗),𝐊(𝐗)), with N×1 mean vector μ(𝐗)=[μ(𝐱_1),...,μ(𝐱_N)]^ T, and N× N covariance matrix 𝐊(𝐗) with each (n,n')-th entry given by κ(𝐱_n,𝐱_n'). Assume that the output g(𝐱) is observed in the presence of independent Gaussian noise as y=g(𝐱)+ϵ, with ϵ∼𝒩(0,σ^2). We write as 𝐲=[y_1,..., y_N]^ T the N× 1 vector collecting the noisy outputs (<ref>) for inputs (𝐱_1,...,𝐱_N). An important property of GPs is that, given the history 𝒪=(𝐗,𝐲) of previous observations 𝐲 for inputs 𝐗, the posterior distribution p(y|𝐱,𝒪) of a new output y corresponding to any input 𝐱 has a Gaussian distribution with mean μ(𝐱|𝒪) and variance σ^2(𝐱|𝒪), i.e., p(g(𝐱)|𝒪)= 𝒩(μ(𝐱|𝒪), σ^2(𝐱|𝒪)), with μ(𝐱|𝒪)=μ(𝐱)+κ(𝐱)^ T(𝐊(𝐗)+σ^2𝐈_N)^-1(𝐲-μ(𝐗)), and σ^2(𝐱|𝒪)=κ(𝐱,𝐱)-κ(𝐱)^ T(𝐊(𝐗)+σ^2𝐈_N)^-1κ(𝐱), with N×1 covariance vector κ(𝐱)=[κ(𝐱,𝐱_1),,κ(𝐱,𝐱_N)]^ T and identity matrix 𝐈_N∈R^N× N. §.§ Credible Intervals and Safe Set Let us return to the operation of sequential optimizers based on BO. As explained in the previous section, at the end of iteration t, the optimizer has attempted solutions (𝐱_1,...,𝐱_t), which are collectively referred to as 𝐗_t. For these inputs, it has observed the noisy values 𝐲_t=[y_1,...,y_t]^ T in (<ref>) of the objective function, as well as the noisy values 𝐳_t=[z_1,...,z_t]^ T in (<ref>) for the constraint function. As we reviewed in Sec. <ref>, GPs allow the evaluation of the posterior distributions p(f(𝐱)|𝒪_t)=p(f(𝐱)|𝐗_t,𝐲_t) and p(q(𝐱)|𝒪_t)=p(q(𝐱)|𝐗_t,𝐳_t) for a new candidate solution 𝐱, given the history 𝒪_t=(𝐗_t,𝐲_t,𝐳_t) consisted of the previous attempts 𝐗_t and its corresponding noisy observations 𝐲_t and 𝐳_t. As we discuss next, these posterior distributions are used by Safe-BO methods to construct credible intervals, which quantify the residual uncertainty on the values of functions f(𝐱) and q(𝐱) at any candidate solution 𝐱. Introducing a scaling parameter β_t+1>0, the credible interval for the value of the objective function f(𝐱) for input 𝐱 at the end of iteration t, or equivalently at the beginning of iteration t+1, is defined by lower bound f_l(𝐱|𝒪_t) and upper bound f_u(𝐱|𝒪_t) given by ℐ_f(𝐱|𝒪_t)=[ f_l(𝐱|𝒪_t),f_u(𝐱|𝒪_t)] =[ μ_f(𝐱|𝐗_t,𝐲_t)-β_t+1σ_f(𝐱|𝐗_t,𝐲_t), μ_f(𝐱|𝐗_t,𝐲_t)+β_t+1σ_f(𝐱|𝐗_t,𝐲_t)], where the mean μ_f(𝐱|𝐗_t,𝐲_t) and the standard deviation σ_f(𝐱|𝐗_t,𝐳_t) are defined as in (<ref>) and (<ref>), respectively. In a similar manner, the credible interval for the constraint function q(𝐱) is defined as ℐ_q(𝐱|𝒪_t)=[ q_l(𝐱|𝒪_t),q_u(𝐱|𝒪_t)] =[ μ_q(𝐱|𝐗_t,𝐳_t)-β_t+1σ_q(𝐱|𝐗_t,𝐳_t), μ_q(𝐱|𝐗_t,𝐳_t)+β_t+1σ_q(𝐱|𝐗_t,𝐳_t)], where the mean μ_q(𝐱|𝐗_t,𝐲_t) and the standard deviation σ_q(𝐱|𝐗_t,𝐳_t) are also defined as in (<ref>) and (<ref>), respectively. Under the Gaussian model assumed by GP, the intervals (<ref>) and (<ref>) include the true function values f(𝐱) and q(𝐱) for a given input 𝐱 with probability P(β_t+1)=2F(β_t+1)-1, where F(·) is the cumulative distribution function (CDF) of standard Gaussian random variable F(z) = (Z ≤ z) with Z ∼𝒩(0,1). Therefore, the lower bounds f_l(𝐱|𝒪_t) and q_l(𝐱|𝒪_t) in the credible intervals (<ref>) and (<ref>), respectively, serve as pessimistic estimates of the objective and constraint values at the confidence level defined by probability P(β_t+1). Furthermore, under the same confidence level, the upper bounds f_u(𝐱|𝒪_t) and q_u(𝐱|𝒪_t) in (<ref>) and (<ref>) describe optimistic estimates of the objective and constraint values, respectively. That said, it is important to stress that, since the Gaussian model assumed by GP is generally misspecified, there is no guarantee on the actual probability that the credible intervals ℐ_f(𝐱|𝒪_t) and ℐ_q(𝐱|𝒪_t) include the true values f(𝐱) and q(𝐱). These intervals, in fact, are guaranteed to include the true functions values with probability P(β_t+1) only under the GP model. In order to meet the safety requirement (<ref>) or (<ref>), Safe-BO methods define a safe set of candidate solutions 𝐱∈𝒳 that are likely to satisfy the constraint q(𝐱)≥ 0. To this end, the optimizer selects the scaling factor β_t+1 so as to ensure some desired “safety” probability P(β_t+1). Then, leveraging the GP model, Safe-BO methods adopt the pessimistic estimate of the value of constraint function given by q_l(𝐱|𝒪_t) in (<ref>) as a conservative estimate of the constraint function. Accordingly, the safe set 𝒮_t+1 is defined as the set of all feasible solutions 𝐱∈𝒳 for which the conservative estimate q_l(𝐱|𝒪_t) of constraint function q(𝐱) predicts the solution 𝐱 to be safe, i.e., 𝒮_t+1=𝒮(𝒪_t|β_t+1)= {𝐱∈𝒳:q_l(𝐱|𝒪_t)≥0}∪𝒮_0. The safe set includes the known initial set 𝒮_0 of safe solutions in (<ref>), ensuring a non-empty safe set <cit.>. Safe-BO schemes choose as the first solution 𝐱_0 a point randomly selected from the initial safe set 𝒮_0. For the following iterations, while all Safe-BO schemes adopt the same definition of the safe set (<ref>), the realization of the acquisition process selecting the next iterate 𝐱_t+1 differentiates the schemes proposed in prior <cit.>. In the next section, we specifically describe the operation of <cit.>. § In this section, we review <cit.> a representative state-of-the-art Safe-BO method, which will serve as a reference for the proposed strategies introduced in the next section. §.§ Scope and Working Assumptions addresses problem (<ref>) under a strict version of the probabilistic safety constraint (<ref>) with target violation rate α=0 and arbitrary target reliability level 1-δ. In order to allow for a zero violation rate (α=0) to be a feasible goal, makes the assumption that the constraint function q(𝐱) in (<ref>) lies in the RKHS ℋ_κ associated with the same kernel function κ(𝐱,𝐱') assumed by GP inference (see Sec. <ref>). In this sense, the model adopted by GP is assumed by to be well specified. Formally, the mentioned assumption made by enforces that the function can be expressed as q(𝐱) = ∑_i=1^m a_i κ(𝐱, 𝐱_i) for some vectors {𝐱_i∈R^d}_i=1^m, real coefficients {a_i}_i=1^m, and integer m. For a function q(𝐱) of the form (<ref>), the squared RKHS norm is defined as ||q||^2_κ=∑_i=1^m∑_j=1^ma_ia_jκ(𝐱_i,𝐱_j). Furthermore, a useful property of constraint function q(𝐱) in RKHS ℋ_κ is that it is upper bounded by a function of their squared RKHS norm as |q(𝐱)| ≤κ(𝐱,𝐱)^1/2 ||q||_κ for all values 𝐱 in their domain. The property (<ref>) is leveraged by by assuming that the RKHS norm of the constraint function q(𝐱) is upper bounded by a known constant B, i.e., ||q||_κ≤ B. §.§ Safe Set Creation Safe-BO determines the safe set 𝒮_t+1 in (<ref>) using the scaling parameter β_t+1=B+4σ_q√(γ_t+1-ln(δ)), where B is the constant appearing in the assumed upper bound (<ref>); σ_q^2 is the known observation noise power in (<ref>); 1-δ is the target reliability level in (<ref>); and γ_t is the maximal mutual information between the true values (q(𝐱_1),...,q(𝐱_t)) of the constraint function and the corresponding t noisy observations (𝐳_1,...,𝐳_t) when evaluated under the model assumed by GP. This quantity can be evaluated as <cit.> γ_t=max_𝐗'_t=(𝐱'_1,...,𝐱'_t)(1/2log|𝐈_t+σ_q^-2𝐊_q(𝐗'_t) |), where 𝐈_t is the t× t identity matrix and 𝐊_q(𝐗'_t) is the t× t covariance matrix defined in Sec. <ref>. Evaluating (<ref>) requires a maximization over all possible inputs sequences 𝐗'_t=(𝐱'_1,...,𝐱'_t), hence in practice it is often addressed via greedy algorithms (see, e.g., <cit.>). We also observe that, in the limit of no observation noise, i.e., as σ_q→0, the scaling parameter (<ref>) tends to β_t = B. By choosing the scaling parameter β_t+1 as in (<ref>), under the key assumption (<ref>), all the decisions in the safe set 𝒮_t+1 (<ref>) can be proved to be safe with high probability <cit.> (see also <cit.>). §.§ Acquisition Process In this section, we detail the acquisition process adopted by to select the next iterate 𝐱_t+1 within the safe set 𝒮_t+1. To start, defines the set of potential optimizers ℳ_t+1 as the set of all possible solutions 𝐱∈𝒮_t+1 that may increase the objective function. It also maintains a set of possible expanders 𝒢_t+1 as the set of safe solutions that can potentially increase the size of the safe set 𝒮_t+1 if selected. Then, given the potential optimizers ℳ_t+1 and the possible expanders 𝒢_t+1, chooses the solution 𝐱∈ℳ_t+1∪𝒢_t+1 that maximally reduces the larger uncertainty implied by the credible intervals (<ref>) and (<ref>), i.e., 𝐱_t+1 = _𝐱∈ℳ_t+1∪𝒢_t+1max{σ_f(𝐱|𝒪_t), σ_q(𝐱|𝒪_t) }. We now describe the construction of sets ℳ_t+1 and 𝒢_t+1. For the first, let us recall that the lower bound f_l(𝐱|𝒪_t) in the credible interval (<ref>) can be viewed as a pessimistic estimate of the objective f(𝐱), while the upper bound f_u(𝐱|𝒪_t) can be interpreted as an optimistic estimate of the same value. The set of potential optimizers, ℳ_t+1, includes all safe solutions 𝐱∈𝒮_t+1 for which the optimistic estimate f_u(𝐱|𝒪_t) is larger than the best pessimistic estimate f_l(𝐱|𝒪_t) for all safe solutions 𝐱∈𝒮_t+1. This set can be expressed mathematically as ℳ_t+1={𝐱∈𝒮_t+1|f_u(𝐱|𝒪_t)≥max_𝐱'∈ S_t+1f_l(𝐱|𝒪_t)}. Note that this set is non-empty, since it includes at least the solution 𝐱 that maximizes the lower bound f_l(𝐱|𝒪_t). The set ℳ_t+1 accounts only for the objective value to select solutions from the safe set 𝒮_t+1. In contrast, the set of possible expanders considers the potential impact of a selected candidate solution on the safe set. To formalize this concept, let us write 𝒮_t+2(𝐱) for the safe set (<ref>) evaluated by extending the current history 𝒪_t with the pair (𝐱,q_u(𝐱|𝒪_t)) of candidate solution 𝐱 and corresponding hypothetical observation of the optimistic value q_u(𝐱|𝒪_t) of the constraint q(𝐱). Accordingly, we have 𝒮_t+2(𝐱)=𝒮(𝒪_t∪(𝐱,q_u(𝐱|𝒪_t))|β_t+1), and the set of possible expanders is defined as 𝒢_t+1={𝐱∈𝒮_t+1: | 𝒮_t+2(𝐱) ∖𝒮_t+1 |>0}, that is, as the set of all safe solutions that can potentially increase the size of the safe set. After T trials, the final decision 𝐱^* is obtained by maximizing the pessimistic estimate f_l(𝐱|𝒪_T) of the objective function that is available after the last iteration over the safe set 𝒮_T+1, i.e., 𝐱^* = max_𝐱∈𝒮_T+1 f_l(𝐱|𝒪_T). The overall procedure of is summarized in Algorithm 1. §.§ Safety Property was shown in <cit.> to achieve the probabilistic safety constraint (<ref>) with α=0, as long as the assumptions that the true constraint function q(𝐱) is of the form (<ref>) and that the RKHS norm bound (<ref>) holds. (Safety Guarantee of <cit.>) Assume that the RKHS norm of the true constraint function q(𝐱) is bounded by B>0 as in (<ref>). By choosing the scaling parameter β_t+1 as in (<ref>), satisfies the probabilistic safety constraint (<ref>) with α=0. Furthermore, with ideal observations of the constraint function q(𝐱), i.e., σ_q=0, by choosing the scaling parameter as β_t+1=B, meets the deterministic requirement (<ref>) with α=0. From Theorem <ref>, as long as the Gaussian model assumed by GP is well specified – in the sense indicated by the RKHS form (<ref>) with known norm upper bound B in (<ref>) – ensures safe optimization with a zero target violation rate α=0. In practice, however, it is hard to set a value for the constant B. Therefore, for any fixed constant B, the resulting algorithm does not have formal guarantees in terms of safety <cit.>. § DETERMINISTIC SAFE-BO VIA ONLINE CONFORMAL PREDICTION As we have reviewed in Sec. <ref>, in order to achieve a zero target violation rate α=0 in the safety constraints (<ref>) and (<ref>), assumes that the constraint function q(𝐱) belongs to a specific family of functions. Other Safe-BO methods <cit.> also require the same assumption to guarantee the safety constraint (see Sec. <ref>). In the following two sections, we will introduce , a novel Safe-BO scheme that achieves the safety constraint requirements (<ref>) or (<ref>) without requiring any assumptions on the underlying constraint function q(𝐱). This goal is met at the cost of obtaining a non-zero, controllable, target violation rate α∈ (0,1] in the deterministic safety requirement (<ref>) and in the probabilistic safety requirement (<ref>). This section focuses on the case in which observations (<ref>) of the constraint function are ideal, i.e., σ_q^2=0, hence aiming at achieving the deterministic safety constraint (<ref>). The next section addresses the case with noisy observations on the constraint function. §.§ Adaptive Scaling via Noiseless Feedback on Safety As detailed in Sec. <ref>, fixes a priori the scaling parameters β_1,...,β_T to be used when forming the safe set (<ref>), along with the set of potential optimizers (<ref>) and possible expanders (<ref>), irrespective of the actual history 𝒪_t of past iterates 𝐗_t and observations 𝐲_t and 𝐳_t. This is done by leveraging the mentioned assumptions on the constraint function (<ref>)–(<ref>). In contrast, not relying on any assumption on the constraint function q(𝐱), the proposed selects the scaling parameter β_t+1 adaptively based on the history 𝒪_t by leveraging ideas from online CP <cit.>. In order to support the adaptive selection of a scaling parameter β_t+1 that ensures the deterministic safety constraint (<ref>), maintains an excess violation rate variable Δα_t+1 across the iterations t=1,...,T. The variable Δα_t+1 compares the number of previous unsafe candidate solutions 𝐱_t' with t'=1,...,t to a tolerable number that depends on the target violation rate α. The main idea is to use the excess violation rate Δα_t+1 to update the parameter β_t+1: A larger excess violation rate Δα_t+1 calls for a larger value of β_t+1 so as to ensure a more pronounced level of pessimism in the evaluation of the safe set (<ref>). This forces the acquisition function (<ref>) to be more conservative, driving down the excess violation rate towards a desired non-positive value. §.§ To define the excess violation rate, we first introduce the safety error signal err_t = 1(z_t<0), which yields err_t=1 if the last iterate 𝐱_t was found to be unsafe based on the observation z_t=q(𝐱_t), and err_t=0 otherwise. An important property of schemes, like and , that rely on the use of safe sets of the form (<ref>) is that one can ensure a zero error signal err_t=0 by setting β_t=∞. In fact, with this maximally cautious selection, the safe set 𝒮_t includes only the initial safe set 𝒮_0 in (<ref>), which consists exclusively of safe solutions. The excess violation rate Δα_t+1 measures the extent to which the average number of errors made so far, 1/t ·∑_t'=1^terr_t', exceeds an algorithmic target level α_algo, which will be specified later. Accordingly, the excess violation rate is updated as Δα_t+1=Δα_t+η(err_t-α_algo), for a given update rate η > 0 and for any initialization Δα_1< 1. The relation between excess violation rate and the average number of errors becomes apparent by rewriting (<ref>) as Δα_t+1 = Δα_1+ η·( ∑_t'=1^t err_t' - α_algo· t) = Δα_1+ η· t ·(violation-rate(t) - α_algo), which is a linear function of the difference between the violation rate up to time t and the algorithmic target α_algo. This implies that the desired safety requirement (<ref>) can be equivalently imposed via the inequality violation-rate(T) = Δα_T+1-Δα_1/Tη +α_algo≤α. Therefore, controlling the violation rate requires us to make sure that the excess violation rate Δα_t does not grow too quickly with the iteration index t. Intuitively, as mentioned, in order to control the value of the excess violation rate Δα_t, we need to select values of β_t that increase with Δα_t. To this end, as summarized in Algorithm 2, inspired by the approach introduced by <cit.> in the context of online CP, the proposed sets the parameter β_t as β_t = φ(Δα_t), where we have defined function φ(Δα_t) = F^-1((clip(Δα_t)+1)/2), with F^-1(·) being the inverse of the function F(·) (<ref>), i.e., the inverse CDF of standard Gaussian distribution, and clip(Δα_t)= max{min{Δα_t, 1}, 0} being the clipping function. An illustration of the function (<ref>) can be found in Fig. <ref>. Furthermore, we set the algorithmic target level as α_algo=1/T-1(Tα -1 -1/η+Δα_1/η). The overall procedure of is summarized in Algorithm <ref>. We next prove that meets the reliability requirement (<ref>). §.§ Safety Guarantees is guaranteed to meet the deterministic safety constraint (<ref>) (or equivalently (<ref>)), as summarized in the next theorem. Under noiseless observations of the constraint function (σ_q^2=0), satisfies the deterministic safety constraint (<ref>) for any pre-determined target violation rate α∈ (0,1]. Function (<ref>) implements the following mechanism: When Δα_t≥ 1, it returns β_t=∞, i.e., Δα_t≥ 1 ⇒β_t=∞. As discussed earlier in this section, this ensures a zero error signal err_t=0. With this mechanism in place, one can guarantee the upper bound Δα_t+1 < 1+η (1-α_algo) for all t≥1 given the mentioned initialization Δα_1 < 1. This is because a value Δα_t≥ 1 would cause the update term in (<ref>) to -ηα_algo<0, and hence the maximum value is attained when Δα_t is approaching, but smaller than, 1, and an unsafe decision is made, causing an update equal to η(1-α_algo). Plugging bound (<ref>) back into (<ref>), yields the upper bound on the violation rate violation-rate(T)≤1+η(1-α_algo)-Δα_1/Tη +α_algo. Therefore, by setting (<ref>), we finally verify that the deterministic safety requirement (<ref>) is satisfied. § PROBABILISTIC SAFE-BOCP We now turn to the case in which the observations (<ref>) of constraint function q(𝐱) are noisy (σ_q^2 > 0). The main challenge in extending the approach proposed in the previous section is the fact that the error signal (<ref>) is an unreliable indication of whether candidate 𝐱_t is safe or not due to the presence of observation noise. Accordingly, we start by proposing an alternative way to measure the excess violation rate. §.§ The main idea underlying the proposed is to count as unsafe all solutions 𝐱_t for which the noisy observation z_t=q(𝐱_t)+ϵ_q,t in (<ref>) is smaller than some positive threshold σ_q·ω that scales with the observation noise standard deviation σ_q with proportionality constant ω>0. Accordingly, the safety error signal is defined as err_t = 1(z_t<σ_q·ω). In a manner consistent with the previous section, for noiseless observations, we recover the error signal (<ref>) used by , since we have a zero observation noise power σ^2_q=0. More generally, when the observation noise power σ^2_q is positive, the threshold σ_q·ω used in the error signal (<ref>) causes to cautiously treat as errors all observations z_t that are not sufficiently large to overcome the uncertainty associated with the noisy observation of the constraint function. Intuitively, the slope ω should increase with the target reliability level 1-δ in the probabilistic safety constraint (<ref>). In fact, a larger target reliability level calls for more caution in determining whether a given observation z_t of the constraint function is likely to indicate an unsafe solution or not. In line with this intuition, we set the slope factor ω as ω=F^-1((1-δ)^1/T), where F^-1(·) is the inverse CDF of standard Gaussian distribution (<ref>). The reason for this choice is elucidated by the following lemma, which relates the true violation rate (<ref>) to the estimated violation rate ∑_t=1^Terr_t/T using the error signal (<ref>). For any iterates 𝐱_1,...,𝐱_T, the true violation rate in (<ref>) is upper bounded by the accumulated error signal rate in (<ref>) with probability 1-δ, i.e., (violation-rate(T)≤1/T∑_t=1^Terr_t)≥ (F(ω))^T=1-δ, in which the probability is taken with respect to the observation noise variables {ϵ_q,t}_t=1^T for the constraint function q(𝐱) in (<ref>). When a candidate solution 𝐱_t is unsafe, i.e., when q(𝐱_t)< 0, the probability that the error signal err_t in (<ref>) correctly reports an error, setting err_t=1, is lower bonded by F(ω), where F(·) is the CDF of standard Gaussian distribution. Therefore, the probability that the true violation rate violation-rate(T) no larger than the estimated violation rate ∑_t=1^Terr_t/T=1 is lower bounded by the probability that all the errors correctly reported. This is, in turn, lower bounded by (F(ω))^T by the independence of the observation noise variables {ϵ_q,t}_t=1^T. As specified in Algorithm <ref>, follows the same steps in with the caveat that the error signal (<ref>) is used in lieu of (<ref>). As we prove next, the correction applied via the error signal (<ref>) is sufficient to meet the probabilistic safety requirement (<ref>). §.§ Safety Guarantees The satefy guarantees of are summarized in the following theorem. Under noisy observations of the constraint function (σ_q^2>0), satisfies the probabilistic safety constraint (<ref>) for any pre-determined target violation rate α∈ (0,1] and target reliability level δ∈ (0,1). Using the same arguments as in the proof of Theorem 2, the estimated violation rate can be upper bounded with probability 1 as 1/T∑_t=1^Terr_t ≤1+η(1-α_algo)-Δα_1/Tη +α_algo. Using this bound with Lemma <ref>, we conclude that, with probability at least 1-δ, in which the probability is taken over the observation noise variables {ϵ_q,t}_t=1^T, we have bound on the true violation rate violation-rate(T)≤1/T∑_t=1^Terr_t ≤α, which recovers the probabilistic safety constraint (<ref>). § NUMERICAL RESULTS FOR A SYNTHETIC BENCHMARK In this section, we detail experimental results aimed at comparing with <cit.> on a synthetic benchmark inspired by <cit.>. §.§ Synthetic Dataset In a manner similar to <cit.>, we focus on a synthetic setting with a scalar optimization variable 𝐱∈R in which the objective function f(𝐱) is a realization of a GP with zero mean and RBF kernel κ^*(𝐱,𝐱') (<ref>) with bandwidth h^*=1/1.62, while the constraint function q(𝐱) is a function in this RKHS ℋ_κ^* which has the form (<ref>) with coefficients {a_i}_i=0^10=[-0.05,-0.1,0.3,-0.3,0.5,0.5,-0.3,0.3,-0.1,-0.05] and scalars {𝐱_i}_i=1^10=[-9.6,-7.4,-5.5,-3.3,-1.1,1.1,3.3,5.5, 7.4,9.6]. Accordingly, the constraint function q(𝐱) has RKHS norm ||q||_κ^*=1.69 in (<ref>). In order to investigate the impact of misspecification of GP (see Sec. <ref>) on Safe-BO including the proposed , we consider the two cases: (i) well-specified GP that uses κ^*(𝐱,𝐱') for the GP kernel, i.e., κ(𝐱,𝐱')=κ^*(𝐱,𝐱'); (ii) misspecified GP that uses RBF kernel with smaller bandwidth h=1/14.58 < h^*, i.e., κ(𝐱,𝐱')≠κ^*(𝐱,𝐱'), with unknown ||q||_κ. As discussed throughout the paper, the scaling parameter for the constraint function q(𝐱) in (<ref>) is a priori determined by (<ref>) for , and is adapted by feedback via β_t+1=φ(Δα_t+1) (<ref>) for the proposed , while we fix the scaling parameter for the objective function f(𝐱) in (<ref>) to 3 since it does not affect the safety guarantee for both (see <cit.>) and . The objective observation noise variance is set to σ_f^2=2.5×10^-3; the initial safe decision is chosen as 𝐱_0=0 for which we have q(𝐱_0)=0.946>0; and the total number of optimization iterations is set to T=25. For , we set the update rate in (<ref>) to η=2.0. All results are averaged over 1,000 experiments, with error bars shown to encompass 95% of the realizations. Each experiment corresponds to a random draw of the objective function and to random realization of the observation noise signals. §.§ Deterministic Safety Requirement As explained in Sec. <ref>, requires the GP model for the constraint function q(𝐱) to be well specified (<ref>)–(<ref>) in order to meet safety conditions. To study the impact of violations of this assumption, we start by considering the noiseless case, i.e., σ_q^2=0, and we vary the kernel bandwidth h adopted for the GP models used as surrogates for the objective and constraint functions as discussed earlier. In this experiment, we set the target violation rate in the deterministic safety constraint (<ref>) as α=0.1 for , while we recall that assumes the target α=0. Fig. <ref> shows the violation rate violation-rate(T) in (<ref>), as well as the optimality ratio (<ref>), as a function of constant B assumed by for both well-specified and misspecified GPs. Note that the performance of does not depend on the value of B, which is an internal parameter for , but it is affected by the choice of parameter h. By Theorem <ref>, any value B≥||q||_κ in (<ref>) guarantees the safety of . However, since RKHS norm for the misspecified GP is generally unknown, we plot violation rate and optimality ratio as functions of the ratio B/||q||_κ^*, to highlight the two regimes with well specified and misspecified value of B. Confirming Theorem <ref>, with a ratio B/||q||_κ^*≥1 for the well-specified GP with kernel κ(𝐱,𝐱')=κ^*(𝐱,𝐱'), is seen to strictly satisfy the deterministic safety constraint (<ref>), since the violation rate is equal to zero, as per its target. Instead, when B/||q||_κ^*<1, and/or when the GP is misspecified, i.e., κ(𝐱,𝐱')≠κ^*(𝐱,𝐱'), the violation rate exceeds the target α. In contrast, obtains a violation rate below the target α, irrespective of kernel bandwidth h assumed in GP. In terms of optimality ratio, in the “safe" regime B/||q||_κ≥1 with well-specified GP parameter h, achieves around 85%, while obtains 84.5%. With a misspecified value h, achieves an optimality ratio around 87.5%, while the optimality ratio of is larger, but not relevant given the violation of the safety requirement. Note that a misspecified value of the kernel bandwidth h does not necessarily reduce the performance of , which is improved in this example. The trade-off between violation rate and optimality ratio is studied in Fig. <ref> by varying the target violation rate α for . For each value of α, we show the achieved pair of violation rate and optimality ratio, along with the corresponding realization ranges along the two axes. Recall that for the assumed target is α=0, and hence one pair is displayed. We focus here on the misspecified GP case, i.e., κ(𝐱,𝐱')≠κ^*(𝐱,𝐱'), while the parameter B is selected to the “safe” value B=||q||_κ^*, which is unaware of kernel misspecification. For each value of α∈{0.1,0.2,0.3}, the figure highlights the intervals of violation rates that meet the safety requirement (<ref>) using different colors. Specifically, for α=0.1, all violation rates below 0.1 are acceptable, as denoted by the red interval; for α=0.2, all violation rates in the red and green intervals are acceptable; and for α=0.3, all violation rates below in the cyan, green, and red interval meet the safety constraint. The figure shows that the violation rate obtained by is above its target α=0. In contrast, as per the theory developed in this paper, obtains flexible and guaranteed violation rates within the tolerated ranges for all values of α. Furthermore, as the tolerated violation rate α increases, the optimality ratio of is enhanced, indicating a trade-off between the two metrics. §.§ Probabilistic Safety Constraint We now turn to considering scenarios with observation noise σ^2_q>0, and aim at evaluating the performance in terms of probabilistic safety requirement (<ref>) and optimality ratio (<ref>). We set the target reliability level 1-δ=0.9 with target violation rate α=0.1 for , and with α=0 for in accordance with 's design. For the latter scheme, we set the “safe” value B=10||q||_κ^*, while we consider both the well-specified kernel bandwidth h=h^*=1/1.62, and the misspecified one h=1/14.58<h^*, as considered also in the previous set of experiments. For all schemes, the excess violation rate probability in (<ref>) is obtained by averaging over 10,000 realizations. We plot the excess violation rate probability (<ref>) and the optimality ratio in Fig. <ref> and Fig. <ref> against the observation noise power σ^2_q. The first figure corresponds to the case of a well-specified kernel bandwidth while for the second we adopted misspecified value. Confirming the theory, in the former case, both and attain an excess violation rate probability below the target level 1-δ. In contrast, for a misspecified kernel, can only satisfy the constraint (<ref>) for sufficiently large observation noise, but still meets the probability safety constraint (<ref>). We note that a larger observation noise is beneficial to in terms of safety since it forces a larger level of pessimism in the definition of the safe set 𝒮_t+1 in (<ref>). In terms of optimality ratio, larger observation noise power σ^2_q generally yields a degraded optimality ratio. In the well-specified regime considered in Fig. <ref>, both schemes have comparable performance and the optimality ratio gap is no more than 5%. In the misspecified regime demonstrated in Fig. <ref>, the performance levels are not comparable, since the gains of recorded for come at the cost of violations of the safety constraint (<ref>), except for a sufficiently large observation noise power, here σ^2_q≥0.1. § NUMERICAL RESULTS FOR REAL WORLD APPLICATIONS In this section, we compare <cit.> and in two real-world applications, with the goal of validating the safety gains obtained by the proposed method along the optimization process. §.§ Safe Movie Recommendation As in <cit.>, consider a system that sequentially recommends movies to a user. Each user assigns a score from 1 to 5 to a recommended movie. Following standard matrix factorization algorithms, we introduce a feature vector 𝐱∈R^d for each movie. Accordingly, selecting a movie amounts to choosing a vector 𝐱 within a set of possible movies. Denote as r(𝐱) the rating assigned by a user to movie 𝐱. A recommendation is deemed to be unsafe if the user assigns it a rating strictly smaller than 4, i.e., if r(𝐱)< 4. Accordingly, we set both objective function f(𝐱) and constraint function q(𝐱) to be equal to f(𝐱)= q(𝐱)=r(𝐱)-4. We focus on the deterministic safety constraint (<ref>), since the ratings are assumed to be observed with no noise. To define a GP model for the function that maps a movie feature vector 𝐱 to a rating r(𝐱), we need to specify a kernel function, which describes the similarity between movies. As in <cit.>, we adopt the linear kernel κ(𝐱,𝐱')=𝐱^ T𝐱', for any two movie feature vectors 𝐱 and 𝐱'. The feature vectors 𝐱 for movies are optimized using the MovieLens-100k dataset <cit.>, which includes sparse rating observations of 1,680 movies from 943 users. Specifically, as in <cit.>, we randomly select 200 users to form the training data set, and we set d=20 for the size of the feature vectors. Training applies the standard matrix factorization algorithm <cit.>. For testing, we pick the 10 test users, not selected for training, that have the most rated movies, and remove the movies with no rating from the possible selections. Since the true underlying function that maps movie feature vector 𝐱 to rating r(𝐱) is unknown, it is not possible to evaluate the RKHS norm ||q||_κ in (<ref>) required by . Accordingly, as in <cit.>, we set B=3 a priori for . In this experiment, we run both and for T=100 iterations on the selected 10 test users. We randomly select a movie rated as 4 for each test user as the initial starting point 𝐱_0, and set the update rate η=10 for . To evaluate the performance of both schemes, we show in Fig. <ref> the histograms of the ratings across all selected movies during the optimization procedure. The vertical dashed line represents the safety threshold between safe and unsafe recommendations. The marker on the horizontal axis marks the average rating. For we have the flexibility to vary the target violation rate α, while we recall that for the target is α=0. The top-left panel of Fig. <ref> shows that does not meet the safety requirement (<ref>) with α=0 owing to the mismatch between the assumptions made by the scheme and the true, unknown, constraint function. The remaining panels demonstrate that, in contrast, can correctly control the fraction α of unsafe recommendations. §.§ Chemical Reaction Optimization Finally, we consider the plug flow reactor (PFR) problem introduced in <cit.>[Simulator available at https://github.com/VlachosGroup/Fructose-HMF-Model], which seeks for optimal chemical reaction parameters 𝐱∈ [140,200]× (0,1] ⊂R^2, with the first dimension being the temperature (^∘ C) and the second being the pH value. The goal is to maximize the yield (%), which we set as the objective f(𝐱), while keeping an acceptable selectivity level (%), which we denote as s(𝐱). We refer to <cit.> for a precise definition of these terms. A reaction vector is deemed to be unsafe if the resulting selectivity level is lower than the corresponding yield, hence we define the constraint function as q(𝐱)=s(𝐱)-f(𝐱). We assume the presence of non-zero Gaussian observation noise z_t for the constraint function, i.e., σ^2_q>0. Accordingly, we focus on the probabilistic safety constraint (<ref>), and compare the performance of and . We adopt GP surrogates model for both f(𝐱) and q(𝐱) with RBF kernel having bandwidth h=1/2.88. Similar to Sec. <ref>, since the smoothness property of the true underlying functions q(𝐱) is unknown, we assume the constant B=3 for <cit.>. The initial decision 𝐱_0 is randomly chosen among the a priori known safe decisions that satisfy the constraint q(𝐱_0)≥ 0, and we set the total number of optimization round to be T=50. Other settings are as in Sec. <ref>. In a similar manner to Sec. <ref>, we demonstrate the excess violation rate probability (<ref>) and the optimality ratio in Fig. <ref> as a function of the observation noise power σ^2_q. Confirming the discussion in Sec. <ref> and the theory, is seen to meet the probabilistic safety constraint (<ref>) irrespective of observation noise power, while can only attain an excess violation rate probability below the target 1-δ when the observation noise power is sufficiently large. § CONCLUSIONS In this work, we have introduced , a novel BO-based zero-th order sequential optimizer that provably guarantees safety requirements irrespective of the properties of the constraint function. The key mechanism underlying adapts the level of pessimism adopted during the exploration of the search space on the basis of noisy safety feedback received by the system. From synthetic experiment to real-world applications, we have demonstrated that the proposed performs competitively with state-of-the-art schemes in terms of optimality ratio, while providing for the first time assumption-free safety guarantees. Although in this work we have built on for the acquisition process, the proposed framework could be generalized directly to any other Safe-BO schemes, such as <cit.>. Other possible extensions include accounting for multiple constraints, as well as taking into account contextual information during the optimization process <cit.>. ieeetr
http://arxiv.org/abs/2306.06039v2
20230609170420
Possible high $T_c$ superconductivity in La$_3$Ni$_2$O$_7$ under high pressure through manifestation of a nearly-half-filled bilayer Hubbard model
[ "Hirofumi Sakakibara", "Naoya Kitamine", "Masayuki Ochi", "Kazuhiko Kuroki" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mtrl-sci", "cond-mat.str-el" ]
APS/123-QED [email protected] Advanced Mechanical and Electronic System Research Center(AMES), Faculty of Engineering, Tottori University, 4-10 Koyama-cho, Tottori, Tottori 680-8552, Japan Computational Condensed Matter Physics Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Department of Physics, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan Department of Physics, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan Forefront Research Center, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan Department of Physics, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043, Japan Inspired by a recent experiment showing that La_3Ni_2O_7 exhibits high T_c superconductivity under high pressure, we theoretically revisit the possibility of superconductivity in this material. We find that superconductivity can take place which is essentially similar to that of the bilayer Hubbard model consisting of the Ni 3d_3z^2-r^2 orbitals. Although the coupling with the 3d_x^2-y^2 orbitals degrades superconductivity, T_c can still be high enough to understand the experiment thanks to the very high T_c reached in the bilayer Hubbard model. 74.20.Mn,74.70.−b Possible high T_c superconductivity in La_3Ni_2O_7 under high pressure through manifestation of a nearly-half-filled bilayer Hubbard model Kazuhiko Kuroki July 31, 2023 ============================================================================================================================================= Introduction.—Seeking for new unconventional high T_c superconductors has been a great challenge ever since the discovery of the two families of unconventional high-T_c superconductors, cuprates<cit.> and iron-based<cit.>. Several previous studies have shown that the cuprates are already in an ideal situation in that they are described by a single orbital Hubbard model near half-filling on a square lattice, and hence their T_c may be difficult to transcend<cit.>. One possible approach for pursuing even higher T_c is to realize in actual materials the bilayer Hubbard model, for which several studies have shown that the superconducting T_c can be higher than that of the d-wave superconducting state in the single orbital Hubbard model<cit.>. In fact, the bilayer Hubbard model has been widely studied from the past<cit.>, and s±-wave superconductivity is found to be strongly enhanced near half-filling when the vertical electron hopping (t_⊥) between the layers is several times larger than the in-plane hopping, and the Fermi level (E_F) lies in the vicinity of the edge of one of the bands<cit.>. Nowadays, a band whose edge lies just below or above E_F is often referred to as an incipient band, and has attracted interest in the study of iron-based superconductors<cit.>, bilayer and ladder-type lattices<cit.>, and flat band superconductivity<cit.>. In fact, one of the present authors proposed that a double layer Ruddlesden-Popper compound La_3Ni_2O_7 can be a good candidate for realizing the bilayer Hubbard model that satisfies the above mentioned conditions<cit.>. In this material, for which the Ni 3d electron configuration is d^7.5, the 3d_3z^2-r^2 orbitals are elongated in the z (out-of-plane) direction so that t_⊥ between the layers is much larger than the in-plane hoppings between the neighboring d_3z^2-r^2 orbitals, and also the d_3z^2-r^2 orbitals are nearly half-filled. Hence the d_3z^2-r^2 portion of the electronic structure appears to be favorable for superconductivity from the above mentioned viewpoint of the bilayer model, although deviation compared from the ideal model arises due to the presence of the Ni 3d_x^2-y^2 bands, which are nearly quarter-filled, overlapping and hybridizing with the d_3z^2-r^2 bands. Given this background, a recent experimental finding that La_3Ni_2O_7 exhibits high T_c superconductivity at high pressures<cit.>, which in itself has huge impact, is certainly intriguing. There, it was shown that the material undergoes a superconducting transition with a highest T_c of 80 K under pressure above 14 GPa. Already several theoretical studies on this material, which have been performed independently from ours, have appeared right after the discovery of superconductivity<cit.>. Inspired by this experiment, here we theoretically revisit the possibility of superconductivity in La_3Ni_2O_7 by constructing a four-orbital model that takes into account the crystal structure at high pressures. We find that s±-pairing superconductivity, which is essentially similar to that of the bilayer Hubbard model, can take place with high T_c that is consistent with the experimental observation. Although the coupling between d_3z^2-r^2 and d_x^2-y^2 orbitals degrades superconductivity, T_c can still be high because of the very high T_c attained in the bilayer Hubbard model. We also discuss ways to further enhance superconductivity of this material. Method.—First, we perform first-principles calculation to obtain the band structure of La_3Ni_2O_7 using the QUANTUM ESPRESSO code <cit.>. Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA) <cit.> and the scalar-relativistic version of the optimized norm-conserving Vanderbilt pseudopotentials <cit.> taken from PseudoDojo <cit.> are used. Experimental lattice constants and theoretical atomic positions of La_3Ni_2O_7 under the pressure of P=29.5 GPa are taken from Ref. <cit.> for the input parameter. Since the orthorombicity at P=29.5 GPa is quite small ((a-b)/a∼ 1.3 %, where a,b are lattice constants for space group Fmmm), we adopt a body-centered tetragonal structure (I4/mmm, Fig. <ref>(a)) as in La_2CuO_4, with the lattice constants determined as an average of the original ones, i.e., a^*=b^*=(a+b)/2√(2). We take 100 Ry plane-wave cutoff energy, a 12 × 12 × 12 k mesh, 0.02 Ry width for Gaussian smearing. We then extract (maximally localized) Wannier functions <cit.> using the RESPACK code <cit.>, by which we also obtain the hopping parameters among the Wannier functions. We construct a four-orbital model consisting of the d_x^2-y^2 and the d_3z^2-r^2 like Wannier orbitals centered at two Ni sites per unit cells. Important parameter values are given in Table <ref>. Figure <ref>(c) shows superposed band-structures given by first-principles and Wannier interpolation, where precise fitting around the Fermi level is achieved. We explore the possibility of superconductivity for the obtained low-energy four-orbital model within the fluctuation-exchange approximation (FLEX) <cit.>. As the interaction term of the Hamiltonian, we only take the on-site interactions, namely, intraorbital(interorbital) Coulomb interactions U(U'), Hund's coupling J, and pair hopping J'. We assume the orbital rotational symmetry, namely, we take the same value of U for the d_x^2-y^2 and the d_3z^2-r^2 orbitals, and U'=U-2J, J=J'. Since typical values for cuprate is U/t=7-10 (where |t|≃ 0.45 eV is first-principles value<cit.> of the nearest neighbor hopping among the d_x^2-y^2 orbitals), we take U=3 eV. We also take J=0.1U, i.e., J=J'=0.3 eV and U'=U-2J=2.4 eV. We calculate the self-energy induced by the spin-fluctuation formulated as shown in the literatures <cit.> in a self-consistent calculation. The real part of the self-energy at the lowest Matsubara frequency is subtracted in the same manner with Ref. <cit.> to maintain the band structure around the Fermi level obtained by first-principles calculation. The obtained Green's function and the pairing interaction, mediated mainly by spin fluctuations, are plugged into the linearized Eliashberg equation. Since the the eigenvalue λ of the Eliashberg equation reaches unity at T=T_c, we adopt it as a measure of superconductivity at a fixed temperature, T=0.01 eV. For convenience, we will call the eigenfunction (with the largest eigenvalue) of the linearized Eliashberg equation at the lowest Matsubara frequency iω(=iπ k_ BT) the “superconducting gap function”. We take a 16×16×4 k-point mesh and 2048 Matsubara frequencies for the FLEX calculation. Results and Discussions.—In Fig. <ref>(a), we show the eigenvalue of the Eliashberg equation λ at T=0.01 eV as a function of the band filling n, denoted as “original model”. n=1.5 corresponds to the stoichiometric composition of the actual material, and n is varied assuming a rigid band. In the figure, we also show a yellow shade presenting the range of the typical values of λ for the high T_c cuprates obtained in the same way<cit.>. It can be seen that the present model, for n=1.5 or larger, exhibits large λ values comparable to those of the cuprates, which implies that the calculation results are consistent with the experimental observation of T_c∼ 80 K. In Fig. <ref>(d), we show the superconducting gap function Δ(k,iω) of the present model at n=1.5 in the band representation. It can be seen that the gap function is large at portions of the band where the d_3z^2-r^2 orbital component is large, and the bonding and antibonding portions of the d_3z^2-r^2 band (see Fig. <ref>(b)) has opposite signs of the gap. To understand the origin of the large λ values, we study three other models in the same manner, namely, models in which the following couplings between d_3z^2-r^2 and d_x^2-y^2 orbitals are eliminated: (i) the interorbital interactions U', J, J', (ii) the hybridization, and (iii) both the interorbital interactions and the hybridization. Definition of the models, including those discussed later, is summarized in Table <ref>. The band structure without the hybridization is also presented in Fig. <ref>(c). In model (iii), the d_3z^2-r^2 and d_x^2-y^2 orbitals are completely decoupled, so that the superconducting state is equivalent to that of the bilayer Hubbard model consisting solely of the d_3z^2-r^2 orbitals. It can be seen that both the hybridization and the interorbital interactions degrade superconductivity of the bilayer Hubbard model, but since λ of the bilayer model is significantly large, λ of the original model (full model with both the interorbital interactions and the hybridization included) is still large enough to explain the experimental observation. The nature of the superconducting gap of the original model (Fig. <ref>(d)) can be more clearly understood by comparing it to that of model (ii) (the model in which the two orbitals are decoupled in one-body level) shown in Fig. <ref>(e). Here, the gap has opposite signs between the bonding and antibonding d_3z^2-r^2 bands. It is an s±-wave superconducting gap in the wide sense of the term in that it changes sign between the two bands, but the antibonding band does not form a Fermi surface. We stress that even when one of the bands do not intersect the Fermi level, the spin fluctuations with finite energy arise as a pairing glue<cit.>. The overall resemblance of the superconducting gaps in Figs. <ref>(d) and (e) further confirms our picture that the superconductivity in the present model is d_3z^2-r^2 orbital driven. In this context, it is also intriguing to give a look into the present system from a strong coupling viewpoint. Calculating within second order perturbation, the interlayer exchange coupling between the d_3z^2-r^2 orbitals gives J_⊥=4t^2_⊥/U≃ 0.6 eV for U=3 eV, which is quite large compared to, for example, the nearest neighbor superexchange coupling in the cuprates. J_⊥ is also much larger than the intralayer hopping between the neighboring d_3z^2-r^2 orbitals. Such a large J_⊥ should lead to opening of a spin gap, and induce interlayer pairing superconductivity<cit.>, whose gap function changes its sign between bonding and antibonding bands in momentum space. This strong coupling picture is indeed consistent with the FLEX results for both the pure bilayer Hubbard model<cit.> and the present model. We note that there is no antiferromagnetic ordering in spin-gapped systems, and this should also apply to the present model of La_3Ni_2O_7. In fact, the Stoner factor of magnetism (the maximum eigenvalue of Uχ_0(q,0), where χ_0(q,0) is the irreducible susceptibility at the lowest Matsubara frequency) at n=1.5 is obtained within FLEX as 0.955 for the original model, which is smaller (less tendency toward magnetism) than 0.967 obtained for model (iii), namely, a model that can be considered as equivalent to the bilayer Hubbard model, in which magnetic ordering should not be present. In Refs. <cit.>, some of the present authors studied cases where superconductivity emerges or is enhanced due to the interorbital interactions between the d_x^2-y^2 and other d orbitals. The effect of the interorbital interactions in the present model is the opposite, namely, they degrade superconductivity. A large difference is that there is a bonding-antibonding splitting in the d_3z^2-r^2 band in the present bilayer system, which might be the reason why the effect of the interorbital interactions is the opposite. Further study on the origin of the difference between the single and bilayer systems is underway. Finally, we discuss possible ways to further enhance superconductivity. The band filling dependence presented in Fig. <ref> suggests that T_c may be enhanced by doping electrons. In case it is difficult to dope electrons in the actual material, here we propose alternative ways for achieving a similar effect. We consider a model in which (iv) the level offset between the d_x^2-y^2 and the d_3z^2-r^2 orbitals Δ E=E_x^2-y^2-E_3z^2-r^2 is increased by δ(Δ E)=0.2 eV or (v) |t_⊥| is increased by δ|t_⊥|=0.2 eV (see also Table <ref>). As depicted in Fig. <ref>(a), in both models, the band filling dependence of λ appears to be shifted toward the left (i.e., toward the smaller n regime), so that larger values of λ are attained at n=1.5, i.e., the stoichiometric band filling. From a material designing viewpoint, increasing Δ E and/or |t_⊥| might be achieved by considering mixed anion materials. The effect of increasing Δ E and/or |t_⊥| can be understood by counting the number of electrons occupying the d_3z^2-r^2 orbitals (namely, summing up the d_3z^2-r^2 orbital weight assuming non-interacting band structure) for each case. In Fig. <ref>(b), we plot λ against n[d_3z^2-r^2], which is the average number of electrons per d_3z^2-r^2 orbital. It can be seen that λ is mainly determined by n[d_3z^2-r^2] (within these three models), which once again supports the picture that the present superconductivity is d_3z^2-r^2 orbital driven. Here, increasing Δ E and/or |t_⊥| results in self-doping of electrons from the d_x^2-y^2 to d_3z^2-r^2 orbitals (see Fig. <ref>(b)). Superconductivity is enhanced as n[d_3z^2-r^2] approaches unity, that is, as d_3z^2-r^2 orbital approaches half-filling, so that the electron correlation effects are enhanced, and at the same time, the Fermi level approaches both the bonding band top and the anti-bonding band bottom, thereby shifting the spin fluctuations toward the lower energy regime and making it more effective as a pairing glue. Summary.—To summarize, we have studied the possibility of superconductivity in La_3Ni_2O_7 taking into account the crystal structure under high pressure. The system can be considered as a bilayer Hubbard model of the d_3z^2-r^2 orbitals coupled with the d_x^2-y^2 orbitals through interorbital interactions and hybridization. Although the interorbital couplings degrade superconductivity, the T_c can still be high enough to explain the experimental observation, thanks to the very high T_c reached in the bilayer Hubbard model. We have also discussed possible ways to enhance the superconductivity. Electron doping is likely to enhance superconductivity, but in case this is not feasible, increasing Δ E and/or |t_⊥| are alternative ways of achieving a similar effect. This is because these modifications result in a self-doping of electrons from the d_x^2-y^2 to the d_3z^2-r^2 orbitals. Studies on material designing along this line is underway. We are supported by JSPS KAKENHI Grant No. JP22K03512 (H. S.) and JP22K04907 (K. K.). The computing resource is supported by the supercomputer system HOKUSAI in RIKEN, and the supercomputer system (system-B) in the Institute for Solid State Physics, the University of Tokyo.
http://arxiv.org/abs/2306.09144v1
20230615140328
On the $k$-Hamming and $k$-Edit Distances
[ "Chiara Epifanio", "Luca Forlizzi", "Francesca Marzi", "Filippo Mignosi", "Giuseppe Placidi", "Matteo Spezialetti" ]
cs.CC
[ "cs.CC", "cs.CL", "cs.DS" ]
Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 Stefan Hajkowicz, Conrad Sanderson, Sarvnaz Karimi, Alexandra Bratanova, Claire Naughtin   CSIRO, Australia July 31, 2023 ========================================================================================================================================================================================================== In this paper we consider the weighted k-Hamming and k-Edit distances, that are natural generalizations of the classical Hamming and Edit distances. As main results of this paper we prove that for any k≥ 2 the DECIS-k-Hamming problem is ℙ-SPACE-complete and the DECIS-k-Edit problem is NEXPTIME-complete. § INTRODUCTION Measuring how dissimilar two strings are from each other, is a task that occurs often and which has great importance in various practical fields, such as biometric recognition and the study of DNA, up to spell checking. A formal treatment of the problem passes through the definition of a notion of distance between strings. Numerous distance functions have been proposed and studied from a computational point of view in the literature, based on the idea of measuring the minimum number of modification operations, chosen in a given set of admissible operations, necessary to transform one string into another: two of the best known are certainly the Edit distance and the Hamming distance, but since 1950 other distances have been introduced and scientific studies have been carried on (cf. for instance <cit.>). In this framework, measuring how similar two strings are is then formalized as an optimization problem, i.e. minimizing the amount of operations to transform one into the other. It is quite useful to also consider the decision version of one of such problem, in the following way: in any instance there are two words together with a natural number h and we ask whether or not the two given words have a distance (Hamming, edit or another one) that is smaller than or equal to h. Previous approach could seem well formalized but there is something hidden: is the description of the fixed distance (Hamming, edit or another one) included inside the instances of the problem or the description of the distance has to be considered as a constant that can vary depending on the problem but that should not be considered in the asymptotic analysis of the algorithms that solves the problems? Usually the second approach is the one that seems preferred in literature. For instance we say that the complexity of the classical algorithm for the edit distance is O(nm), where n and m are the lengths of the two strings. And, moreover, it seems that there is no much difference if we choose the first approach for the edit distance. On the contrary something different happens in some cases. In <cit.> it is proved that including the description of a special distance inside the instances gives rise to an ℕℙ-hard problem, whilst it has been proved much later in <cit.> that the same problem has a solving algorithm that is polynomial when the size of the description of the distance is considered as a constant. The interested reader can see <cit.> and references therein for more details. In this paper we study the problems of computing the k-Hamming and the k-Edit distances, for k≥ 2, in the first setting, i.e. we suppose that the description of the distance is a part of the instances. The study of these problems following the second approach is still open, as discussed in Section <ref>. In Section <ref> we introduce our notation and some formal definitions. Section <ref> is devoted to prove that, for k≥ 2, the decision problems of computing the k-Hamming (DECIS-k-Hamming) and the k-Edit (DECIS-k-Edit) are, respectively, ℙ-SPACE-complete and NEXPTIME-complete. To do so, we follow the same strategy for both problems. First we prove the results for k=3, using polynomial time reductions from any L∈ℙ-SPACE to DECIS-3-Hamming and from any L∈ NEXPTIME to DECIS-3-Edit, and straightforwardly extend them to larger values of k. Then we reduce (in polynomial time) the problems with k=3 to the respective problems with k=2, proving the results also for these cases. Section <ref> concludes the paper foreshadowing possible research developments. § PRELIMINARIES Given a finite alphabet Σ of cardinality σ, a string over Σ is a sequence w= w_1w_2… w_n, with w_i∈Σ, for any 1≤ i≤ n. The number of characters composing a string w is called its length, denoted by |w|=n. The string of length 0, also called the empty string, is indicated by ϵ. We denote by Σ^* the set of all strings on Σ and by Σ^n the set of all strings of length n in Σ^*. Trivially ϵ∈Σ^*, for any Σ. String x is a substring of string w if there exist u and v such that it is possible to write w as the concatenation of u, x and v i.e., such that w=uxv. The empty string is a substring of any string. We denote by x=w_j … w_j+k-1 the substring of w of length k that appears in w at position j, i.e. w=uxv, with |u|=j-1, |x|=k and |v|=|w|-(k+j-1). Given a string v, it is possible to define a set Op{o:Σ^*→Σ^*} of operations that allow to modify it in a new string w. Some well-studied subsets of operations are the edit operations: Given a string v∈Σ^*, we define: * Insertion (I, ϵ→ a) allows to insert a character a ∈Σ in any position i of v, i.e. w=v_1 … v_i-1 a v_i … v_|v|; * Deletion (D, a →ϵ) is the removal of any character a=v_i in v, i.e. w=v_1 … v_i-1 v_i+1… v_|v|; * Substitution (S, a→ b) replaces any character a=v_i in v with another character b ∈Σ in the same position, i.e. w=v_1 … v_i-1 b v_i+1… v_|v|. We describe here some other operations that allow to define some more distances. Given a string v∈Σ^*, we define the following operations on v. * 2-Substitution (2S, a_1a_2→ b_1b_2) replaces any pair of consecutive characters a_1a_2= v_iv_i+1 in v, with another pair of characters b_1b_2 ∈Σ^2, i.e. w=v_1 … v_i-1 b_1b_2 v_i+2… v_|v|. * k-Substitution (kS, a_1… a_k→ b_1 … b_k) allows substitutions of k consecutive characters all at once. It replaces in v the substring a_1… a_k=v_i … v_i+k-1 with b_1 … b_k, i.e. w=v_1 … v_i-1 b_1 … b_k v_i+k… v_|v|. Obviously, kS is a generalization of the S and 2S previously introduced, e.g. S=kS if k=1. For the sake of readability, we henceforth use the notation Op={A_1,… ,A_m}, with A_1,… ,A_m ∈{I,D,kS | k∈ℤ^+} to indicate that all the possible operations defined by each A_i are in Op, e.g. Op={I} allows all the insertions ϵ→ a with a∈Σ. At this point, we can define a cost function γ: Σ^* ×Σ^* →ℤ^+ for each operation. That cost can be fixed or can depend on the type of operation or on the characters on which it is applied. Let v,w ∈Σ^* be two strings, Op be a set of operations defined on Σ^*, γ be an arbitrary cost function. If T=t_1t_2… t_p is a sequence of operations over Op, i.e. T(v)=t_p(…(t_2(t_1(v)))…), the overall cost of the sequence is: γ(T)=∑_i=1^p γ(t_i). The distance between v and w is the minimum cost required to transform v into w through operations in Op, i.e. δ (v, w) = min{γ(T) | T(v) = w}. Depending on the set Op of operations allowed on Σ^* we can define different distances. The Edit distance between v and w is δ(v,w), considering Op = {I, D, S}. The Edit distance is also formally known as Levenshtein distance, due to the work carried out by Vladimir Levenshtein who introduced for the first time an algorithmic approach to calculate this distance <cit.>. We define Hamming distance between v and w <cit.> δ(v,w), when Op={S}. Apart from the well-studied Edit Distance and Hamming Distance, it is possible to define some other distances between strings. The 2-Edit distance between two strings v and w is the minimum cost to transform the string v into w, δ(v,w), setting the set of admissible operations Op = { I, D, S, 2S }. It is a direct extension of the previously defined Edit distance, with the addition of the double substitution operation. Last but not least we define the generalizations of 2-Edit and Hamming distances, the k-Edit and k-Hamming distance, respectively, for a given k≥ 2 ∈ℤ. Given two strings v and w and a positive integer k≥ 2, * k-Edit distance the k-Edit distance between v and w is δ(v,w), with Op={I,D,S,kS}. * k-Hamming distance the k-Hamming distance between v and w is δ(v,w), with Op={kS}. § COMPLEXITY §.§ DECIS-3-Hamming ℙ-SPACE completeness We prove in this section that DECIS-3-Hamming problem is ℙ - SPACE-complete. DECIS-3-Hamming contains all the strings encoding quadruples of the form <v, w, D, h> where v and w are two strings on Σ^n of the same length n, D is an encoding string that describes the weighted 3-Hamming distance we are considering and h is an integer. Hence, a istance x =< (v, w, D, h) > fits into DECIS-3-Hamming if and only if D(v,w)≤ h. Therefore 3 = {< (v, w, D, h) > : D(v,w) ≤ h}. In order to say that DECIS-3-Hamming is ℙ-Space complete, we need to prove the two following properties: a) DECIS-3-Hamming is in ℙ-Space; b) for every language L in ℙ-Space there exists a polynomial reduction from L to DECIS-3-Hamming. * DECIS-3-Hamming is in ℙ-Space. By a corollary to Savitch's Theorem <cit.> we know that ℙ-Space=ℕℙ-Space. Hence, proving that the problem is in ℕℙ-Space will be enough to prove the Theorem. We define a Nondeterministic Turing Machine N that accepts the DECIS-3-Hamming language in polynomial space, even in the worst case. N starts with <v,w,D,h> coded on its tape and operates iteratively. In each loop, it non-deterministically chooses a substitution to apply to the string, executes it and updates h by subtracting the weight of the substitution just chosen. N exits the while loop when v becomes equal to w or h is negative. In both cases it will be possible to establish whether the given instance belongs to DECIS-3-Hamming. It is possible to observe that the total occupied space is linear with respect to the length of the input strings, hence DECIS-3-Hamming is in ℕℙ-SPACE, and, therefore, in ℙ-SPACE. For each language L in ℙ-SPACE there is a polynomial time reduction from L to DECIS-3-Hamming. If L is in ℙ-SPACE there exists a deterministic Turing machine M=<Q,Γ, B, Σ,Δ,q_0,F> that stops on every input of size n in O(c^q(n)) time and decides L in polynomial space O(p(n)), being c a constant and p and q two polynomials. The idea is to take a Semi-Thue system <cit.> that simulates M and make it a 3-Hamming distance by adding all the missing substitutions with a very large weight, leaving instead with a very low weight, i.e. equal to 1, the substitutions of the Semi-Thue system. We define M' as the Turing Machine that accepts the DECIS-3-Hamming language. We define an algorithm for mapping each instance x in L into an instance x' = <(v,w,D,h)>, such that M accepts x if and only if M' accepts x'. We formally define the parameters of instance x' as follows. * v = $B^p(n)+1 q_0 x B^p(n)+1$, where $∉Γ; * w = $B^l$ with l = 2p(n)+n+3; * h =min{c^m > c^q(n) + 2p(n)+4+n}. This value of h can be represented in base c as the string obtained by the concatenation of 1 and m times 0, with m=⌈log_c c^q(n) + 2p(n)+4+n ⌉. The last parameter to define is the distance D. We note immediately that the description of the distance is independent of x, therefore it is constant with respect to n. This distance is a weighted 3-Hamming that assumes only two weights 1 and h+1. To give the full description of D we would need to define the weight for all 3-substitutions. For each y ∈Γ the following 3-substitutions with cost 1 are produced: * every transition Δ(q_h,a) = (q_j,b,R) in M produces yq_ha → ybq_j in D; * every transition Δ(q_h,a) = (q_j,b,L) in M produces yq_ha → q_jyb in D; * every transition Δ(q_h,B) = (q_j,b,R) in M produces yq_hB → ybq_j in D; * every transition Δ(q_h,B) = (q_j,b,L) in M produces yq_hB → q_jyb in D. In addition, the following 3-substitutions with cost 1 are added, for each q_s∈ F and a,b ≠$, with #_l, #_r ∉Γ. * aq_sb →#_lB#_r * a#_lB →#_lBB * $#_lB →$BB * B#_rb → BB#_r * B#_r$→ BB$ This set of 3-substitutions is required if a q_s ∈ F appears on the simulated tape. In fact, it is used to erase the entire tape. For the remaining undefined 3-substitutions we set the cost to h+1. It is possible to observe that the algorithm is polynomial. Let x be an instance in L∈ℙ-SPACE, the x transformation in x' just defined is a reduction, i.e. x∈ L x' ∈3. Suppose first that x∈ L. This means that there exists a finite sequence of ID α_1 …α_t such that α_1=q_1x, for any i<t<c^q(n) α_i⊢α_t and α_t is a final ID. For each implication from one ID to another one there is a corresponding transition rule which can be simulated by a substitution of unit weight in the distance D, as previously described. Formally we match α_i to the string v and at the end of the simulation we will have reached α_t which will correspond to a string v' containing q_s. In this way we will be able to say that there exists a sequence of substitutions of unitary weight in D which, starting from v, allows us to arrive at v' with a total weight less than c^q(n). Using, at this point, the substitutions of unitary weight that cancel the symbols different from $ and B around q_s we will obtain the string w=$B^l$, with l=2p (n)+n+3. In total, therefore, the cost of obtaining w is less than or equal to c^q(n)+2p(n)+n+4 and therefore less than h. So x' ∈ DECIS-3-Hamming. Let us prove now the converse. We do it by contraposition. If x ∉L then there is no sequence of transitions that can lead the initial ID to an ID in which a final state appears. In the simulation using 3-substitutions, no sequence of substitutions of unitary weight can ever transform the string v into a string v' containing a final state and therefore w cannot be obtained. The only way to get an accepting state on the tape would be to use a substitution costing h+1. But in this case the 3-Hamming distance between v and w will certainly be greater than h, so x'∉ DECIS-3-Hamming. Any DECIS-k-Hamming, with k≥ 3, is ℙ-SPACE-complete. It is easy to observe that the previous proof can be used to demonstrate, by induction, the ℙ-SPACE-completeness of any DECIS-k-Hamming problem, with k≥ 3, since: a) Algorithm <ref> works for any DECIS-k-Hamming; b) There exists a polynomial time reduction from DECIS-k-Hamming to DECIS-(k+1)-Hamming (k≥ 2). The reduction has just to pad input and target string (to handle strings with lenght 3) and to inhibit any (k+1)-substitution that does not represent a k-substitution. §.§ DECIS-2-Hamming ℙ-Space-Completeness In this section we prove that also DECIS-2-Hamming is ℙ-Space Complete. We first define the following set, for any k∈ℤ^+ and x,y ∈Σ^k: k= {<v,w,D,h>|δ(v,w)≤ h, γ(x→ y)∈{1, h+1}} It is to note that the proof of ℙ-Space-completeness of DECIS-3-Hamming holds for DECIS'-3-Hamming, since: a) DECIS'-3-Hamming is a special case of DECIS-3-Hamming, thus the Algorithm <ref> is valid; b) the reduction defined in Theorem <ref> actually produces instances of DECIS'-3-Hamming. We can, therefore, state the following lemma. DECIS'-3-Hamming is ℙ-Space-Complete. It is also to note that an algorithm similar to Algorithm <ref> can be defined for DECIS-2-Hamming, thus: DECIS-2-Hamming ∈ℙ-Space. There is a reduction from DECIS'-3-Hamming to DECIS-2-Hamming. It is possible to prove this reduction thanks to a technique which belongs to the folklore of Information Theory and to Markov chains. This technique reduces the dependence of a random variable on k previous random variables, including itself, to just two random variables, including itself, via a sliding window over a larger alphabet. Let x=<v,w,D,h> be an instance in DECIS'-3-Hamming, we transform it in an instance x'=<v',w',D',h'> in DECIS-2-Hamming, where * v'=c_1c_2 … c_n+1 is obtained from v=a_1a_2… a_n, by: * $-padding v, i.e. v̅=b_1b_2 … b_n+2=$v$; * coding any symbol of v' as a pair of consecutive symbols of v̅, obtained with a sliding window of length 2 and stride 1, i.e. c_i=(b_i,b_i+1) * w' is constructed from w in an analogous way; * h'=3h; * for each 3-substitution abc → def, with γ=1 in D, the following unit cost 2-substitutions are added to D': * (ab)(bc)→ S_(ab)(de)^← S_(bc)(ef)^→; * (xa)S_(ab)(de)^←→ (xd)(de), ∀ x∈Σ∪{$} * S_(bc)(ef)^→ (cx)→ (ef)(fx), ∀ x∈Σ∪{$} * any other 2-substitution has cost h'+1 The algorithm is a polynomial in the size of the input, indeed: a) |v'|=|v|+1 and |w'|=|w|+1; b) coding h'=3h requires linear time; c) the algorithm increases the size of the alphabet with a polynomial function and coding D' requires O(|Σ'|^2) steps. Moreover, it is possible to observe that the algorithm is a reduction, i.e.: x∈3 x' ∈2 Suppose x∈3, i.e. ∃ T=t_1 … t_k s.t. T(v)=w, γ(T)≤ h, with each t_i ∈ D. Then, ∃ T'=t'_1 … t'_3k s.t. T'(v')=w', γ(T')≤ h', with each t'_i ∈ D'. T' is obtained by T, by translating each t_i into the corresponding sequence of 2-substitutions described by the algorithm, thus x∈3⇒ x' ∈2 Suppose x∉3, i.e. ∀ T=t_1 … t_k s.t. T(v)=w, γ(T)> h, with each t_i ∈ D. Since the algorithm, by construction, do not insert any S_(ab)(de)^← or S_(bc)(ef)^→ symbols in w', the only way to obtain w' from v' is to remove all these symbols from the string, thus completing simulated (and legal) 3-substitutions in the input instance. Therefore, ∀ T' s.t. T'(v')=w', γ(T')>3h, thus x∉3⇒ x' ∉2 These lemmas imply the following result. Decis-2-Hamming is ℙ-Space-Complete. §.§ DECIS-3-Edit NEXPTIME-completeness We will now prove that DECIS-3-Edit distance is NEXPTIME-complete, that is: a) DECIS-3-Edit ∈ NEXPTIME; b) ∀ L ∈ NEXPTIME, there exists a polynomial time reduction from L to DECIS-3-Edit. DECIS-3-Edit ∈ NEXPTIME We show a Nondeterministic Turing Machine N that, given x =< (v, w, D, h) > in input, accepts if and only if D(v,w)<h. N acts as described in <ref>. Since γ: Σ^* ×Σ^* →ℤ^+, the algorithm performs at most h=O(2^n) loops, each composed by linear time operations. Thus, M halts in an exponential time in n and DECIS-3-Edit ∈ NEXPTIME. ∀ L ∈ NEXPTIME, there exists a polynomial time reduction from L to DECIS-3-Edit If L∈ NEXPTIME, there exists a Nondeterministic Turing Machine N'=<Q,Γ, B, Σ,Δ,q_0,F> that recognizes if x∈ L and stops within an exponential number of moves, i.e. if n=|x|, it will halt after 2^p(n) steps at most, where p(n) is a polynomial function of n. The reduction transforms any instance x for L in an instance x' =< (v, w, D, h) > for DECIS-3-Edit as follows: * v=$q_0x$, with $∉Γ * w=$$ * h=5*2^p(n)+2*(n+1) Finally, D is defined in the following way: * any insertion has cost γ=h+1, with the exception of ϵ→ B_1 (being B_1 ∉Γ a new blank symbol), that has cost γ=1; * any deletion has cost γ=h+1, with the exception of *→ϵ, that costs γ=1, where *∉Γ is a new symbol used to delete the simulated tape after the acceptance of N'; * any substitution has cost γ=h+1 * any 3-substitution has cost γ=h+1, with the following exceptions: * for each element of {<(q,a),(p,b,R)>|(p,b,R)∈Δ(q,a)}, with q and p state symbols not in Γ and a,b ∈Γ: * qax→ bpx, with ∀ x ∈Γ, has cost γ=3; * qa$→ bp$, with ∀ p ∉ F, has cost γ=1; * qa$→ bp$, with ∀ p ∈ F, has cost γ=3; * for each element of {<(q,a),(p,b,L)>|(p,b,L)∈Δ(q,a)}, with q and p state symbols not in Γ and a,b ∈Γ: * xqa→ pxb, with ∀ x ∈Γ, has cost γ=3; * $ qa→ p$ b, with ∀ p ∈ Q, has cost γ=1; * to simulate moves that require to expand the tape length behind |x|, the following 3-substitutions have cost γ=1: * qB_1$→ qB$; * p$ B_1 →$ pB; * to delete symbols and reach the target string $$ after the acceptance N', with p ∈ F and a,b ∈Γ, the following 3-substitutions have cost γ=1: * apb →#_l * #_r; * $pa →$ * #_r; * ap$→#_l*$; * $p$→$ * $; * a#_l* →#_l * *; * *#_ra → **#_r; * $#_l* →$**; * *#_r$→ **$. The algorithm takes polynomial time q(n): writing v and w requires linear time in n, while coding D would take O(|Γ|^6). Moreover, it is actually a reduction, i.e.: x∈ L x' ∈3 Suppose x ∈ L. There exists a finite sequence of non-deterministic moves (and, therefore, of IDs) that makes N' accept x. It is easy to see that there is a corresponding sequence of transformations that modifies v and results in the string $xpy$, with x,y ∈Γ^* and p∈ F. Each 3-substitution that simulates a N' move has cost γ=3, if it does not involve the $ symbol (or if it ends in a final state symbol within the $ symbols), otherwise it has cost γ=1 if it results in one of the strings: $xp$ (p∉ F, x∈Γ^*), p$x$ (x∈Γ^*). In the latter cases γ has a reduced value because the insertion of B_1 (point <ref>) and a further 3-substitution are needed to obtain a string that correctly represents the output ID. In any case, a move of N' is simulated by a sequence of transformation S, such that γ(S)=3, and, therefore, a sequence of moves from the initial ID to an accepting one can be simulated with a total cost 3*2^p(n). At this point, a sequence of 3-substitutions has to be applied to transform all the symbols within the two $ into *. They are n+1+2^p(n) at most and each 3-substitution adds one * at unitary cost. Thus, the whole sequence has cost γ≤ n+1+2^p(n). Finally, the same cost is required by the sequence of deletions that results in the string $$. Thus, x ∈ L⇒δ($q_0x$,$$)≤ h. Suppose now x∉ L. Any 3-substitution that does not correspond to a legal move of N', or is part of it, has cost h+1, with the exception of those used to transform symbols into * and they can be applied only when the simulated ID is an accepting one. The same holds for deletion of * symbols. Thus, all the sequences of transformations from $q_0x$ to $$ have cost γ > n+1+2^p(n), i.e.: x ∉ L⇒δ($q_0x$,$$)> h Any DECIS-k-Edit, with k≥ 3, is NEXPTIME-complete. The proof is analogue to that of Theorem <ref>. It is easy to demonstrate, by induction, the NEXPTIME-completeness of any DECIS-k-Edit problem, with k≥ 3, since: a) Algorithm <ref> works for any DECIS-k-Edit; b) There exists a polynomial time reduction from DECIS-k-Edit to DECIS-(k+1)-Edit (k≥ 2). Again, the reduction has to inhibit any (k+1)-substitution not representing a k-substitution. §.§ DECIS-2-Edit NEXPTIME-completeness We deal now with the proof of NEXPTIME-completeness of DECIS-2-Edit. To prove that DECIS-2-Edit ∈ NEXPTIME, the same algorithm of Section <ref> can be employed (Algorithm <ref>), but, instead of explicitly showing that exists a polynomial time reduction from any problem in NEXPTIME to DECIS-2-Edit, we show a polynomial time reduction from a NEXPTIME-complete problem. We, indeed, proved in Section <ref> the NEXPTIME-completeness of DECIS-3-Edit, but the same proof is actually valid for a restricted version of the problem, namely DECIS'-3-Edit, where, given an instance <v,w,D,h>: a) any insertion, deletion or substitution costs either 1 or h+1; b) 3-substitutions costs are limited to 1, 3 or h+1. Therefore, to prove the NEXPTIME-completeness of DECIS-2-Edit, it is sufficient to show a reduction from DECIS'-3-Edit to it. There exists a polynomial time reduction from DECIS'-3-Edit to DECIS-2-Edit. The reduction transforms any instance x= < (v, w, D, h) > for DECIS'-3-Edit in an instance x' =< (v, w, D', 5h) > for DECIS-2-Edit as follows: * if Σ is the alphabet of the input instance, Σ' for the output instance is augmented by adding the following new symbols: * S^i_(abc)(def), ∀ a,b,c,d,e,f ∈Σ, i ∈{1,2,3}; * the supporting symbol *; * for each ϵ→ a ∈ D s.t. γ(ϵ→ a)=1, add ϵ→ a, with γ=5, in D'; * for each a→ϵ∈ D, s.t. γ (a→ϵ)=1, add a→ϵ, with γ=5, in D'; * for each a → b ∈ D s.t. γ(a→ b)=1, add a→ b, with γ=5, in D' * for each 3-substitution abc → def ∈ D s.t. γ(abc→ def)=k≤ h, add the following operations to D': * ϵ→ S^1_(abc)(def), with γ=5k-4; * aS^1_(abc)(def)→ dS^2_(abc)(def), with γ=1; * S^2_(abc)(def)b→ eS^3_(abc)(def), with γ=1; * S^3_(abc)(def)c→ f*, with γ=1; * *→ϵ, with γ=1; * any other operation has cost 5h+1 in D'. The algorithm requires polynomial time. Source and target strings are unchanged, the limit h has to be multiplied by 5 and the size of the alphabet (and of D') is increased by a polynomial function: |Σ'|=O(|Σ|^6). We can observe that the algorithm is actually a reduction, i.e: x∈3 x' ∈2 Suppose x=<v,w,D,h>∈3, i.e. ∃ T s.t. T(v)=w, γ(T)≤ h. Let be T=t_1t_2… t_n: it is possible to “simulate” each t_i on x' with a sequence of one or more operations T'_i at cost γ(T'_i)=5*γ(t_i). Insertions, deletions and substitutions require a single operation, while, for 3-substitutions, the whole sequence of operations described at point <ref> is needed, with total cost of 5k, where k is the original 3-substitution cost. Therefore, x=<v,w,D,h>∈3⇒ x'=<v,w,D',5h>∈2 On the other hand, suppose x=<v,w,D,h>∉3. It can be observed that each operation on x' either: * has cost larger than 5h+1 and can not be part of an acceptable sequence; * corresponds to an operation t_i on x, with cost 5*γ(t_i); * is part of a 3-substitution simulation. Each step of the sequence can be executed only after the previous and the first step introduces a symbol that can not be part of w'. The only possibility to remove “exogenous” symbols is to apply all the operations in the sequence, at cost 5*γ(t_i), where t_i is the simulated 3-substitution. Therefore, x=<v,w,D,h>∉3⇒ x'=<v,w,D',5h>∉2 § CONCLUSIONS In this work we studied the computational complexity of the problems of computing the cost of the k-Hamming and k-Edit distances, for k≥ 2, proving that the decision versions that include the description of the distance as part of the instances are, respectively, ℙ-SPACE-complete and NEXPTIME-complete. We have some preliminary results, not included in this paper, for some special cases where the size of the description of the distance is considered constant. For instance, we found a polynomial time algorithm to compute the 2-Hamming distance when every operation has the same constant cost. It is an open problem to find the complexity of solving both problems as the lengths of the two words increase when the distance is fixed, i.e. its size is considered as a constant, or, more generally, when the complexity is further parameterized analogously as done in <cit.> for the swap-insert correction distance. splncs04
http://arxiv.org/abs/2306.06170v1
20230609180005
Intranight optical variability of TeV blazars with parsec-scale jets dominated by slow-moving radio knots
[ "Vibhore Negi", "Gopal-Krishna", "Hum Chand", "Silke Britzen" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
firstpage–lastpage Analytic description of monodromy oscillons V.E. Maslov July 31, 2023 =========================================== BL Lac objects detected at TeV energies preferentially belong to the subclass called ‘high-frequency-peaked’ BL Lacs (HBLs). Parsec-scale radio jets in these TeV-HBLs often show dominant, slow moving radio knots that are at most mildly superluminal. We report the first systematic campaign to characterise the Intra-Night Optical Variability (INOV) of TeV-HBLs using a representative sample of 6 such sources, all showing a fairly high degree of optical polarization. Our campaign consists of high-sensitivity monitoring of this sample in 24 sessions of more than 3 hour duration each. For these TeV-HBLs, we find a striking lack of INOV and based on this, we discuss the importance of superluminal motion of the radio knots vis-a-vis the optical polarization, as the key diagnostic for INOV detection. galaxies: active — galaxies: jets — BL Lacertae objects: general — quasars: general — galaxies: photometry § INTRODUCTION Quasars whose observed radiation at centimetre and shorter wavelengths arises predominantly from a jet producing nonthermal radiation relativistically beamed towards the observer, are termed as blazars. They exhibit flux variability across the electromagnetic spectrum on diverse time scales <cit.>. Spectroscopically, blazar population is subdivided between `broad-line emitting’ flat-spectrum radio quasars (FSRQs) and BL Lac objects (BL Lacs) showing an almost featureless optical/UV spectrum <cit.>, excepting the few cases for which spectral features due to host galaxy have been detected. Blazars with synchrotron emission peaking at high frequencies, between UV and X-ray bands, i.e., ν_syn^peak >10^15 Hz are most commonly BL Lacs and these are called HBLs (e.g., reviews by ). Compared to the BL Lacs with synchrotron spectra peaking below ∼ 10^14 Hz (called LBLs, ; ), HBLs are preferentially detected at TeV energies and a few dozen such TeV-HBLs have been catalogued <cit.>. HBLs typically have modest intrinsic radio luminosities, as compared to LBLs and are usually hosted by `low-excitation radio galaxies’ (LERGs), whose central engines are powered by radiatively inefficient gas accretion on to the central supermassive black holes (SMBH, see, e.g.; ). It is commonly believed that the parent (i.e., misaligned) population of BL Lacs is Fanaroff-Riley type I (FR I, ) radio galaxies <cit.>. <cit.> showed that relativistic beaming, rather than obscuration, of the nuclear jets can account for the 10 - 10^4 difference in radio and optical luminosities between BL Lacs and FR I radio galaxies and the required beaming typically needs bulk Lorentz factors of just a few <cit.>. The transverse dual-velocity structure of jets was independently hypothesized by <cit.> taking into account the observed correlation between the radio and the optical core luminosity in FR I radio galaxies and BL Lacs. A more direct evidence for the `spine-sheath’ jet scenario comes from the observed limb-brightening of parsec-scale jets in several lower-luminosity radio sources, e.g., the HBLs Mrk 421 and Mrk 501 (, ), and also in some kiloparsec-scale jets <cit.>. Another well-documented manifestation of blazar activity is their Intra-Night Optical Variability (INOV) ( and references therein; ). At least in the context of blazars, INOV is believed to arise mainly due to a combination of two factors: (i) generation of turbulence within the jet plasma whose synchrotron emissivity and fractional polarization can increase while passing through one or more shocks ( and references therein; ; also, ) and (ii) Doppler factor δ_j of the post-shock turbulent jet plasma. While the former condition is crucial for inducing micro-variability, the latter can play a key role in making it detectable (via a Doppler boost). A strong dependence of INOV on fractional optical polarization (p_opt) was first established by <cit.>, who showed that a flat/inverted radio spectrum by itself does not ensure a strong tendency for INOV. The question remains whether p_opt alone suffices, or the other factor mentioned above, namely, a strong beaming as inferred from the apparent speed of the VLBI radio knots, also plays a dominant role? Since, to our knowledge, no systematic investigation of this issue has been reported, we have carried out an INOV campaign targeting representative sample of 6 TeV blazars. The crucial aspect of these blazars is that even though they fall within the high polarization class (HPQ), their nuclear jets are dominated by radio knots showing at most mildly superluminal motion which is statistically consistent with zero radial velocity from the core, in a majority of cases (see, Table <ref>). The selection of the sample representing this extreme subset of HPQs is described in Section <ref>. The observations and data reduction procedures are outlined in Sections <ref> & <ref>. Section <ref> presents the results together with a brief discussion. Our main conclusions are summarised in Section <ref>. § SAMPLE SELECTION For the purpose of (optical) differential aperture-photometry, the present sample of 6 TeV-HBLs (Table <ref>) has been drawn from the VLBI data published in <cit.> for a sample of 38 TeV-HBLs. We imposed a limit of z ≳ 0.3, in order to minimise the relative contribution from the host galaxy and thereby the possibility of claiming spurious INOV detection, in case the `point spread function' (PSF) changes during the monitoring session <cit.>. This resulted in exclusion of 30 of the sources. Another two sources got discarded due to the second filter, imposed by observational considerations, namely (i) declination > 0 and (ii) m_r≤ 17.50-mag, taking m_r from the Pan-STARRS DR1 (). This left us with a sample of 6 VLBI monitored TeV-HBLs (Table <ref>). It is seen that for only two of the 6 sources, J0507+6737 and J1427+2348, the estimated β_app deviates from zero by more than ∼ 2σ, the most deviant being J0507+6737 for which the deviation is significant at 5.1σ (but, even in this case, the motion is only mildly superluminal). § THE MONITORING AND DATA REDUCTION The sample of 6 TeV-HBLs was monitored in Johnson-Cousins R-band in 24 sessions (i.e., 4 sessions per source), using the 1.3-metre Devasthal Fast Optical Telescope (DFOT; ) located at Devasthal station of ARIES (India). The images were recorded on a Peltier-cooled ANDOR CCD having 2k × 2k ( 0.53 arcsec pixel^-1) pixels, covering a field of view of 18.5 × 18.5 arcmin^2. The CCD detector has a gain of 2 e^- per analog-to-digital unit (ADU) and a readout noise of 7e^- at a speed of 1000 kHz. In each session, one target blazar was monitored continuously for minimum 3 hours, with a typical exposure of 1.5–5 min per frame. The pre-processing and cleaning of the CCD frames was done following the standard procedures in IRAF. The instrumental magnitude of the blazar and the two (steady appearing) comparison stars contained in all the CCD frames taken in the session were determined by aperture photometry <cit.>, using the DAOPHOT II (Dominion Astronomical Observatory Photometry II) package. The PSF was estimated by averaging the full width at half-maximum (FWHM) of the Gaussians fitted to the brightness profiles of five moderately bright stars within each frame, and aperture radius was set equal to two times the PSF (see e.g., ). The variation of PSF during each session is plotted in the bottom panel in the online Figures S1-S6. For each session, we then derived differential light curves (DLCs) for all pairs involving the target blazar and the chosen two comparison stars (Figures S1-S6 and Tables S1 and S2 available online as Supporting Information). § STATISTICAL ANALYSIS To ascertain the presence of INOV in our TeV-HBL sample, we applied the widely used F_η test <cit.>, following the basic procedure described in <cit.> and <cit.>. The two steady comparison stars were chosen by inspecting several star-star DLCs derived for each session and the F_η test was applied to the DLCs of the target blazar relative to the two comparison stars (whose basic parameters are listed in the online Table S2). The F-values for the two blazar DLCs of a session are computed as: F_1^η = Var(q-s1)/η^2 ∑_i=1^Nσ^2_i,err(q-s1)/N, F_2^η = Var(q-s2)/η^2 ∑_i=1^Nσ^2_i,err(q-s2)/N where Var (q - s1) and Var (q - s2) are the variances of the two DLCs of the target blazar, and σ_i, err(q - s1) & σ_i,err(q - s2) represent the rms error returned by DAOPHOT on the i^th data point in a DLC of the target blazar. N is number of data points in the DLCs and the scaling factor η = 1.54 <cit.>. Online Table S2 (Column 5) compares the computed values of F_η for the two blazar DLCs of each session, with the critical value of F (=F_c^α) estimated for that session. The values of α are set at 0.05 and 0.01, corresponding to 95 per cent and 99 per cent confidence levels for INOV detection. If the computed F_η for a DLC of the target blazar exceeds F_c^α, the null hypothesis (i.e., no variability) is discarded at the corresponding confidence level. Thus a DLC is classified as variable (`V') if the computed F_η≥ F_c (0.99); probably variable (`PV') if the Fη falls between F_c(0.95) and F_c(0.99); and non-variable (`NV') if F_η≤ F_c(0.95). Note that the target blazar in a session is designated as variable (V) only if both its DLCs (relative to the two comparison stars) belong to the `V' category, and `NV' if any of the two DLCs is of `NV' type. The remaining sessions are designated `PV'. The last column of the online Table S2 lists the session's averaged photometric accuracy, the `Photometric Noise Parameter' (PNP) = √(η^2⟨σ^2_i,err⟩) , where η = 1.54. § RESULTS AND DISCUSSION The present observations were mostly made under good sky conditions, using a 1.3-metre telescope located at a good site. With a typical threshold of ψ∼ 2% for INOV detection, these observations compare well, both in sensitivity and cadence, with practically all other INOV observations reported in the literature. Yet, strikingly, INOV was not detected in any of the 24 monitoring sessions targeting our sample of 6 TeV-HBLs (online Figures S1-S6; Table S2). One possible exception is the session on 2021-10-10, during which a hint of gradual fading by ∼ 2.5% over 3 hours was noticed for the z = 0.314 blazar J0507+6737. The fading was observed relative to both comparison stars which themselves remained steady throughout that session, as did the PSF (Fig. <ref>). Interestingly, this is the only blazar in our sample for which <cit.> have reported a (mildly) superluminal motion at a high confidence level (β_app = 2.23 ±0.44c, i.e., 5.1σ, see Table <ref>). Even taking, conservatively, this possible INOV detection as confirmed (despite its formal classification being `non-variable', see online Table S2), the INOV duty cycle for our sample would still be only ∼ 4%. This is miniscule in comparison to (i) the INOV DC of ∼60% found for the sample of 13 TeV detected LBLs/FSRQs, which too mostly lie at z > 0.3 <cit.>, and (ii) the INOV DC of ∼ 60 - 70% generally found for LBLs <cit.>. Thus, TeV-HBLs with parsec-scale jets dominated by subluminal (or mildly superluminal) radio knots, appears to be an extreme subclass of blazars with an INOV duty cycle bordering on zero. While such a possibility, i.e., INOV positively correlating with β_app has been hinted in some INOV studies <cit.>, the present study demonstrates this link, for the first time with statistical robustness, based on an extensive INOV campaign focused on a blazar sample selected specifically for addressing this question (see Section <ref>). Here, it may be reiterated that our TeV-HBLs do exhibit the other common trait of blazars, namely a substantial fractional polarization (p_opt > 3%, Table <ref>; Section <ref>). Not only is the maximum recorded p_opt consistent with this lower limit, but so is the mean value p_opt (except in the case of the blazar J0136+3905, but here too, p_opt was found to be above 3% in 2 out of the total 7 measurements available in the RoboPol survey () [Note that, unless the number of measurements is very large, the maximum value of p_opt may be preferred over the mean value, as this would reduce the chance of missing out genuine blazars/HPQs since their polarization is known to vary and hence might average below the defining threshold of 3% due to frequent dips <cit.>.]). In this context, we further note that the high-polarization TeV-HBL J0416+0105 of our sample, having p_opt (mean) = 6.3±0.3% did not exhibit INOV (down to the 2% detection limit), not only in the 4 sessions reported here, but also in the 8 sessions (2016-18) reported by <cit.>. It is also noteworthy that the non-detection of INOV even at ∼2% level for essentially our entire sample of TeV-HBLs, also circumscribes the role of accretion disk flares (or instabilities) as a possible cause of INOV <cit.>, at least in the case of geometrically thick radiatively-inefficient disks that are supposed to fuel intrinsically low-power AGN like the TeV-HBLs being discussed here (Section <ref>). As mentioned in Section <ref>, the two main factors perceived to be responsible for INOV of jet-dominated AGN (blazars) are: (i) injection/growth of turbulence within the jet plasma whose synchrotron emissivity and fractional polarization get enhanced while passing through one or more shocks (e.g., and references therein; ; see also, ); and (ii) the bulk Doppler factor δ_j of the post-shock turbulent plasma in the jet. While the former physical process is crucial for the origin of micro-variability (INOV) of the jet's emission, the latter factor holds the key to the INOV detection (via a Doppler boost). Clearly, it is important to find observational basis for this scenario. A decade ago, <cit.> investigated the dependence of INOV on fractional optical polarization (p_opt), by carrying out sensitive, high-cadence optical monitoring of 21 radio-loud quasars, including 9 high- and 12 low-polarization quasars (HPQs and LPQs), taking the dividing line at the conventional p_opt = 3% <cit.>. Remarkably, the HPQ subset showed strong INOV (amplitude ψ > 4%) on 11 out of 29 nights, in stark contrast to the LPQs for which strong INOV was observed on just 1 out of 44 nights. This clearly established a high p_opt as a key attribute of the radio quasars showing strong INOV. But, is this alone a sufficient marker for detection of strong INOV? What about the role of the afore-mentioned second factor, namely, δ_j ? Indeed, an observational hint for such a correlation was noticed in a recent INOV study of 3 narrow-line Seyfert1 galaxies <cit.>. The results presented here place such a correlation on a statistically firm footing, for the first time, by focusing on an extreme subset of blazars, namely TeV-HBLs, whose nuclear radio jets exhibit only slow moving (at most mildly superluminal) features, consistent with small Doppler boosting of their emission. For this subset, the present work demonstrates an essentially total lack of INOV detection. This can be readily understood if the apparent kinematics of these dominant radio knots, which appear at most mildly superluminal, reflects the bulk motion of the underlying jet (at least the sheath layer), as argued by <cit.> and others <cit.>. In this framework, compared to the highly superluminal radio knots typically observed in blazar jets, the dominant slow moving radio knots observed in the VLBI jets of TeV-HBLs would have to be much more luminous intrinsically, in order to be detectable even without the benefit of a strong Doppler boosting. On the other hand, this would not be required in case the VLBI knots are mere `patterns’, kinematically decoupled from the underlying (much faster) jet, as suggested in several studies (e.g., and references therein; ). In that event, the observed brightness of the radio knots and the level of INOV originating in the jet’s turbulent zone, would both be dictated by the beaming associated with the bulk velocity of the underlying jet (the, so called, `emission velocity’ of the jet, cf. ), despite little direct evidence for a relativistic flow coming from VLBI observations. The rather tight correlation of INOV with the apparent speed of the VLBI knots, as found here, suggests that at least the zone of turbulence within the (post-shock) jet-flow remains kinematically coupled to the (slow moving) shock/knot, perhaps due to entanglement of the magnetic field lines, regardless of whether the observed kinematics of such shocks reflects the bulk speed of the underlying jet. Finally, it should be emphasized that the very low INOV duty cycle inferred here for TeV-HBLs represents a `population characteristics’ and it is not meant to be a permanent metric for the INOV of any individual member of this class of blazars, e.g., by implying that no such blazar would ever exhibit a strong INOV. This important point has been underscored in <cit.> by highlighting the case of the prominent TeV-HBL PKS 2155-304. This blazar, well-known for ultra-rapid variability of its TeV emission, is prone to slipping into prolonged spells of INOV quiescence, as noted by these authors. Another such example, the TeV-HBL PG1553+111, is a member of the present sample itself (Table <ref>). Its low INOV duty cycle implied by the non-detection of INOV on all 4 nights during 2022 (online Fig. S6) is statistically compatible with its recent study by <cit.> in which INOV was detected on just 4 out of 27 nights of R-band monitoring during 2019. In contrast, during 2009–10, this blazar exhibited strong INOV (ψ≳ 5%) on all 3 nights it was monitored in R-band <cit.>. This indicates a transition to INOV quiescence, occurring somewhere between 2009-10 and 2019-22. Although a detailed comparison of this pattern with the jet's kinematic on parsec scale is currently lacking, it is interesting to note that the published MOJAVE images at 15 GHz do indicate a drop in the apparent speed of the dominant VLBI knots by a factor of ∼ 3 over the period from 2008 to 2018 <cit.>, which is consistent with the above-inferred change in the INOV state (from high to low) of this TeV blazar. It would be desirable to garner further evidence on the question whether INOV state transitions are accompanied by a changing kinematics of the parsec-scale radio jets. § CONCLUSIONS We have carried out an extensive, high-sensitivity intranight optical monitoring programme targeted on a well-defined sample of 6 TeV detected HBLs whose parsec-scale jets had been shown to be dominated by radio knots exhibiting either subluminal, or at most mildly superluminal motion. An essentially zero INOV duty cycle is estimated here from the 24 monitoring sessions devoted to these TeV-HBLs, despite their exhibiting fairly high degree of optical polarization. This INOV duty cycle is at least an order-of-magnitude lower than that typical of radio-selected blazars (LBLs, whose parsec-scale jets are usually dotted with highly superluminal knots, e.g., ). Thus, TeV-HBLs with slow-moving VLBI knots are clearly identified for the first time as an extreme sub-population of blazars, from the perspective of INOV. Their highly subdued INOV, as found here, demonstrates that the presence of dominant superluminal radio knot(s) in the parsec-scale jet constitutes a key diagnostic for INOV detection and while a high degree of optical polarization is also an important marker, as shown in <cit.>, it alone is not a sufficient diagnostic for INOV detection. § ACKNOWLEDGEMENTS We thank the anonymous referee for the valuable comments on our manuscript. GK would like to thank Indian National Science Academy for a Senior Scientist position. The assistance from the scientific and technical staff of ARIES DFOT is thankfully acknowledged. VN thanks Krishan Chand, Nikita Rawat, Bhavya Ailawadhi and Sriniwas M Rao for help with observations. § DATA AVAILABILITY The data used in this study will be shared on reasonable request to the corresponding author. mnras
http://arxiv.org/abs/2306.10238v1
20230617023947
Quantum super-resolution for imaging two pointlike entangled photon sources
[ "Huan Zhang", "Wei Ye", "Ying Xia", "Zeyang Liao", "Xue-hua Wang" ]
quant-ph
[ "quant-ph" ]
1. State Key Laboratory of Optoelectronic Materials and Technologies, School of Physics, Sun Yat-sen University, Guangzhou 510275, China 2. School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, China [email protected] We investigate the resolution for imaging two pointlike entangled sources by using the method of the moments and the spatial-mode demultiplexing (SPADE), where the pointlike entangled sources can be generated by injecting single-mode sources with arbitrary quantum statistics distribution into an optical parametric amplifier (OPA). We demonstrate that the separation estimation sensitivity is mainly determined by the photon distribution in each detected modes and it can be enhanced by either increasing the squeezed parameter of the OPA or eliminating the relative phase difference of the entangle sources. Furthermore, in the limiting case of infinitely small source separation, the usage of entangled sources can have better resolution than those using incoherent and coherent sources. The results here can find important applications for the quantum super-resolution imaging and quantum metrology. § INTRODUCTION Due to the Abbe's diffraction limit <cit.>, the minimum resolvable separation of classical optical instrument is about half wavelength of the detection light source (i.e., d_min=λ/2NA, where λ is the wavelength of light and NA is the numerical aperture). How to further improve the resolution of the optical instrument is always a hot research topic which has attracted extensive interests in the past few decades <cit.>. In the past few decades, a number of methods have been proposed to overcome the diffraction limit, such as the stimulated emission depletion microscopy (STED) <cit.>, structured illumination microscopy (SIM) <cit.>, the single-molecule localization microscopy (SMLM) <cit.>, and stochastic optical fluctuation imaging (SOFI) <cit.>. These methods have been widely used for biological imaging with typical resolution being about 20-50 nm. Using quantum effects such as quantum entanglement <cit.>, quantum coherence <cit.> and quantum statistics <cit.>, the diffraction limit can also be in principle overcome. A natural question arises that what is the ultimate limit of quantum imaging? The above question may be addressed from the point of view of quantum metrology <cit.>. The simplest task for the superresolution optical imaging is discrimination of two close pointlike sources <cit.>. According to the theory of quantum metrology, the quantum limit for the separation estimation of point-like sources is determined by the quantum Cramer-Rao bound (Δ d)^2≤1/F_Q, where F_Q is the quantum Fisher information (QFI), quantifying the sensitivity of the quantum state of the point source to the change of distance d <cit.>. Based on this idea, Tsang et al. showed that the estimation error is not diverging as the separation between the two point sources become infinitely small and moreover they proved that the spatial-mode demultiplexing method (SPADE) can saturate this quantum bound <cit.>. In particular, the separation of two equally bright light sources within the diffraction limit can be estimated with high sensitivity by analyzing the signals on different spatial modes (e.g., Hermitian Gaussian modes) instead of direct intensity measurements, which has been experimentally demonstrated <cit.>. This method has also been generalized to the two- and three- dimensional cases <cit.>. Recently, Ugo Zanforlin et al. experimentally demonstrated the super-resolution imaging task for resolving two sources with unequal brightness based on hypothesis testing and quantum metrology techniques <cit.>. Another resolution-enhanced measurement scheme based on the method of moments has also been proposed, which enables us to analyze and resolve bright unrelated thermal sources without requiring the full measurement statistics <cit.>. Although early research mainly focused on incoherent pointlike sources, discrimination of mutually coherent pointlike sources has also been studied <cit.>. In this paper, we study the super-resolution imaging for a pair of pointlike entangled sources using the SPADE based on the method of moments, where the pointlike entangled sources can be generated by injecting quantum states with different different photon statistics into the OPA. We show that the sensitivity of separation estimation in our scheme does not vanish even if the two entangled sources are infinitely close which renders the diffraction limit irrelevant to the problem. We also show that the sensitivity using the entangled sources is significantly better than those using the incoherent and coherent sources and the sensitivity can be enhanced by increasing the squeezing parameter. In addition, we also show that smaller phase difference between the entangled sources is in favor for better sensitivity. The results here can find important applications for the quantum super-resolution imaging and quantum metrology. The structure of the paper is as follows. In Sec. II, we illustrate the schematic setup and basic principle for the imaging of a pair of pointlike entangled sources. After introducing the method of moments for estimating sources separation in Sec. III, we investigate in detail the estimation sensitivity of our scheme in Sec. IV. We summarize our results in Sec. V. § MODEL OF SOURCES AND IMAGING SYSTEM In this section, we propose theoretically an optical scheme for resolving a pair of pointlike entangled sources (corresponding to the orthogonal modes with field operators ŝ_1,2) located at positions 𝐫_1,2=( ± d/2,0,0). The light emitted by this sources can be considered the result of a single mode light source that inject into the optical parametric amplification (OPA), and then adds a phase shifting element θ to one of the output modes (see the left part of Fig.1). The OPA can be described as a two-mode unitary squeezed operator S_2(r) =exp{r( ŝ_0^†v̂_0^†-ŝ_0v̂_0) } with a real squeezing parameters r. In these situations, the transition of the field operators from the incident field ŝ_0 to the modes of sources ŝ_1,2 can be given by <cit.> ( [ ŝ_1; ŝ_2^† ] ) =( [ 1 0; 0 e^iθ ] ) ( [ cosh r -sinh r; -sinh r cosh r ] ) ( [ ŝ_0; v̂_0^† ] ), where v̂_0^† is the creation field operator of the vacuum mode. For any initial incident field (a single mode light ρ _ŝ_0), corresponding to field operator ŝ_0, the first-order coherency matrix of the modes ŝ_1,2 can be calculated as ⟨ŝ_i^†ŝ_j⟩=( [ N_s_1 C; C^* N_s_2 ] ), with N_s_1=N_scosh ^2r+sinh ^2r, *2(a) N_s_2=N_ssinh ^2r+sinh ^2r, *2(b) C=-1/2e^iϕTr[ρ _s_0s_0^2]sinh 2r, *2(c) where N_s=Tr[ρ _0ŝ_0^†ŝ_0] is the average intensity of the incident field. In particular,the case of r→∞ corresponds to equally bright sources and r→ 0 corresponds to all light in mode ŝ_1. Although theoretically we can achieve an arbitrary large squeezing parameter, it is an impossible task experimentally. Therefore, we only focus on finite squeezing parameter r∈[ 0,1] of the OPA in the following discussion, as far as the current technology is concerned, which means that it is easy to generate entangled light sources experimentally for finite squeezed intensity <cit.>. In the imaging system, the separation parameters d of the sources ŝ_1,2, are estimated from measurements of the diffracted light. Now, let us consider that a pair of pointlike entangled sources emit light, which passing through a diffraction-limited imaging system with finite aperture that has a transmissivity κ and a point spread function (PSF) u_0(𝐫) (see the right part of Fig. 1). In general, for the case of the paraxial approximation, the parameter κ does not depend on the positions of the light source ŝ_1,2. The evolution of the field operators through a diffraction-limited imaging system can actually be described as the following transformations <cit.> ŝ_1,2→√(κ)b̂_1,2+√(1-κ)v̂_1,2, where b̂_1,2=∫ d^3𝐫u(𝐫-𝐫_1,2)b̂_𝐫 are the image operators and v̂_1,2 are the field operators of auxiliary environmental modes of diffraction-limited imaging system, which are assumed to be in the vacuum state. Due to the diffraction limit, the image profile functions u(𝐫-𝐫_1) and u(𝐫-𝐫_2) are usually overlapped when the distance between the two point sources is small. Hence, it should be noted that the image operators b̂_1 and b̂_2 do not satisfy the usual canonical commutation relations. In order to obtain the ultimate sensitivity of separation parameters d estimation, we can apply the moment-based estimation technique and here we use the SPADE method to measure the light in the image plane <cit.>. The SPADE can be described as measurements over some field modes h_m( 𝐫) with corresponding operator â_m, where h_m( 𝐫) are general nonlocalized modes (such as the Hermite-Gauss modes <cit.> which can saturate the QFI for the estimation of the separation between equally bright thermal sources). The input-output relation of the field operators of the measurement modes is given by <cit.> â_m=A_mŝ_0+v̂_m, where A_m are complex coefficients and v̂_m are nonnormalized nonorthogonal combinations of the field operators of vacuum modes that are orthogonal to the mode ŝ_0. By using the SPADE method, the separation parameters d can in principle be estimated from the measured numbers of photon N_m=⟨â_m^†â_m⟩ in the image plane. The average number of detected photons in the mth measurement mode reads N_m=| A_m| ^2N_s, and for our scheme as shown in Fig. 1, the complex coefficients A_m is given by A_m = √(κ)∫ d^3𝐫h_m^∗(𝐫) [u_0( 𝐫-𝐫_1) cosh r -u_0( 𝐫-𝐫_2) e^iθsinh r]. From Eqs. (<ref>) and (<ref>), we can see that the average photon number in each measurement modes contain the information of source seperation d, the phase difference θ of sourec modes ŝ_1,2, the transmissivity κ, the PSF u_0(𝐫), and the shapes of the Hermite-Gauss modes h_m( 𝐫). § THE METHOD OF MOMENT AND SPATIAL-MODE DEMULTIPLEXING In this section, we derive the measurement sensitivity using the method of moments <cit.>. For a given observable Ô, an estimator d̂ of the separation parameter d can be obtained from the sample mean o _τ=∑_i=1^τo_i/τ of τ independent measurements of Ô. After performing enough measurements, i.e. τ≫1, according to the central limit theorem, o_τ presents the normally distribution with the mean value ⟨Ô⟩ and the variance (ΔÔ) ^2=⟨Ô^2⟩ -⟨Ô⟩ ^2. The estimation error of the separation parameter d can be calculated by ( Δ d) ^2=( ΔÔ) ^2/τ( ∂⟨Ô⟩ /∂ d) ^2, which also determines the sensitivity of estimating the separation parameter d in the method of moments. According to the Cramer-Rao lower bound ( Δ d) ^2⩾1/τ F(d,Ô), where F(d,Ô) is classical Fisher information. Finally, we can obtain the ultimate sensitivity of separation parameter d by calculating the QFI F_Q( d), i.e., maximizing the classical Fisher information over any positive-operator value measure (POVM) F_Q( d) =max_ÔF(d,Ô). Assuming that a set of parameters {λ_i} is measured, the measurement sensitivity matrix is then given by S_ij=∑_m,nΛ _mn^-1∂⟨Ô⟩ _m/∂λ _i∂⟨Ô⟩ _n/∂λ _j, with Λ _mn=⟨Ô_mÔ_n⟩-⟨Ô⟩ _m⟨Ô⟩ _n being the covariance matrix of the observables. It should be noted that this sensitivity matrix is obtained by optimizing the linear combination of the average values ⟨Ô⟩ _m of observable measurements {Ô_m}. The covariance of the estimator {λ̂_i} is given by the inverse of the sensitivity matrix, i.e., cov( λ̂_i,λ̂_j) =1/τS_ij^-1. In practice, we cannot measure all observables experimentally. Fortunately, the method of moments allows us to avoid estimating parameters from the full photon counting statistics. We take photon number operators N̂_m=â_m^†â_m in the mth measurement mode as observables. By using the Sherman-Morrison formula <cit.> and the method of moments, the elements of the inverse covariance matrix in Eq. (<ref>) can be then calculated as Λ _mn^-1=δ _mnN_m^-1-g^(2) -1/1+( g^( 2) -1) N_D, where g^( 2) =( Δ N_s^2-N_s) /N_s^2 is the degree of second-order coherence of the initial incident field and N_D=∑_mN_m is the total average photon number in the image plane. In our scheme, we assume that all parameters except the separation parameter d of the pointlike entangled sources are known. In other words, we only need to estimate the parameter d, which corresponds to single-parameter estimation. Then, by substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we can obtain the sensitivity, i.e., _d=N_D∑_m1/𝒩_m( ∂𝒩_m/∂ d) ^2+1/Δ N_D^2( ∂ N_D/∂ d) ^2, where 𝒩_m=N_m/N_D is the ratio of the photon number of the mth Hermite-Gaussian mode to the total photon number of the image plane, and Δ N_D^2 is the variance of the total average photon number in the image plane, which can be calcaulate as Δ N_D^2=∑_mnΛ _mn=N_D[ 1+( g^(2) -1) N_D]. According to Eq. (<ref>), the variance of the separation parameter d can be derived as ( Δ d) ^2=1/τ1/N_D _𝒩 + _D, where _𝒩=∑_m1/𝒩_m( ∂𝒩_m/∂ d) ^2, *14(a) _D=1/Δ N_D^2( ∂ N_D/∂ d) ^2. *14(b) From Eq. (<ref>), we can find that the ultimate sensitivity of the separation parameter only depends on the relative photon number 𝒩_m and the total photon number in image plane N_D. _𝒩 can be viewed as the sensitivity of the relative intensity measurements (RIM), which only depends on the relative photon number 𝒩_m, while _D can be viewed as the sensitivity of the total photon number detection (TPD). Next, we analyze the sensitivity of separation parameter estimation in details for different initial incident fields. For this purpose, we consider that the spatial field distribution, in the diffraction-limited imaging system, is a Gaussian PSF <cit.>, i.e., u_0( 𝐫) =√(2/πω ^2)exp( -|𝐫| ^2/ω^2), where ω is the width of the PSF. For this PSF, the quantum Cramer-Rao bound can be approached by demultiplexing Hermite-Gauss (HG) modes <cit.>. The measurement HG modes can be defined as h_m( x,y) =1/√(2^mm!)H_m( √(2)x/ω) u_0( √(x^2+y^2)), where H_m( ·) is the Hermite polynomial. For ideal measurement in the HG mode basis, combining Eqs. (<ref>), (<ref>) and (<ref>), the coefficients A_m can be calculated as A_m=√(κ)[ ( -1) ^mcosh r-e^iθsinh r] G_m( d/2ω), where G_m(γ )=γ ^me^-γ ^2/2/√(m!) with γ=d/2ω. Inserting Eq. (<ref>) into Eq. (<ref>), one can achieve the mean photon number in the measurement modes N_m=N_sκ[ cosh 2r-( -1) ^mη] G_m^2( d/2ω), with η=cosθsinh2r, and the total mean photon number can be calculated as N_D=N_sκ( 1+℘η), where ℘ is the overlap function between the images of the two point sources, which is given by<cit.> ℘ =∫ u_0( 𝐫-𝐫_1) u_0( 𝐫-𝐫_2) d𝐫 =exp( -d^2/2ω ^2). where we have used the PSF shown in Eq. (<ref>). § SENSITIVITY OF THE SEPARATION PARAMETERS D ESTIMATION In this section, we theoretically and numerically calculate the sensitivity of our scheme under different circumstances. We have demonstrated above that the ultimate sensitivity of parameter d estimation is determined by the RIM and TPD. Here, let's first consider the case when the full HG bases are measured (i.e., m→∞). From Eq. 14(a), the RIM sensitivity can be calculated as _𝒩=κ N_s/ω ^2N_D( cosh 2r+℘η -d^2℘ηcosh 2r/ω ^2( cosh 2r-℘η) ). For simplicity, we here define R_𝒩=ω^2N_D _𝒩/N_sκ as the normalized RIM sensitivity which is given by R_𝒩=cosh 2r+℘η -d^2℘ηsinh 2r/ω ^2( cosh2r-℘η) . From the above equation, we can see that the normalized RIM sensitivity depends on the source seperation d, the phase difference θ, the squeezed parameters r of the OPA but not on the property of the initial incident field ρ _0. On the other hand, for the TPD sensitivity, it is seen from Eq. 14(b) that the TPD sensitivity does not depend on average photon number of individual HG modes N_m while only depends on the total average photon number detection N_D. Substituting Eqs. (<ref>) and (<ref>) into 14(b), we can obtain _D=N_s^2κ ^2d^2℘ ^2η ^2/ω ^4N_D[ 1+N_sκ( g^( 2) -1) ( cosh2r-℘η) ] . It is not difficult to find that the TPD sensitivity depends on the quantum statistics distribution of the incident light field. The incident light field with antibunching statistics ( g^( 2) <1) can provide a better sensitivity than those with bunching statistics ( g^( 2)>1). This is because the incident field with antibunching statistics has a lower photon number variance. Similar to the RIM sensitivity, here we also define R_D=ω ^2 _D/N_sκ as the normalized TPD sensitivity, which is given by R_D=℘ ^2η ^2d^2(cosh 2r-η)^-1/2ω ^2[ 1+( g^( 2) -1) N_sκ(cosh 2r-℘η) ] . Finally, we discuss overall sensitivity of the separation d estimation, which is given by Eq. (<ref>) and its normalized sensitivity can be obtained by combining the RIM sensitivity Eq. (<ref>) with the TPD sensitivity Eq. (<ref>), i.e., R_d=R_𝒩+R_D We assume that the incident field ρ_0 is in a coherent state whose photon statistics is Poisson distributed and g^( 2) =1. In this case, we have R_D=℘ ^2η ^2d^2/2ω ^2( cosh 2r-η). From Eqs. (<ref>) and (<ref>), we can find that, after propagation through the diffraction-limited imaging system, the normalized separation estimation sensitivity does not depend on the source intensity of the coherent state. In order to clearly see the effect of different parameters d and r on the amount of the normalized sensitivity, we plot the normalized sensitivity R_𝒩 as a function of r and d when θ=0 (Fig. 2). Form Fig. (2a), we can see that the improvement of the normalized RIM sensitivity for a fixed d can be found by increasing the squeezed parameters r of the OPA especially when d approaches zero. Surprisingly, the optimal value of the normalized RIM sensitivity appear in the range of d≪2ω which is completely different from that using the direct imaging where the sensitivity approach zero for very small separation (Fig. 2(b)). This indicates that the pointlike entangled symmetric sources can still be distinguished by using the SPADE method even when the separation d is much less than the diffraction limit. Meanwhile, we also find that the ultimate separation d estimation sensitivity of symmetric sources is mainly dependent on the RIM sensitivity (Fig. (2c)). This is due to the fact that the TPD sensitivity of the coherent state is far less than the RIM sensitivity for a certain parameter (d,r) threshold, i.e., R_𝒩≫ R_D. On the other hand, we also study the separation estimation sensitivity for different second-order correlation g^( 2) of the initial incident light field. From Eq. (24), we can see that the quantum statistics of the incident field can affect the separation estimation sensitivity. From Fig. 3(a), we can see that the separation estimation is enhanced when g^( 2) decreases for all the chosen squeezing parameters. This indicates that anti-bunching of the incident field is beneficial for the separation estimation when d/2ω=0.5. In Fig. 3(b), we compare the separation estimation sensitivity as a function of d for the coherent state (g^( 2) =1), the thermal state (g^( 2) =2) and the antibunching state (g^( 2)=0.5) as the initial incident sources. We can find that the light sources with different g^( 2) can give different estimation sensitivities when d≈ω and antibunching light source can improve the sensitivity in this region. However, the separation estimation sensitivity of the pointlike entangled sources generated by the non-classical source is not significantly improved than that of the classical sources when the separation d is much less than the diffraction limit (i.e., d→ 0). Therefore, when the source separation is very small, the pointlike entangled sources generated by the OPA can give similar sensitivity for any initial incident sources in our scheme. In Fig. 4, we compare the normalized separation estimation sensitivities for the incoherent sources<cit.>, mutually-coherent sources <cit.>, and entangled sources when d→0. From the figure, we can see that the normalized separation estimation sensitivity R^inc_d=1 for the incoherent sources and R^muc_d=1-2√(T(1-T)) for mutually coherent sources generated by the optical beam splitter, where T=cos^2ϕ is the transmissivity of the optical beam splitter used in Ref. <cit.>. It is seen that the estimation sensitivity of the coherent sources is less than that of the incoherent sources, and interestingly the normalized sensitivity disappears for the equally bright mutually coherent sources (T=0.5). In contrast, the estimation sensitivity of the entangled sources is always greater than 1 and increases as r increases (blue line with star symbols in Fig. 4). This can be easily seen from the asymptotic behavior of the estimation sensitivity when d→0. When d→0 and θ=0, R_D→0 which can be seen from Eq. (26) and R_d≈ R_N→cosh2r+sinh2r which is always larger than 1 and clearly increases with r. These results indicate that the estimation sensitivity can be enhanced when the entangled light sources are used instead of coherent and incoheren classical sources. Finally, in Fig. 5, we discuss the impact of the phase differences θ∈ 0,π ] on the normalized sensitivity when given squeezed parameters r=0.5 and g^(2)=1. Our numerical results show that increasing the phase difference θ of the pointlike entangled sources can reduce the RIM sensitivity of our scheme when d≪ 2ω, but it can increase the sensitivity when d≃ 2ω [see Fig. 5(a)]. For TPD sensitivity, when 0≤θ≤π/2, R_D decreases as θ increases. In contrast, when π/2≤θ≤π, R_D increases as θ increases. When θ=π/2, R_D=0. The optimal normalized TPD sensitivity of resolving symmetric sources (θ=0) is about 1.75 times higher than that of antisymmetric sources (θ=π). From Fig. 5(c), when θ =π /2, R_d=R_𝒩=cosh 2r/ω ^2 which does not depend on d. For the small separation (i.e., d≪2ω), R_d increases as θ decreases and when θ=0 (symmetrical entangled sources) we have the largest measurement sensitivity. In the contrast, when d≈ 2ω, R_d is the largest when θ=π. These restuls indicate that for deep subdiffraction limit separation, the entangled sources with zero phase difference can have the largest estimation sensitivity. § CONCLUSION In summary, we analyze the separation estimation sensitivity of pointlike entangle sources, which is produced by injecting a single-mode light source with arbitrary quantum statistics distribution into an OPA. By using the method of moments together with the SPADE, we show that the separation estimation sensitivity is completely determined by the photon distribution in each detected modes and the total photon number in the image plane. The results show that the separation estimation error does not diverge even when d→ 0 when the light sources are the pointlike entangled sources generated by the OPA. This indicates that the Rayleigh's curse can be overcome in our scheme. Moreover, the detection sensitivity in the case of symmetric entangled light sources can increase with the squeezing parameter r when d→ 0. In addition, we also compare the effects of the quantum statistical distribution of different incident light sources on the realization of super-resolution imaging. We find that for the separation around the diffraction limit, the incident light field with antibunching photon statistics has higher measurement sensitivity, but when the separation is much smaller than the diffraction limit, the pointlike entangled light source generated by OPA with arbitrary initial incident light field, can achieve similar measurement sensitivity in our scheme. The results here can find important applications in the quantum imaging and metrology with super-resolution and super-sensitivity. § ACKNOWLEDGMENTS This work was supported by the National Key R&D Program of China (Grant No. 2021YFA1400800), the Key-Area Research and Development Program of Guangdong Province (Grant No.2018B030329001), the Guangdong Special Support Program (Grant No.2019JC05X397), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023B1515040023), and the Natural Science Foundations of Guangdong (Grant No.2021A1515010039). Wei Ye is supported by the Scientific Research Startup Foundation (Grant No. EA202204230) at Nanchang Hangkong University. § REFERENCE 66 1 Ernst A 1873 Beitrage zur theorie des mikroskops und der mikroskopischen wahrnehmung Archiv. f. mikrosk. Anatomie. 9 413. 2 Rayleigh 1879 Xxxi. investigations in optics, with special reference to the spectroscope The London, Edinburgh, and Dublin Philos. Magaz. J. Sci. 8 261. Vangindertael2018 Vangindertael J, Camacho R, Mizuno S W H, Dedecker P and Janssen K 2018 An introduction to optical superresolution microscopy for the adventurous biologist Methods Appl. Fluoresc. 6 022003. Pujals2019 Pujals S, Feiner-Gracia N, Delcanale P, Voets I and Albertazzi L 2019 Super-resolution microscopy as a powerful tool to study complex synthetic materials Nat. Rev. Chem. 3 68-84. Hell2000 Klar T A, Jakobs S, Dyba M, Egner A and Hell S W 2000 Fluorescence microscopy with diffraction resolution barrier broken by stimulated emission Proc. Natl. Acad. Sci. 97 8206. Chen2015 Chen X, Zou C, Gong Z, Dong C, Guo G and Sun F 2015 Subdiffraction optical manipulation of the charge state of nitrogen vacancy center in diamond Light Sci. Appl. 4 e230. Gustafsson2000 Gustafsson M G L 2000 Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy J. Microsc. 198 82. Mudry2012 Mudry E, Belkebir K, Girard J, Savatier J, Le Moal E, Nicoletti C, Allain M and Sentenac A 2012 Structured illumination microscopy using unknown speckle patterns Nat. Photon. 6 312. Zeng2014 Zeng X, Al-Amri M and Zubairy M S 2014 Nanometer-scale microscopy via graphene plasmons Phys. Rev. B 90 235418. Xi2019 Zhang K H, Chen X, Liu W, Li M, Liu Y, Wang Y, Luo S, Wang X, Shan C, Xie H, Gao J, Chen X,Jin D, Li X, Zhang Y, Dai Q and Xi P 2019 Super-resolution imaging of fluorescent dipoles via polarized structured illumination microscopy Nat. Commun. 10 4694. Betzig2006 Betzig E, Patterson G H, Sougrat R, Lindwasser O W, Olenych S, Bonifacino J S and Hess H F 2006 Imaging intracellular fluorescent proteins at nanometer resolution Science 313 1642. Zhuang2006 Rust M J, Bates M and Zhuang X 2006 Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM) Nat. Methods 3 793. Dertinger2009 Dertinger T, Colyer R, Iyer G, Weiss S and Enderlein J 2009 Fast, background-free, 3d super-resolution optical fluctuation imaging (SOFI) Pro. Natl. Acad. Sci. 106 22287. Boto2000 Boto A N, Kok P, Abrams D S, Braunstein S L, Williams C P and Dowling J P 2000 Quantum interferometric optical lithography: Exploiting entanglement to beat the diffraction limit Phys. Rev. Lett. 85 2733. Shih2001 DAngelo M, Chekhova M V and Shih Y 2001 Two-photon diffraction and quantum lithography Phys. Rev. Lett. 87 013602. Moreau2019 Moreau P A, Toninelli E, Gregory T and Padgett M J 2019 Imaging with quantum states of light Nat. Rev. Phys. 1 367. Cui2023 Cui D, Yi X, and Yang L P 2023 Quantum imaging exploiting twisted photon pairs Adv. Quantum Technol. 6 2300037. Liao2010 Liao Z Y, Al-Amri M and Zubairy M S 2010 Quantum lithography beyond the diffraction limit via rabi oscillations Phys. Rev. Lett. 105 183601. Liao2012 Liao Z Y, Al-Amri M and Zubairy M S 2012 Resonance-fluorescence-localization microscopy with subwavelength resolution Phys. Rev. A 85 023810. Rui2016 Rui J, Jiang Y, Lu G P, Zhu M J, Zhao B, Bao X H and Pan J W 2016 Demonstration of interferometric atom-pattern engineering via rabi oscillations Phys. Rev. A 93 033837. Cui2013 Cui J M, Sun F W, Chen X D, Gong Z J and Guo G C 2013 Quantum statistical imaging of particles without restriction of the diffraction limit Phys. Rev. Lett. 110 153901. Classen2017 Classen A, von Zanthier J, Scully M O and Agarwal G S 2017 Superresolution via structured illumination quantum correlation microscopy Optica 4 580. Tenne2019 Tenne R, Rossman U, Rephael B, Israel Y, Krupinski-Ptaszek A, Lapkiewicz R, Silbberberg Y, and Oron D 2019 Super-resolution enhancement by quantum image scanning microscopy Nat. Photon. 13 116. Bhusal2022 Bhusal N, Hong M, Miller A, Quiroz-Juarez M A, Leon-Montiel R d J, You C and Magana-Loaiza O S 2022 Smart quantum statistical imaging beyond the abbe-rayleigh criterion NPJ Quantum Inf. 8 83. Delaubert2008 Delaubert V, Treps N, Fabre C, Bachor H A and Refregier P 2008 Quantum limits in image processing Europhys. Lett. 81 44001. Tsang2015 Tsang M 2015 Quantum limits to optical point-source localization Optica 2 646. Giovannetti2011 Giovannetti V, Lloyd S and Maccone L 2011 Advances in quantum metrology Nat. photon. 5 222. 26 Chao J, Ward E S and Ober R J 2016 Fisher information theory for parameter estimation in single molecule microscopy: tutorial J. Opt. Soc. Am. A 33 B36. 27 Tsang M, Nair R and Lu X M 2016 Quantum theory of superresolution for two incoherent optical point sources Phys. Rev. X 6 031033. 28 Lupo C and Pirandola S 2016 Ultimate precision bound of quantum and subwavelength imaging Phys. Rev. Lett. 117 190802. 29 Nair R and Tsang M 2016 Far-field superresolution of thermal electromagnetic sources at the quantum limit, Phys. Rev. Lett. 117 190801. 30 Tsang M 2019 Quantum limit to subdiffraction incoherent optical imaging Phys. Rev. A 99 012305. 31 Lupo C, Huang Z and Kok P 2020 Quantum limits to incoherent imaging are achieved by linear interferometry Phys. Rev. Lett. 124 080503. 32 de Almeida J O, Kolodynski J, Hirche C, Lewenstein M and Skotiniotis M 2021 Discrimination and estimation of incoherent sources under misalignment Phys. Rev. A 103 022406. 33 Zanforlin U, Lupo C, Connolly P W, Kok P, Buller G S and Huang Z 2022 Optical quantum super-resolution imaging and hypothesis testing Nat. Commun. 13 5373. 34 Sorelli G, Gessner M, Walschaers M and Treps N 2022 Quantum limits for resolving gaussian sources Phys. Rev. Res. 4 L032022. 35 Barbieri M 2022 Optical quantum metrology PRX Quantum 3 010202. 36 Gessner M, Fabre C and Treps N 2020 Superresolution limits from measurement crosstalk Phys. Rev. Lett. 125 100501. 37 Hradil Z, Rehacek J, Sanchez-Soto L and Englert B G 2019 Quantum fisher information with coherence Optica 6 1437. 38 Zhang H, Ye W, Chang S, Xia Y, Hu L Y and Liao Z Y 2023 Quantum multiparameter estimation with multi-mode photon catalysis entangled squeezed state Front. Phys. 18 42304. 39 Paur M, Stoklasa B, Hradil Z, Sanchez-Soto L L and Rehacek J 2016 Achieving the ultimate optical resolution Optica 3 1144. 40 Yang F, Tashchilina A, Moiseev E S, Simon C and Lvovsky A I 2016 Far-field linear optical superresolution via heterodyne detection in a higher-order local oscillator mode Optica 3 1148. 41 Tham W K, Ferretti H and Steinberg A M 2017 Beating rayleigh’s curse by imaging using phase information Phys. Rev. Lett. 118 070801. Pushkina2021 Pushkina A A, Maltese G, Costa-Filho J I, Patel P, and Lvovsky A I 2021 Superresolution linear optical imaging in the far field Phys. Rev. Lett. 127 253602. Wang2021 Wang B, Xu L, Li J C, and Zhang L 2021 Quantum-limited localization and resolution in three dimensions Photon. Res. 9 1522. 42 Sorelli G, Gessner M, Walschaers M and Treps N 2021 Optimal observables and estimators for practical superresolution imaging Phys. Rev. Lett. 127 123604. 43 Sorelli G, Gessner M, Walschaers M and Treps N 2021 Moment-based superresolution: Formalism and applications Phys. Rev. A 104 033515. 44 Xie D, Xu C and Wang A M 2022 Quantum thermometry in diffraction-limited systems Phys. Rev. A 106 052407. 45 Karuseichyk I, Sorelli G, Walschaers M, Treps N and Gessner M 2022 Resolving mutually-coherent point sources of light with arbitrary statistics Phys. Rev. Res. 4 043010. 46 Datta C, Len Y L, Lukanowski K, Banaszek K and Jarzyna M 2021 Sub-rayleigh characterization of a binary source by spatially demultiplexed coherent detection Opt. Express 29 35592. 47 Agarwal G S 2012 Quantum optics (Cambridge University Press). 48 Boyer V, Marino A M, Pooser R C and Lett P D 2008 Entangled images from four-wave mixing Science 321 544. 49 Wang Y, Zhang W, Li R, Tian L and Zheng Y 2021 Generation of- 10.7 db unbiased entangled states of light Appl. Phys. Lett. 118 134001. 50 Pezze L and Smerzi A 2014 Quantum theory of phase estimation (arXiv:1411.5164) 51 Gessner M, Smerzi A and Pezze L 2019 Metrological nonlinear squeezing parameter Phys. Rev. Lett. 122 090503. 52 Sherman J and Morrison W J 1950 Adjustment of an inverse matrix corresponding to a change in one element of a given matrix Ann. Math. 21 124. 53 Hager W W 1989 Updating the inverse of a matrix SIAM review 31 221. 54 Pawley J B 2006 Fundamental limits in confocal microscopy Handbook of biological confocal microscopy 20-42.
http://arxiv.org/abs/2306.04083v1
20230607005456
Coverage Path Planning with Budget Constraints for Multiple Unmanned Ground Vehicles
[ "Vu Phi Tran", "Asanka Perera", "Matthew A. Garratt", "Kathryn Kasmarik", "Sreenatha Anavatti" ]
cs.MA
[ "cs.MA", "cs.RO" ]
Coverage Path Planning with Budget Constraints for Multiple Unmanned Ground Vehicles Vu Phi Tran, Asanka Perera, Matthew A. Garratt, Senior Member, IEEE, Kathryn Kasmarik, Senior Member, IEEE, Sreenatha Anavatti July 31, 2023 ====================================================================================================================================== This paper proposes an innovative approach to coverage path planning and obstacle avoidance for multiple Unmanned Ground Vehicles (UGVs) in a changing environment, taking into account constraints on the time, path length, number of UGVs and obstacles. Our approach leverages deformable virtual leader-follower formations to enable UGVs to adapt their formation based on both planned and real-time sensor data. A hierarchical block algorithm is employed to identify areas in the environment where UGV formations can spread out to meet time and budget constraints. Additionally, we introduce a novel control scheme that allows each UGV to generate a local steering force to dodge any static and mobile obstacles based on the closest safe angle. Results from simulations and real UGV experiments demonstrate that our approach achieves a higher coverage percentage than rule-based and reactive swarming approaches without planning. Our approach offers a promising solution for efficient coverage path planning and obstacle avoidance in complex environments with multiple UGVs. Coverage Path Planning, Spanning Tree Coverage, Optimisation Technique, Formation Control, Obstacle Avoidance, Autonomous Vehicles. § INTRODUCTION Intelligent transportation is gaining popularity with an increase in the number of practical applications <cit.>. This is particularly true for autonomous systems, such as unmanned ground vehicles (UGVs). UGVs have different applications, including coverage of a given area. In particular, when a number of UGVs/agents are employed to cover a given area, the control systems need to be intelligent to achieve the mission while overcoming the obstacles, both static as well as dynamic. The coverage path planning problem is a task wherein a UGV or UGVs, possessing a complete geometric description of the area of interest, generates an efficient coverage path to visit every point in a given area while avoiding all possible obstacles <cit.>. Various technological developments and advancements in sensor technology, navigational, communication, and computational systems have facilitated the rapid growth in the use of coverage path planning (CPP) methods to assist UGVs in performing many specific applications, ranging from humanitarian missions such as surveillance, search and rescue tasks, to military operations such as surveillance <cit.>, environmental monitoring <cit.>, and civilian applications such as area cleaning, seeding or harvesting <cit.>, and mapping and model reconstruction <cit.>. In recent years, the literature has discussed several approaches for coverage by a single vehicle <cit.>. However, real-world factors such as battery capacity or sensor payload restrictions <cit.> may limit the ability of a single agent to meet an operational time limit. Compared to single-vehicle CPP, a group of multiple vehicles may solve a coverage task more rapidly due to its larger footprint. Yet, to exploit the capacity of a multi-vehicle team, novel algorithms are required to determine the route for each vehicle when they can spread out and take on a narrow formation. Motivated by the aforementioned observations, the contributions of the present paper are: * A novel problem definition for non-backtracking coverage path planning with budget constraints assuming the use of a multi-UGV team in cluttered and uncertain environments. * A novel algorithm is presented for solving the problem, which utilizes a hierarchical block approach to decompose a given map into appropriate cell sizes. This allows us to exploit flexible multi-UGV formations to meet multiple budget constraints. * A distributed virtual leader-follower formation control strategy including automatic role assignment in the formation and obstacle avoidance. * A comprehensive comparative study in both simulated and real-world settings to confirm the viability of our approach. Our approach outperforms existing methods in terms of maximum coverage percentage, time to achieve coverage and computational complexity. The virtual leader-follower control approach taken in this paper is based on the virtual spring system <cit.>. The virtual leader-follower approach offers several advantages over the traditional leader-follower and swarm-based methods in coverage path planning. Firstly, it eliminates the need for a physical leader, which can be costly and risky to implement. Secondly, it allows for the efficient coordination of multiple followers without the risk of collisions or formation breakage, which is a common issue with swarm-based approaches. Thirdly, a virtual leader can easily adapt to changes in the environment, dynamically adjust the path plan, and provide more accurate and reliable instructions to the followers. This is in contrast to the real leader-follower system, which may not be able to respond quickly enough to changes in the environment <cit.>. Finally, the virtual leader-follower approach offers more flexibility and scalability, enabling the coordination of a large number of followers without requiring additional resources. The remainder of this paper is organised as follows. Section II discusses related work from the literature. Section III states our coverage path planning problem definition and describes our approach to solving this problem. Our approach has components for path planning and prediction of how long it will take to follow a path in formation. Section IV presents a series of experiments with each of these components, first in simulations then on real UGVs in an outdoor setting. We offer conclusions and directions for future work in Section V. § LITERATURE REVIEW Various techniques for area coverage exist, including Voronoi-based <cit.>, graph-based <cit.>, next best view <cit.>, frontier-based <cit.>, and spanning tree coverage methods <cit.>. While some techniques provide incomplete coverage, others like spanning tree coverage guarantee complete coverage by generating a non-overlapping path along the spanning tree. In this study, we employ the spanning tree method for multi-unmanned ground vehicle (UGV) settings where several UGVs are used to cover an area. Developing Multi-Cell Path Planning (MCPP) strategies for real-world scenarios is a challenging task that requires considering additional factors and requirements. Previous studies on MCPP have mainly focused on obstacle-free spaces <cit.> or single cover types <cit.>, which may not apply to many real-world scenarios. Only a block data structured method <cit.> has been developed so far to obtain a comprehensive and safe area coverage path for a team of vehicles, using the concepts of a contour map, connected graph, and spanning tree. However, this method may not scale well to large-scale multi-vehicle systems due to the tiny cells generated around obstacle boundaries. Additionally, no existing methods focus on dynamic environments with non-stationary obstacles, which are more common in practice. Therefore, detecting and avoiding moving obstacles correctly and efficiently is crucial. Most MCPP algorithms rely on centralized control, which can lead to communication overhead and coordination difficulties as the number of agents increases <cit.>. To address this, a distributed approach can be used to improve coordination and communication among vehicles, generate more efficient coverage paths, and better adapt to changes in the environment. Additionally, a distributed approach provides greater robustness to failures or disturbances (e.g., the loss of a vehicle, changes in the environment, or communication failures) by allowing the system to reconfigure dynamically <cit.>. Furthermore, most MCPP algorithms, such as multi-robot forest coverage <cit.> and spiral spanning tree coverage <cit.>, assume a desire for 100% area coverage without recharging or refilling robots <cit.>. However, in real-world applications, many physical constraints need to be considered, such as limited exploration time, travelled path length, restricted access to some areas, and different types of coverage required. Therefore, the state-of-the-art MCPP strategies compute plans that lead to imperfect coverage but are considerably more energy/time-efficient <cit.>. Additionally, complete coverage may not even be feasible when areas to be covered have impeded or hidden components, further highlighting the need for partial coverage <cit.>. Several solutions have applied multi-objective evolutionary techniques in path planning to balance coverage and energy/time <cit.>. However, such multi-objective approaches cannot provide a satisfactory solution when multiple objectives are considered, as in many real-world scenarios. To the best of our knowledge, optimizing the grid cell size, a key parameter in the MCPP problem with constraints, while meeting all specific goals and limitations, has not been extensively examined. Additionally, for large-scale workspaces, multi-objective evolutionary optimization techniques are often time-consuming due to offline training. Therefore, the presented methods are not appropriate for a dynamic setting, where real-time decisions must be made <cit.>. A simpler alternative with very low computational complexity must be taken into account. Moreover, controlling a flexible leader-follower formation to track a pre-defined path is a relatively unexplored approach compared to allocating a specific area to each vehicle. However, this approach has significant benefits which we wish to exploit. By staying close together in a re-configurable formation, the robots can communicate and use visual relative positioning, which can be crucial in scenarios where GPS signals are degraded or denied. The leader-follower approach also allows the vehicles to adapt to changes in the environment or mission requirements, which is not possible with fixed area allocation. This approach also enables the formation to be more robust to failures or disturbances, as the leader can quickly adjust the formation to account for the loss of a vehicle, making the system more resilient <cit.>. Overall, the MCPP problem is a complex optimization problem that requires consideration of multiple factors and constraints, including area coverage, energy/time efficiency, communication overhead, robustness, obstacles, and uncertainty. While there have been many advancements in this field, there is still much work to be done to develop practical and effective MCPP strategies that can be applied to a wide range of real-world scenarios. In the next section, we formulate the problem addressed in this paper and present our solution. § PROBLEM DEFINITION AND ALGORITHM In this section, we begin in Part A by describing the MCPP problem under the physical limits we address in this paper. We provide our algorithmic solution in Part B, followed by a complexity analysis in Part C. §.§ Problem Formulation The simple unicycle model captures the trade-off between linear and angular velocity control for small unmanned ground vehicles (UGVs) <cit.> and forms the basis of the UGV simulation used in this paper. We denote q as the UGV state comprising its position and heading, whilst u is the velocity command vector: q=[x y θ]^T ∈ℝ^3, u = [v ω]^T ∈ℝ^2. We will refer to two components of the state q: q_p ≜ [x y]^T ∈ℝ^2, the position, and q_θ≜θ∈ℝ the yaw angle. The coordinates x and y capture the position of the center of gravity of the vehicle while θ is the angle of the wheel with respect to the x-axis of the reference coordinate frame. The control variable v is the forward velocity of the vehicle (along its body fixed frame x-axis) while ω is the rate of change of the yaw angle. The following equations capture the unicycle model with the known initial conditions x_0, y_0 and θ_0: ẋ = v cos(θ), x(0) = x_0, ẏ = v cos(θ), y(0) = y_0, θ̇ = ω, θ(0) = θ_0. We assume the environment's shape and obstacles are known. Consider an environment 𝒞 to be explored by a formation of n_q vehicles, 𝒬 = {q_1,..., q_n_q}. We choose a tessellated environment of grid cells of width C_W and height C_H such that 𝒞 = {c_ij | i = 1,…,𝒞_W, j = 1,…,𝒞_H}. Given a discretised map 𝒞 with a set of known obstacles 𝒪, a leader-follower formation with initial configuration Q^0 ∈𝒞_free and 𝒞_free≜𝒞∖𝒪, a time budget of T_max steps, (and/or a UGV path length budget of L_max) and a set of observed cells at time k, σ^k := σ^k-1∪ζ^k, where ζ^k = ∪_i=1^n_q(ζ_i^k) is the set of new cells covered by all UGVs' total observation area at the time k. Calculate a plan 𝒫^T := Q^0,..., Q^k,..., Q^T, ∀ T ≤ T_max that does not cross any obstacles (Q^k ∈𝒞_free, ∀ k), and maximizes the coverage percentage CP with respect to cell size CS. In other words, the MCCP problem with constraints requires us to maximise: CP=σ^T/C_free + OB× 100% subject to: ∑_k=0^TQ_p^k - Q_p^k+1_2 ≤ L_max, Q_p^k - Q_p^k+1_2 ≤ v_max, ∀ k, and T ≤ T_max. where Q_p^k ∈ℝ^2 represents the coordinates of the centre of the formation. Additionally, OB is the areas occupied by the obstacle cells, L and T denote the actual path length and coverage time, respectively. v_max indicates the maximum velocity of the mobile vehicle. The nomenclature for the problem and our algorithm is summarised in Table <ref>. §.§ Algorithm for Coverage Path Planning using Flexible Formation Control This section describes our algorithm for MCPP in a known environment. We begin with an informal description supported by diagrams here, then provide algorithmic details in the following sub-sections. The algorithm is designed to produce a coverage path that maximises the area of the environment that will be visited while meeting a given time budget. We use a binary and linear search to find the smallest grid cell size (CS) that will produce a coverage path in the presence of obstacles as per (<ref>). The algorithm proceeds as follows: * The algorithm starts by choosing a grid cell size and overlaying it on the given map. For example, grid lines can be seen in grey in Fig. <ref>. * Next, a scanline algorithm is used to convert the environment geometry onto the grid as filled or free cells. The blue cells in Fig. <ref> show (filled) obstacles, while yellow cells represent (free) open space. * The open space is then decomposed into variable-sized square blocks that follow the grid lines. These blocks are colored red, aqua, and purple in Fig. <ref>. * The centre points of these blocks are joined by a minimum spanning tree, shown in black in Fig. <ref>. * Afterward, a coverage path is constructed that circumnavigates the spanning tree. This is shown as a dashed green line in Fig. <ref>. * Using the coverage path, the algorithm predicts how long coverage will take. Based on this predicted time and maximum path length, the binary search either outputs the final grid cell size or continues to dot point one above until a grid cell size is determined that will meet the budget. If the cell size increases above the observation range of the UGV, the algorithm will exit without a solution. * Once the cell size is chosen, waypoints are generated at each bend on the coverage path. These are shown as black circles in Fig. <ref> and contain data about the size of the block they sit in. * The waypoints are then sent, in order, to a group of UGVs, which use the block size data to assume a formation that will enable them to pass through the block in a way that covers all the cells within it. The group of UGVs adapt their formation in response to both planned and real-time data from their sensors. Example formations are shown in Fig. <ref>. Fig. <ref>(a) has a fine grid, while Fig. <ref>(b) has a coarse grid. The effect of the grid resolution is twofold. First, the obstacle boundaries (shown in dark purple cells) become coarser, blocking off (or de-prioritising) parts of the environment. For example, in Fig. <ref> the area behind the large obstacle is blocked off. This, in turn, has the effect that the spanning tree becomes smaller, and the coverage path is shorter. These two effects contribute to the UGVs being able to meet a lower time budget by trading-off coverage percent. Further, Figure <ref> shows the flowchart of the algorithm, which provides a visual representation of the steps described in the algorithm list. We now consider the algorithm in detail. §.§.§ Interpreting Obstacle Geometry as a Grid We assume obstacles are input to the system as a set of vertices. A scanline algorithm <cit.> is used to classify grid cells as being either an obstacle or free space. The scanline algorithm scans through the grid horizontally. Locations of the intersection between the scan lines and the obstacle edges are detected. Each obstacle is then discretized into grid cells at all intersection points and between pairs of intersection points. §.§.§ Creating Variable Sized Blocks Once the accessible areas are realized, a block-building algorithm is applied to group adjacent free grid cells into variable-sized blocks. The largest block size is chosen to accommodate the size of the group of UGVs when they are spread out. The remaining block sizes are calculated by progressively halving the largest size. We assume several window-based scans are performed from left to right and from bottom to top using each block size in turn. The largest block size is selected first, then gradually reduced to 1. If all grid cells in the scan window are free and do not belong to another block, they will be merged into a block and labelled with the block identifier. Otherwise, the scan window will skip and continue to the next position. After scanning all block sizes, a list of blocks covering the entire map is obtained. The block-building algorithm is summarised in Algorithm 1 of the Supplementary Material section. Next, the connectivity between the blocks needs to be established. Every grid cell always has four connected neighbours (top, bottom, left, right). See Fig. <ref> for an example. The white and black cells are free and obstacle cells, respectively. The black lines are the edges of the spanning tree. The green dash lines are the generated paths. Blue dash lines and yellow dots are the boundary lines and the center of block parts. If the two adjacent cells belong to two different blocks, these two blocks will be marked as connected. Meanwhile, the connection direction of the two blocks is equal to the connection direction of the two cells. However, there is also no connecting edge between the center block and the bottom left block in the MST (black line) because the top left block is closer to the bottom left block than the center block. The connection direction d ∈{TOP, LEFT, BOTTOM, RIGHT}, from cell c_ij to cell c_i^'j^' d_c_ij,c_i^'j^' is derived as: d_c_ij,c_i^'j^' = LEFT , [i-i^',j-j^']=[-1,0], RIGHT , [i-i^',j-j^']=[1,0], BOTTOM , [i-i^',j-j^']=[0,-1], TOP , [i-i^',j-j^']=[0,1]. The block to which grid cell c belongs is b_c. Assume blocks of adjacent cells c_ij and c_i^'j^' are different. The direction from the block of cell c_ij to the block of cell c_i^'j^' is given by: d_b_c_ij,b_c_i^'j^'=d_c_ij,c_i^'j^' After building a graph with the connection between blocks, the minimum spanning tree algorithm will be used to find a spanning tree. Connections that do not belong to the spanning tree will be removed. §.§.§ Minimum Spanning Tree A spanning tree is a subset of an undirected graph G(V,E) that connects all the vertices V of the graph with a minimum number of edges E. A spanning tree cannot contain cycles or disconnected nodes. However, a connected and undirected graph G may be connected with more than one spanning tree since every vertex can be connected from many directions. The cost of the spanning tree is the sum of edge weights in the tree. The most efficient spanning tree can be found by a minimum spanning tree algorithm. The MST algorithm constructs a tree including every vertex, where the sum of the weights of all the edges in the tree is minimized. Prim's algorithm <cit.> is used to construct the spanning tree by computing an edge with the least weight and adding it to the growing spanning tree. Prim's algorithm is selected for our work since it is one of the fastest algorithms in dense graphs <cit.>. §.§.§ Coverage Path Planning Algorithm For the purpose of path planning, each block is logically divided into four parts, as depicted in Fig. <ref>, and the center of each part serves as the connection point along the constructed route. The construction of the coverage path depends on the number of adjacent blocks in a given direction, which can be categorized into three cases: (1) no adjacent block, (2) one adjacent block, or (3) multiple adjacent blocks. In the first case, the centers of the two parts are simply connected. In the second case, if the adjacent block is already connected to the current block, it is ignored. Otherwise, the two parts of the current block are connected to the adjacent parts of the neighboring block in the same direction. In the third case, the adjacent blocks are sorted in the same direction, and for each adjacent pair of blocks, a joint point is generated as the intersection point between two lines. The first line connects the midpoint between the two centers of the adjacent blocks to the center of the current block, and the second line connects the center of the two adjacent parts of the current block. Fig. <ref> (d) illustrates these lines, and the relevant formulas are located in lines 16-20 of Algorithm <ref>. Finally, the adjacent parts of the adjacent blocks are connected to the generated joint points. Figs. <ref> and <ref> visually demonstrate the linking of blocks and the generation of the UGV route. Denote D_d,b as the list of neighbor blocks of the block b in the direction d, Q_c,b as the coordinate of the center of the part c ∈{TL, TR, BL, BR} of the block b, P_C,b as the coordinate of the center of the block b, P_L,b as the coordinate of the left edge of the block b, N_b as the list of neighbor blocks of the block b. The specific implementation is given in Algorithms 1 and <ref>. §.§.§ Predicting Time to Follow the Path The turnaround time T̂ and total angle difference θ̂ are estimated by: θ̂ = ∑_i=1^|P||α_p_i-α_p_i+1| T̂ = L/v + n_t*θ̂/ω, where α_p_i is the orientation of the i^th line p with respect to the x-axis. The component n_t*θ̂/ω in the T̂ equation is the leader's rotational time. The total path length L̂ and the coverage percentage ĈP̂ are defined as in (<ref>). At sharp corners, the leader only rotates around its heading; whereas, the followers modify their positions. The formation's rotational motion at the center of the formation, therefore, can be approximated as that at the leader position. §.§.§ Optimising the Grid Cell Size to Meet the Time Budget We use a binary search strategy with low memory and low complexity <cit.> to identify the appropriate grid cell size that produces a path length that can be followed within a given time budget. Binary search repeatedly divides the search interval in half of the lower m and the upper m bounds. If all estimated values are equal to the given metrics, the search is completed. The searching also stops when the lower m is greater than or equal to the upper m. Otherwise, if all estimated values of the current CS are more than the given metrics, the search narrows the interval in the upper half. Otherwise, the search recurs in the lower half. However, the maximum coverage percentage metric cannot be calculated accurately due to intermittent scanning values. According to Algorithm <ref>, the binary search begins to compute L̂ and T̂ for every chosen grid cell size. It repeatedly checks until the two target values, which are less than L̂ and T̂, are obtained. Then, the linear search resumes estimating ĈP̂ for larger subsequent cell sizes until the final objective (maximum coverage percentage) is attained within 10 next cell size values by the linear search optimisation algorithm. §.§.§ Formation Control Each of the n_q UGVs is given an identification number subscript q_0, q_1, etc. Each real UGV is assumed to be connected by a virtual spring system to its virtual leader, representing the physical interconnection between the UGVs and wireless communication. Referring to Fig. <ref>, orange lines represent the desired formation. Red circles indicate the follower UGVs, while green circles illustrate the virtual leader. Black circles are waypoints. Dashed green lines show the planned path. The black line is the spanning tree. Multiple UGVs (q_1,..., and q_5) will be controlled to move in a formation whose pattern depends on the various block sizes. q_0 is treated as the virtual leader; the remaining UGVs are followers. Once virtual UGV q_0 and UGV q_i (i ≥ 1) are linked together through virtual springs (VSs), spring forces are generated between them. Based on the desired natural length vector l_o and actual length vector l_a of each VS, the common control rules of the VS method can be set. The degree of difficulty in assigning a specific character for a UGV in the formation is defined as a character cost. The cost function equation is expressed as: ϕ_ij = ω_cx |l_a_0ix| + ω_cy |l_a_0iy| + ω_cθ |l_a_0iθ|, ∀ i, j ≥ 1 where ϕ_ij is the cost of the UGV i to obtain the j^th goal role. l_a_0ix and l_a_0iy are the relative positions from the actual UGV position to the virtual leader position along the x and y axes. l_a_0iθ is the angular difference between the initial direction and the target direction. ω_cx, ω_cy, and ω_cθ are weight constants. After all costs for each character in the formation are calculated, the UGV with the maximum cost will be assigned a corresponding role in fo, as shown in Fig. <ref>(a) and the following equation: fo_j = i , if ϕ_ij > ϕ_hj,∀ i ≠ h, i & h ≤ n_q, ∀ fo_k > i, k ≠ j. A relative position-based formation control method is derived here to maintain the desired formation shape for networked UGVs. The natural lengths of the springs l_0 = {l_01,...,l_0n_q} are set according to the geometrical requirements for the desired formations. Depending on the block size (BS), there are three formation shapes: the first is V-shaped (V-formation), the second is the U-shaped U-formation, and the third is a queuing (line) formation (Q-formation). The V-formation or U-formation is selected when the BS is greater than one. For a block size of 1, the Q-formation is used. Using the BS information and the formation type, the desired relative positions for each follower with respect to the virtual leader are computed to generate an obstacle avoidance formation, as described in Algorithm 4 of the Supplementary Material section. The control input to each ground vehicle is the resultant of the force vector generated by the virtual spring pairs connected to the UGVs. §.§.§ Path Tracking Algorithm After the leader UGV's coverage trajectory is computed, an online path planner outputs a series of way points wp = wp_0,..., wp_k,..., wp_n_w around the trajectory for the UGV navigation, where n_w denotes the number of way points. Further, each UGV in the formation knows the remaining UGVs' position information. Using the virtual leader strategy, the spanning-tree path is shared among UGVs after the calculation process is completed. Based on the virtual vehicle tracking error of follower 1 (the closest follower, named fo_1 ∈ fo), the proposed virtual velocity for the virtual vehicle v_q_0 is: v_q_0 = v (1-min(max(|l_a_01|-l_q_0/l_q_0-l_q_0,0),1)), where |l_a_01| represents the distance between the virtual leader and follower 1, l_q_0, l_q_0 represent the maximum and minimum distance between the virtual leader and follower 1 to reach the minimum and maximum speed, respectively. After every time step, the next way point wp_k+1 is produced: wp_k+1 := wp_k+v_q_0 dt, where dt is the sample time. Based on the Euler distance between the leader and the goal, the goal following force vector F_g acting on the leader is derived as: F_g = ω_g*(wp_k+1-q_p_0), where ω_g is the attractive force weight. The proposed formation control approach is distributed to each UGV to guarantee that if any UGV fails the rest can adopt new roles and continue to track the virtual leader. §.§.§ Closest-Safe-Angle-Based Obstacle Avoidance In order to prevent further collisions, a LiDAR sensor system is equipped on each UGV to measure distances (d_o) and angles (α) from any surfaces. It works by emitting pulsed light waves in all directions and measuring how long it takes for them to bounce back off surrounding objects. To be convenient for further computation, the LiDAR information chain is converted to Cartesian coordinates as follows: α_ι = ι ϕ + θ, ∀ 0≤α_ι≤ 360, p^ι_o = (x^ι_o,y^ι_o) = (d^ι_o cosα_ι,d^ι_o sinα_ι), where ϕ denotes an angular distance between measurements while θ illustrates the UGV heading and ι stands for the measurement step. p^ι_o = (x^ι_o,y^ι_o) is the obstacle position at the ι^th scan time. Additionally, d^ι_o is the ι^th relative distance measured at the ι^th α scanning angle. When any of the UGVs or obstacles move or violate the avoidance radius R_av, the UGV's angular width causing possible collisions called the blocked angles α_b, can be estimated and incorporated into the planner. To guarantee safe passage, the UGV is steered towards angles that are not blocked, called safe angles α_s. A discrete list of all possible heading angles between 0 and 2π is generated based on the angle increment ϕ. For example, if ϕ = π/180, the list's size n will be 360. The algorithm then searches for all safe angles in the list: 0 ≤α^ι_s < 2 π, α^ι_s ∈α 1 ≤ι≤ n. We define a collection of obstacle cells that need to be avoided: P' = P^k_o: 1 ≤ k ≤ m, the UGV circle's radius R, the UGV centre O, the obstacle circle's radius R_p, the P^k_o centre C_k, and the relative angle η^k between the centre line OC_k and the X-Axis. The two tangent lines between the circle of the obstacle P^k_o and the UGV circle are constructed as shown in Fig. <ref>. Next, a set of two symmetric k^th blocked angles [-β^k,β^k] with respect to the centre line OC_k can be expressed as: [-β^k;β^k] = [-arcsinR+R_p/OC_k,arcsinR+R_p/OC_k]. All global heading angles α located in the β angle's width are treated as blocked angles for the relevant UGV to transverse. The collection of angles blocked by the obstacle P^k_o, named Δ_k, can be defined as: Δ_k:{α_ι: ≤α_ι≤η^k+β^k,α_ι∈α}. A complete list of the blocked angles Δ is obtained from: Δ = ∪^k=1_mΔ_k. Now, a list of safe angles S can be established by excluding the list D from the list α: S = α - Δ. If S = ∅, the UGV would halt ( V = 0 ) until any safe angle is scanned. Otherwise, the safe angle closest to the target angle α_t is selected as follows: α_r = min(|α_ι - α_t|), α_ι∈ S. The outcome of our obstacle avoidance strategy is the UGV's desired minimum orientation α_r. If no obstacles are predicted, the virtual force vector (v_x, v_y) is fused from all force components (target force and spring force). Otherwise, this vector is turned towards the angle α_r in order to generate a new avoiding vector v_av as in (<ref>). v_av = [ v_avx; v_avy ]^T = [ v_x cos(α_r) - v_y sin(α_r); v_x sin(α_r) + v_y cos(α_r) ]^T, Based on (<ref>), the control input of each UGV is computed as: u = [ v; ω ] = [ v = |v_av|,; ω = atan.(v_av), ] §.§ Complexity Analysis This section discusses the time complexity of the algorithm components mentioned above. Dynamic formation implementations demonstrate a worst-case time complexity of O(n_q^2) where n_q stands for the number of mobile UGVs. This is because each physical UGV must inspect its position relative to the virtual leader and then transfer the relative position information to every other UGV to determine its role in the formation. The other significant element of our approach is the exchange of coverage matrices between UGVs. On receipt of such data from another UGV, each UGV must compare C_W × C_H cells to update its coverage and obstacle matrices. This process delivers time complexity O(C_WC_H). Hence, the worst-case time complexity would be O(n_q^2) + O(C_WC_H) if all agents exchange their coverage matrices and their own relative position information with all other agents. The algorithm is theoretically scalable to large formations and environments due to no exponential terms. In our case where there is a small number of UGVs, O(C_WC_H) is the dominant term. § EXPERIMENTS This section presents a range of experiments designed to evaluate the performance of our planning and prediction engine in various scenarios. In Section IV.A, we demonstrate the engine's accuracy in predicting execution time by conducting an experiment with a simulated UGV. This experiment shows that the engine can efficiently find a suitable coverage path plan that meets a given time budget in only a short time. In Section IV.B, we conduct a series of comparative experiments using our novel coverage path planning with formation control (CPPF), the coverage path planning with swarming, and the frontier-led swarming <cit.> on simulated Jackal UGVs. These experiments investigate the impact of UGV speed and time budget on path following performance while maintaining the group and order of the UGV formation. Section IV.C compares the multi-robot coverage path planning performance of CPPF and another method that uses a quadtree data structure without physical limits <cit.>. This comparison provides insights into the strengths and weaknesses of each approach to the optimal coverage path planning problem in cluttered environments. In the simulated experiments, we used the same settings (number of agents, maximum exploration time, path length, environmental setups, and initial locations) to guarantee a fair comparison between algorithms. In Section IV.D, we describe experiments conducted on real Jackal UGVs in outdoor environments. These experiments aim to evaluate the engine's performance in real-world conditions. Finally, we summarize the experiments in Section IV.E, providing an overview of the findings and their implications. §.§ Experiment 1: Characterising Prediction Performance §.§.§ Experiment 1 Setup To verify the effectiveness of the prediction engine, a simulated, cluttered map of 25m×25m is used (see Fig. <ref>). Four static and complex-shaped obstacles (e.g., star, U-shaped, cylinder, and castle-shaped) were placed randomly. Cell sizes ranging from 0.75m to 3m are used to generate coverage paths and time-to-follow predictions. A single simulated UGV was then permitted to follow the path and the time taken compared to the prediction. A single small Turtlebot UGV was used in these experiments so that we could examine the approximate path following performance without the complexity of formation control. The UGV was equipped with a 360-degree LiDAR sensor for reactive obstacle avoidance while path following. The forward speed v of the UGV was initialised to 0.14m/s. The maximum turn rate when maneuvering was set to 0.7rad/s. Other parameters of this experiment are summarised in Table <ref>. The i7-1260P CPU, Ubuntu Focal (20.04), ROS Noetic framework, and Python 3.8 are used to program and implement the planning approaches. The dynamic behavior of all vehicles was simulated in Gazebo. The sampling time of the whole system is set at 30Hz. §.§.§ Experiment 1 Performance Metrics Three criteria are chosen to evaluate the prediction performance of the proposed algorithms. They are: * PLD: Difference between the predicted and actual coverage path length, * TTD: Difference between the predicted and actual turnaround time (where turnaround time is defined as the time to complete following the path), * CPD: Difference between the predicted and actual coverage percentage. §.§.§ Experiment 1 Prediction results Tables <ref>-<ref> show the results from the prediction module versus the Gazebo path following simulation. PLD was under 25m and TTD under 300s. Our observation of the UGV's movement revealed that the need to slow down to perform turns was the main factor causing the error between predicted and actual performance. Fig. <ref> shows two examples of the predicted paths for different grid cell sizes. We can see that the larger grid cell size causes a coarser boundary around obstacles (shown in purple around the blue obstacles). Fig. <ref> shows that this has the effect of reducing the total area that will be covered by the UGVs when the grid cell size is very large. Thus, while CPD was 0%, the actual coverage percent reduces with increasing grid cell size. This has the effect of meeting a tighter budget. We also observed that the lower the grid cell size, the greater the computation time to make a prediction. For example, the computation time for the 0.75m cell size is approximately 6s, while that for cell size above 1.5m does not go above 1.5s. §.§ Experiments 2-4: Characterising Path Following Performance In this section, we study the efficacy of our formation control system (denoted CPPF) in following a planned coverage path. We examine the impact of the top speed of the UGVs and the time budget on performance. We include three comparative algorithms: the first (denoted CPPS and shown in Algorithm <ref>) substitutes the leader-follower flexible formation strategy with a rule-based swarm strategy. A swarm can adapt its shape reactively to wider or narrower parts of the environment but does not require a leader. The second comparative approach (denoted FS <cit.>) uses a reactive, frontier-led swarming strategy with no planning. §.§.§ Experiments 2-4 Simulation Setup The simulation environments in these experiments are implemented on the same PC running Ubuntu, ROS Noetic, and Gazebo's Jackal models as in Experiment 1. The simulated environment is a replica of the University of New South Wales Canberra campus, shown in Fig. <ref>, where light blue objects represent the real static obstacles. A UGV team comprising 5 Jackal mobile UGVs is used. Each experiment is repeated 5 times. Mean and 95% confidence intervals are reported where appropriate. The P-Value from the Student T-test is used to distinguish the performance of different algorithms. If the variance of means between two sets is less than the expected P value of 0.05 (i.e., 5%), we assume the results are statistically significant. The initial locations of the five vehicles are varied after every trial and given in Table <ref>: Each UGV's maximum linear speed is 2.0m/s, and the maximum angular velocity is set to ±1.5rad/s. Table <ref> summarises all parameter settings for the simulation experiments. The maximum coverage path length and coverage time (L_max,T_max) are 3500m and 30000s. §.§.§ Experiments 2-4 Performance Metrics In this series of experiments, the following metrics are examined: * Coverage percentage (CP), the percentage of the environment visited by the UGVs * Path length (PL), the length of the coverage path * Turnaround time (TT), the time to follow the coverage path (or achieve 100% coverage in the case of the FS algorithm) * Coverage redundancy (CR). Coverage redundancy (backtracking over areas already covered) can be calculated using (<ref>) where C_repeated is the average number of all repeated coverage cells. * Group (G), how close together the UGVs are, defined as (<ref>) where N_s is the number of swarming UGVs, N_t = 5 is the number of trials. p̅_a is the average position of the involved UGVs at the given time. We evaluate the group and order every 500 steps, as an average over the proceeding 150-time steps. * Order (O), how well-aligned UGVs are in terms of both speed and direction as defined in <ref>) where ε̅_a is the average velocity of the involved UGVs at the given time. We also evaluate ordering every 500 steps as an average over the proceeding 150-time steps. CR = |C_repeated|/|C_free|× 100%, G = ∑_i=0^N_t∑_t=T_0^TT∑_i=1^N_s ||p^i - p̅_a||_2/TT-t/N_s N_t, O = ∑_i=0^N_t∑_t=T_0^TT∑_i=1^N_s ||ε^i - ε̅_a||_2/TT-t/N_s N_t, §.§.§ Experiment 2 Group and Order results First, we examine the ability of all three algorithms to keep the UGVs together and heading in the same direction and at the same speed. Fig. <ref> indicates that all the approaches maintain a reasonable level of grouping and ordering relative to the search space size. The FS approach maintains a tighter formation than the CPPF and CPPS approaches in all experimental settings. The difference in group metrics is statistically significant at the 95% confidence level. This is likely due to the influence of the block-size switches. However, grouping and order are still maintained by the CPPF and CPPS methods, and the group and order metrics do not change significantly over time. The order metrics are similar for all three approaches. This makes sense because the swarming and formation approaches should both encourage ordering. §.§.§ Experiment 3 Coverage performances with three different speed caps Three maximum vehicle speeds are set as 0.45m/s (low), 0.7m/s (medium), 0.95m/s (high) in this experiment. This permits us to examine whether high speeds cause the UGVs to miss covering parts of the environment. Based on the default coverage path length budget, turnaround time budget, and the maximum linear and rotation speeds, the optimisation algorithm proposes a grid cell size of 2.25m that should result in a coverage path with L̂ of 2007.34m and T̂ of 5039.04s for the low speed; 3445.92s for the medium speed and 2991.28s for the high speed. ĈP̂ should be 100%. The LiDAR sensor's observation range is 4m. Fig. <ref> shows the turnaround time, path length, coverage percentage, and coverage redundancy metrics. As expected, the higher the maximum vehicle speed, the shorter the turnaround time is. Using CPPF and FS, the TTD is below 300s, while TTD for CPPS is approximately double. CPPF and FS strategies were also able to cover the whole region (78.19±0.00% direct CP plus 21.81±0.00% indirect CP), whereas the CPPS achieved an incomplete coverage percent (77.37±0.32% for the direct coverage and 21.99±0.00% for the indirect coverage). This is because CPPS uses an organic formation control strategy that is unable to backtrack. CPPF has a PLD of under 90m, while FS has the largest PLD. This is because FS (which does not include a planning component) often needs to backtrack to cover a missed area. The differences in CR for the comparative algorithms are insignificant at the 95% confidence level, except in the high-speed case, where the FS produces the highest redundant coverage percentage, namely, 2.1±0.06%. While the path planning-based coverage methods generate the non-backtracking paths, the frontier search technique guides the physical UGVs to track frontier cells, leading to an increase in back-tracking the covered areas. §.§.§ Experiment 4 Performance with different time and path length budgets All algorithms were also run with three different time and path length budgets: (1) high budgets of [4000s, 2500m] (more than enough time for all algorithms to achieve 100% coverage), (2) tight budgets of [2800s, 1800m] (barely enough time for one of the algorithms to finish), (3) very tight budgets of [1200s, 800m] (no algorithm will be able to achieve optimal solution-approximately 77.15% map covered). The planner produces the paths shown in Fig. <ref> for each of these cases. In these figures, blue polygons show the real obstacle positions. Purple cells indicate regions denoted as obstacles as a result of different grid cell size recommendations. Yellow cells represent open areas. All UGVs' maximum speed in all experiments is set at 0.7m/s. In general, Fig. <ref> demonstrates that the CPPF strategy presents minimal offsets from the desired coverage parameters (such as the maximum coverage percentage, total path length, and coverage time) in both cases. The FS and CPPF exhibit quite similar metrics for the very tight and high budgets scenarios. In contrast, the FS has a superior TT figure for the tight-budget case, with the lowest PL, and CR values. This difference is statistically significant at the 95% confidence level. Moreover, there are no significant differences in the TT and CR parameters when high budgets are allowed. As shown in Fig. <ref>, it is interesting to see that both strategies only achieve partial (direct) coverage in the tight and very tight-budget scenarios because all obstacle boundaries are not observed by the LiDAR sensor (its observation range is smaller than the grid cell size). They produce the same direct CPs of only 45±0.00% and 68.46±0.00%. However, they generate full coverage, including direct and indirect coverage, when a high budget is employed. The CP metric of the CFFS is 0.2% lower than those of the CPPF and FS. The reason is that the higher the desired budget is set, the smaller the grid cell size is. In the high-budget case, the grid cell size of 2.25m is significantly less than LiDAR's observation range of 4m. §.§ Experiment 5: Comparison with a Quadtree Data Structure The spanning tree produced by the multi-resolution block structure used in CPPF can have a significant impact on coverage results. To assess this impact, we compared the performance of MCPP in CPPF with that of a conventional homogeneous quadtree algorithm, where all blocks are of uniform size (0.45m), using a 25m × 25m grid map containing oddly shaped obstacles as illustrated in Fig. <ref>. Complex-shaped obstacles can cause a block to have multiple neighbor blocks in the same direction, resulting in incomplete coverage routes (green line) and discontinuous minimum spanning trees (black line) in the quadtree method. However, our MCPP algorithm resolves this issue in the third case of Algorithm <ref> (Lines 12-24). In this case, when obstacle cells exist between two adjacent blocks, the joint point still satisfies the condition of being within the intersection region between the considered block and the triangle formed by the three centers of the considered block and the two adjacent blocks (see Fig. <ref>). As observed in Figs. <ref> and <ref>, blocks B2 and B3 are located on top of block B1, as determined by the MST. The quadtree algorithm is then applied to link the two upper parts of B1 with the two lower parts of B2, thereby precluding the possibility of establishing an additional connection with B3. The disruption of the connection between B1 and B3 would result in the loss of the remaining branch of the MST. In the absence of obstacles on the map, the minimum spanning tree will exhibit, at most, a single connection between any two blocks in any given direction, and the seamless continuity of the path is ensured by the quadtree algorithm. Our proposed algorithm establishes a path that connects B1 to both B2 and B3, resulting in a path that guarantees that all parts of the MST are followed to form the robot path. Furthermore, although all estimates, such as maximum coverage percentage, total path length, and coverage time, can be well achieved by both methods when combined with our formation control solution, as demonstrated in Experiments 1-4, the coverage percentage obtained by running the quadtree algorithm is lower than our spanning tree method with modified rules. This is because the multi-robot team cannot move through the blocks, such as the eleven yellow blocks shown in Fig. <ref>, to update the coverage information due to the absence of the coverage paths. §.§ Experiment 6: Outdoor experiments using Jackals and DGPS positioning §.§.§ Experiment 6 Outdoor Setup To verify the effectiveness of the algorithms on real UGVs, a 15.25m × 24.4m outdoor setup was used involving several arbitrary-shaped obstacles (see Fig. <ref>) and a team of three Jackal UGVs. There were three significant obstacles: a hut, a shipping container, and a piece of equipment covered with a metal mesh cage. Several steel posts were on the terrain, each with a Y cross-section. The diameter of a post (assuming a circular cross-section) was roughly 6cm. In addition to these fixed obstacles, a no-go zone for the UGVs was included to prevent the UGVs from crossing over known rabbit burrows. The Jackal UGVs used in our experiments are fitted with a differential GPS (DGPS) rover module, Inertial Measurement Unit (IMU) and a SICK LMS-111 LiDAR. The LiDAR's observation range is 4m. Incoming GPS rover signals and the internal IMU sensor were read at a rate of 10Hz. The DGPS base station was stationary and transmits DGPS corrections to the rovers. The sampling time of the whole system was set at 30Hz. The role of the ROS Master is to enable individual ROS nodes to locate one another. Unlike the traditional implementations that run the ROS Master on only a single base station, the ROS Master was implemented on each UGV to facilitate ROS communications and improve the system's robustness. Five trials were conducted. The problem constraints given were the maximum path length L_max of 100m, and maximum coverage time T_max of 600s. For each test, the UGVs were initially setup in a V-formation close to the bottom-left corner at GPS-measured coordinates of [9.53, 15.58, 0]m, with the formation facing down towards the bottom of the map. Some slight variations in initial UGV locations were introduced at the start of each test. The initial locations of the three vehicles are given in the columns of (<ref>). The virtual leader's forward speed v is initialised to 0.2m/s. The maximum turn rate when maneuvering is set to 0.73rad/s. Other parameters of our experiments are summarised in Table <ref>. q_p_i(0) = [ 9.23 10.01 8.07; 15.69 18.4 18.32; 0 0 0 ] §.§.§ Experiment 6 Performance metrics The metrics used in this experiment were: (1) difference between the actual and predicted path length (PLD); (2) difference between the actual and predicted turnaround time (TTD); (3) Coverage percentage (CP); (4) coverage redundancy (CR); (5) group (G); and (6) order (O). §.§.§ Experiment 6 Results The experimental test performed can be viewed in the following videos: <https://youtu.be/z4kK6OnXXg8>. The prediction engine took 2s to produce the recommended path shown in Fig. <ref>. It recommends a grid cell size of 3.05m, and predicts the path length L̂ of 96.86m, coverage time T̂ of 575.04s, and coverage percentage CP of 100%. As shown in the video, the map is entirely covered (namely 90±0.00% for the direct coverage and 10±0.00% for the indirect coverage) after 586.17±1.2s (mean over five tests ± standard deviation). The travelled path length is approximately 103.03±0.9m. These results reflect a PLD of 6.17m and TTD of 11.13s. These results are slightly higher than those from our simulations. We observed this is due to the variations in the initial UGV position among five tests and the actual tracking errors caused by the terrain's uneven round with thick grass and the DGPS and IMU sensor errors. On the other hand, the advantage of using the spanning tree to design the complete coverage paths can be seen through the low CR of 7.33±1.41 when the back-tracking routes are eliminated, and only several cells at the corners are covered repeatedly. As shown in Fig. <ref>, there are slight fluctuations in the actual paths. This is due to three factors. First, we measured GPS static position errors of 0.08m that contribute to some deviations in movement. Secondly, as the robots switch between V, U and queuing formations, there is some distortion in the paths. Finally, inter-UGV collision avoidance exerts influence on the robots causing deviations from the path. However, under the control of our formation and role assignment strategies, three Jackal follower UGVs switch positions and effectively form the desired formations when tracking the virtual leader's motion and avoiding obstacles along the designed coverage path. The UGV behaviors exposed in the real-time environment are similar to those obtained in simulations. As a result, the proposed methods yield reasonable G and O figures, although there are short periods where there is high variance in the grouping. This occurs when one UGV strays away from the others (e.g. as a result of a brief increase in GPS error). The system self-corrects when GPS is re-acquired. Additionally, there are no significant variations of these two variables over time (see Fig. <ref>). Besides stationary obstacles, dynamic obstacles such as humans can move unpredictably and obstruct a flock's movement along a planned path. However, by combining the MCPP method with the closest safe angle-based obstacle avoidance, our robot team was able to successfully navigate through a challenging environment without any communication. In a video demonstration (available at <https://youtu.be/UjihyOj-VCU>), the relevant UGVs quickly adjusted their path when the human obstacle violated their obstacle avoidance radius, safely steering towards the nearest safe area and then resuming their intended path. Furthermore, during formation switches, the UGVs were able to avoid mutual collisions. This work addresses the limitations of existing coverage path planning methods by incorporating obstacle avoidance techniques, which enable robots to navigate safely in a dynamic environment <cit.>. §.§ Summary of Experiments To validate the effectiveness and efficiency of our algorithm for multi-agent systems, we conducted a comprehensive set of experiments in both simulated and real-world environments, with five trials for each experiment. The obtained results showed that our algorithm yielded the best coverage accuracy rate without collisions and a coverage redundancy rate of only 2.7% when all physical limits were met. These results demonstrate our algorithm's high accuracy and efficiency in ideal conditions. We encountered some limitations in the real-time field tests due to hardware and environmental factors and mobile obstacles. However, even under these challenging conditions, we still achieved similar results to those obtained in the simulated experiments. This indicates that our algorithm is robust and can perform and adapt well to changing real-world scenarios. The significance of these results lies in the potential applications of multi-agent systems, such as search and rescue operations or environmental monitoring. Our algorithm provides an efficient and reliable method for coordinating multiple agents to explore a dynamically changing area with minimal overlap and using minimal resources. § CONCLUSION This paper has described a novel approach to MCPP in a dynamic environment. The CPPF algorithm produces a coverage path that maximises the area of the environment that will be visited while meeting given time and path length budgets. Further, it permits the detection of unknown obstacles simultaneously with the steering of the mobile robot to avoid collisions. We demonstrated this algorithm on simulated and real Jackal UGVs. We provided a range of statistics showing the performance of the prediction engine and the path-following algorithms. We saw that: We saw that: * The planner is able to recommend suitable grid cell sizes to meet given time and path length budgets within approximately 10s. * Path length predictions are reasonably accurate for both simulated and real UGVs. The longer the necessary path, the lower the path length prediction error, with as little as 0.4% error on a 1km path. * Turnaround time predictions range in accuracy from a few seconds over 100 metres to up to 5 minutes discrepancy on a 1km path. * Formation-based path following achieves comparable coverage performance to a state-of-the-art reactive swarming approach but offers the guarantee of a time and path length prediction. * It is feasible to use the algorithm on real UGVs in an outdoor setting. The possibilities for future work in this area are rich. The proposed method constructs the optimal coverage path for every UGV using the MST such that the union of all paths generates a full coverage of the terrain. However, these paths are static, and the coverage is performed in static environments where the obstacles do not move. In future work, we will use a rapidly-exploring random tree (RRT) algorithm for local path re-planning to update the current path to achieve mobile obstacle avoidance in dynamic environments. § ACKNOWLEDGEMENT This work was supported by the Australian Defence Science and Technology Group (DSTG) under grant No. 9729. unsrt
http://arxiv.org/abs/2306.02172v1
20230603183239
On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms
[ "Chandra Chekuri", "Manuel R. Torres" ]
cs.DS
[ "cs.DS" ]
indent=1.1em On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms Chandra ChekuriDept. of Computer Science, Univ. of Illinois, Urbana-Champaign, Urbana, IL 61801. [email protected]. Supported in part by NSF grant CCF-1910149. Manuel R. TorresDept. of Computer Science, Univ. of Illinois, Urbana-Champaign, Urbana, IL 61801. [email protected]. Supported in part by fellowships from NSF and the Sloan Foundation, and NSF grant CCF-1910149. ======================================================================================================================================================================================================================================================================================================================================================================================================== Dense subgraph discovery is an important problem in graph mining and network analysis with several applications. Two canonical problems here are to find a maxcore (subgraph of maximum min degree) and to find a densest subgraph (subgraph of maximum average degree). Both of these problems can be solved in polynomial time. Veldt, Benson, and Kleinberg <cit.> introduced the generalized p-mean densest subgraph problem which captures the maxcore problem when p=-∞ and the densest subgraph problem when p=1. They observed that the objective leads to a supermodular function when p ≥ 1 and hence can be solved in polynomial time; for this case, they also developed a simple greedy peeling algorithm with a bounded approximation ratio. In this paper, we make several contributions. First, we prove that for any p ∈ (-1/8, 0) ∪ (0, 1/4) the problem is NP-Hard and for any p ∈ (-3,0) ∪ (0,1) the weighted version of the problem is NP-Hard, partly resolving a question left open in <cit.>. Second, we describe two simple 1/2-approximation algorithms for all p < 1, and show that our analysis of these algorithms is tight. For p > 1 we develop a fast near-linear time implementation of the greedy peeling algorithm from <cit.>. This allows us to plug it into the iterative peeling algorithm that was shown to converge to an optimum solution <cit.>. We demonstrate the efficacy of our algorithms by running extensive experiments on large graphs. Together, our results provide a comprehensive understanding of the complexity of the p-mean densest subgraph problem and lead to fast and provably good algorithms for the full range of p. empty § INTRODUCTION Dense subgraph discovery is an essential tool in graph mining and network analysis. One can view the approach as finding clusters or communities in a graph where the edges between the nodes in the cluster are denser compared to those in the entire graph. There are a number applications of dense subgraph discovery in biological settings <cit.>, protein-protein interaction networks <cit.>, web mining <cit.>, social network analysis <cit.>, real-time story identification <cit.>, and finance and fraud detection <cit.>. Different density definitions are used and studied in the literature, motivated by the needs of applications and theoretical considerations (see <cit.> for some surveys). Each density definition leads to a corresponding combinatorial optimization problem: given a graph G, find a subgraph of maximum density. Two of the most popular density measures in the literature are (i) the minimum degree of the subgraph and (ii) the average degree of the subgraph. These measures lead to the maxcore problem and densest subgraph problem (); the goal is to find the subgraph with the maximum minimum degree and maximum average degree, respectively. They are both polynomial-time solvable and have been extensively studied. We briefly describe them before discussing a common generalization that is the focus of this paper. A k-core of a graph is a maximal connected subgraph with all vertices of degree at least k. The largest value of k for which G contains a k-core is known as the degeneracy. We refer to the k-core obtaining this maximum as the maxcore. k-cores are a popular notion of density, commonly finding use in what is known as the k-core decomposition, a nested sequence of subgraphs that captures all k-cores. One nice feature of a k-core decomposition is that there is a simple linear-time peeling algorithm to compute it. We refer the reader to <cit.> for a survey on k-core decomposition and applications. The densest subgraph problem (), where the goal is to find a subgraph of maximum average degree, is a classical problem in combinatorial optimzation that is polynomial time solvable <cit.> via network flow techniques among others. It is widely studied in graph mining. Even though can be solved exactly, the algorithms are slow and this has spurred a study of approximation algorithms for  <cit.>. Amongst these approximation algorithms is the algorithm introduced by Asahiro et al. <cit.> to solve and shown to be a 1/2-approximation by Charikar <cit.>. Note that the peeling order is the same as the one for computing the optimum k-core decomposition; it is only in the second step where one checks all suffixes that the specific density measure for is used. Charikar's analysis has spurred the development and analysis of a variety of peeling algorithms for several variants of in both graphs and hypergraphs <cit.>. Veldt, Benson and Kleinberg <cit.> introduced the generalized mean densest subgraph problem. It is a common generalization of the two problems we described in the preceding paragraphs and is the focus of this paper. Given a real parameter p ∈∪{-∞, ∞} and an undirected graph G = (V,E), the density of a subgraph G[S] induced by a set S ⊆ V is defined as: M_p(S) := (1/S∑_v ∈ S d_S(v)^p)^1/p where d_S(v) is the degree of the vertex v in G[S]. To make the dependence on p more explicit, we refer to this problem also as the p-mean densest subgraph problem (). Note that M_-∞(S) = min_v ∈ S d_S(v) is the minimum degree in G[S], M_∞(S) = max_v ∈ S d_S(v) is the maximum degree, M_1(S) = 2|E(S)|/|S| is twice the average degree of G[S], and M_0(S) = (∏_v∈ S d_S(v))^1/S, which can be transformed to 1/S∑_v∈ Slog d_S(v) by taking the logarithm of the objective. Thus, for different values of p captures two of the most well-studied problems in dense subgraph discovery. As p varies from -∞ to ∞, M_p(S) prioritizes the smallest degree in S to the largest degree in S and provides a smooth way to generate subgraphs with different density properties. Veldt et al. made several contributions to . They observed that 1 is equivalent to and that -∞ is equivalent to finding the maxcore. For p ≥ 1, they observe the set function f_p:2^V →ℝ_+ where f_p(S) := ∑_v ∈ S d_S(v)^p is a supermodular function[A real-valued set function f : 2^V → is supermodular if f(B + v) - f(B) ≥ f(A + v) - f(A) for all A ⊆ B⊆ V and v ∈ V ∖ B. Equivalently, f(A) + f(B) ≤ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ V. f is supermodular iff -f is submodular.]. This implies that one can solve in polynomial time for all p > 1 via a standard reduction to submodular set function minimization, a classical result in combinatorial optimization <cit.>. Note that the function f_p(S) is not supermodular when p < 1, which partially stems from the fact that x^p is not convex in x when p < 1. Motivated by the fact that exact algorithms are very slow in practice, they describe a greedy peeling algorithm that runs in O(mn) time and is a (1/p+1)^1/p-approximation (here m and n are the number of edges and nodes of the graph). Note that the peeling order of is not the same as that of and depends on p. They supplement this work with experiments, showing that returns solutions with desirable characteristics for values of p in the range [1,2]. Motivation for this work. In this paper we are interested in algorithms for that are theoretically and empirically sound, and its complexity status for p < 1 which was left open in <cit.>. It is intriguing that p = -∞ and p ≥ 1 are both polynomial-time solvable while the status of p ∈ (-∞,1) is non-trivial to understand. We focus on the case of p ∈ (-∞, 0) ∪ (0, 1) as the objective for p = 0 is significantly different. Second, we are interested in developing fast approximation algorithms for when p > 1 and when p < 1. The O(mn)-time algorithm in <cit.> is slow for large graphs. Moreover, for p > 1, it is of substantial interest to find algorithms that provably converge to an optimum solution rather than provide a constant factor approximation (since the problem is efficiently solvable). Such algorithms have been of much interest for the classical problem <cit.>. This paper is also motivated by a few recent works. One of them is a paper of Chekuri, Quanrud, and Torres <cit.> that introduced a framework under which one can understand some of the results of <cit.>. In <cit.>, they observe that many notions of density in the literature are of the form f(S)/S where f:2^V → is a supermodular function. They refer to this general problem as the densest supermodular subset problem (). captures for p ≥ 1. One can see this by noting that for p ≥ 1, finding _S M_p(S) is equivalent to finding _S M_p(S)^p. Thus, for p ≥ 1, is equivalent to max_S f_p(S)/S. <cit.> describes a simple peeling algorithm for whose approximation guarantee depends on the supermodular function f; they refer to this as . In particular, is the same as when specialized to f_p, and the analysis in <cit.> recovers the one in <cit.> for . It is also shown in <cit.> that an iterated peeling algorithm, called , converges to an optimum solution for (thus, for any supermodular function). is motivated by and generalizes the iterated greedy algorithm that was suggested by Boob et al. <cit.> for . When one specializes to , the algorithm converges to a (1-)-approximation in O(Δ_p log n/λ^*^2) iterations where Δ_p = max_v ∈ V f_p(V) - f_p(V-v) and λ^* is the optimal density. A naive implementation yields a per-iteration running time of O(mn). Another iterative algorithm that converges to an optimum solution for is described in the work of Harb et al. <cit.> and is based on the well-known Frank-Wolfe method applied to ; this is inspired by the algorithm of Danisch et al. <cit.> for . When one specializes this algorithm to , a naive implementation yields a per-iteration running time of O(m + n log n). Since for p ≥ 1 is a special case of , these two iterative algorithms will also converge to an optimum solution for . §.§ Our Results As remarked earlier, our goal is to understand the complexity of and develop improved algorithms for all values of p. We make four contributions in this paper towards this goal and we outline them below with some discussion of each. NP-Hardness of for p < 1. We prove that is NP-Hard for any fixed p ∈ (-1/8,0)∪(0,1/4) and for weighted graphs is NP-Hard for any fixed p ∈ (-3, 0) ∪ (0,1). The hardness reduction is technically involved, especially due to the non-linear objective function. Our proof involves numerical computation and we restrict our attention to the range (-1/8, 0) ∪ (0,1/4) for the unweighted case and (-3,0) ∪(0,1) for the weighted case to make the calculations and the proof transparent. We believe that our proof methodology extends to all p ∈ (-∞, 0) ∪ (0, 1) and describe an outline to do so. 1/2-approximation algorithms for p ∈ (-∞, 1). Our NP-Hardness result for motivates the search for approximation algorithms and heuristics when p < 1. In <cit.> the authors do some empirical evaluation using for p ∈ (0,1) even though the corresponding function f_p is not supermodular; no approximation guarantee is known for this algorithm. is only well-defined for p > 0. Do we know a bad example for when p <1? We describe two different and simple 1/2-approximation algorithms for when p ∈ (-∞, 1), one based on simple greedy peeling and the other based on an exact solution to . These are the first algorithms with approximation guarantees for this regime of p. We also show that 1/2 is tight for these algorithms. We describe another algorithm, based on iterative peeling, that interpolates between the preceding two algorithms. It is also guaranteed to be a 1/2-approximation but allows us to generate several candidate solutions along the way in a natural fashion. Fast and improved algorithms for p > 1. Greedy-p from <cit.> runs in O(mn) time, which is slow for large high-degree graphs. We describe a near-linear time algorithm for when p > 1. In particular, given a parameter > 0, our algorithm runs in O(pmlog^2 n/) time and has an approximation ratio of (1 - )η, where η is the approximation guarantee of . This algorithm, , is effectively a faster implementation of that utilizes lazy updates to improve the running time[We use notation to suppress polylogarithmic factors.] to (pm/) while only losing a factor of (1 - ) in the approximation ratio compared to . Another important motivation and utility for is the following. Recall the preceding discussion that iterating the greedy peeling algorithm yields an algorithm that converges to the optimum solution <cit.>. However, iterating the vanilla implementation of is too slow for large graphs. In contrast, the faster allows us to run many iterations on large graphs and gives good results in a reasonable amount of time. Experiments. We empirically evaluate the efficacy of our algorithms and some variations on a collection of ten publicly available large real-world graphs. For p > 1, we show that returns solutions with densities close to that of while running typically at least 2 to 4 times as fast. We evaluate the performance of the two iterative algorithms that provably converge to an optimum solution and compare their performance. We find that our algorithm converges faster on all graphs and values of p tested. For p < 1, we empirically evaluate our two new approximation algorithms, and we compare against which was tested in <cit.> as a heuristic. We compare the three algorithms both for their running time and solution quality and report our findings for different values of p ∈ (0,1). One of our approximation algorithms significantly outperforms all algorithms tested in terms of running time while returning a subgraph with comparable solution quality. Organization. In an effort to highlight the algorithmic aspects of our work, we start by describing and analyzing in Section <ref> and presenting our two new approximation algorithms for p < 1 in Section <ref>. We also discuss our heuristics for p > 1 and p < 1 based on iterative algorithms for in Section <ref>. We then present our hardness result in Section <ref>. We conclude with experiments in Section <ref>. All proofs that are omitted from the main body of the paper are contained in the appendix. § RELATED WORK has been the impetus for a wide-ranging and extensive subfield of research. The problem is solvable in polynomial time via different methods, such as a reduction to maximum flow <cit.>, a reduction to submodular minimization, and via an exact LP relaxation <cit.>. Running times of these algorithms are relatively slow and thus approximation algorithms have been considered. The theoretically fastest known approximations run in near-linear time, obtaining (1 - )-approximations in (m ·(1/)) time <cit.>. The fastest known running time is (m/) <cit.>. These algorithms are relatively complex to implement and practitioners often prefer simpler approximation algorithms but with worse approximation ratios, such as  <cit.> and  <cit.>. However, there has been recent work that showed that one can obtain near-optimal solutions using continuous optimization methods that are both simple and efficient on large real-world graphs <cit.>. We discuss some of the density definitions considered in the literature. Given a graph G = (V,E) and a finite collection of pattern graphs , many densities take the form f(S)/S where f: 2^V → counts the number of occurrences of the patterns in in the induced subgraph G[S]. This exact problem was considered in <cit.>. <cit.> considered the special case where is a single triangle graph and <cit.> considered the special case where is a single clique on k vertices. One can also consider a version of for hypergraphs, where the density is defined as E(S)/S where E(S) is the set of hyperedges with all vertices in S <cit.>. We can reduce the pattern graph problem to the hypergraph problem by introducing a hyperedge for each occurrence of the pattern in the input graph. Other density definitions look at modifying by considering the density E(S)/g(S) where g is an arbitrary function of S <cit.>. Given a parameter α, another density considers E(S) - αδ(S)/S <cit.>, which has connections to modularity density maximization <cit.>. The objective M_p is a class of densities that captures a wide-range of different objectives, including when p = 1 and the maxcore problem when p = -∞ <cit.>. Aside from the variation modifying the denominator of from <cit.>, all of the different densities mentioned above fall into the framework of where we want to maximize f(S)/S for a supermodular function f <cit.>. This field of research is vast and we cannot hope to do it justice here, so we point the reader to five separate tutorials/surveys on the topic and the references therein <cit.>. § PRELIMINARIES Fix a graph G = (V,E). Let d_S(v) denote the degree of v in the induced subgraph G[S]. Let N(v) denote the neighborhood of v in G. For any p ∈∪{-∞, ∞}, and any S ⊆ V, we define f_p(S) := ∑_v ∈ S d_S(v)^p, ρ_p(S) := f_p(S)/S and M_p(S) := ρ_p(S)^1/p. We let S_p^*(G) denote _S ⊆ V M_p(S) and let M_p^*(G) := M_p(S_p^*). Note we use S_p^* and M_p^* in place of S_p^*(G) and M_p^*(G) when the graph G is clear from context. For S ⊆ V where v ∈ V ∖ S and u ∈ S, we denote S ∪{v} as S+ v and S ∖{u} as S -u. For v ∈ V and S ⊆ V, let f_p(v | S) := f_p(v + S) - f_p(S). As we often consider p ∈{-∞, ∞}, we use [-∞, ∞] to denote the extended real line ∪{-∞, ∞}. Peeling algorithms. Recall that the greedy peeling algorithm of Veldt et al. for is a (1/p+1)^1/p-approximation and runs in O(mn) time <cit.>. We give the pseudocode in Figure <ref>. We now introduce the iterative peeling algorithm of Chekuri et al. for applied specifically to the function f_p <cit.>, which is a generalization of the algorithm of Boob et al. <cit.>. We refer to the specialization of to as . maintains weights for each of the vertices and instead of peeling the vertex v minimizing f_p(v | S_i) like , it peels the vertex v minimizing f_p(v| S_i) plus the weight of v. Since initializes the weights to 0, the first iteration is exactly . However, subsequent iterations have positive values for the weights and therefore different orderings of the vertices are considered. <cit.> shows a connection between this algorithm and the multiplicative weights updates method and proves convergence via this connection. We give the pseudocode for in Figure <ref>. Properties of M_p. We give a known fact about the monotonicity of the M_p objective in the graph setting we consider. We provide a short proof in the appendix (see Appendix <ref>). Let S ⊆ V. For p ≤ q, we have M_p(S) ≤ M_q(S). Degeneracy. The degeneracy of a graph is a notion often used to measure sparseness of the graph. There are a few different commonly-used definitions of degeneracy. We use the following definition for convenience. The degeneracy d(G) of a graph G is the maximum min-degree over all subgraphs i.e. max_S min_v ∈ S d_S(v). The subset of vertices attaining the maximum min-degree is the maxcore i.e. _S min_v ∈ S d_S(v). The degeneracy, which is exactly M_-∞^*, is easy to compute via the standard greedy peeling algorithm. We state this well-known fact in the following proposition (e.g., see <cit.>). Note that the algorithm constructs the same ordering of the vertices as the algorithm 1. Let v_1, …, v_n be the order of the vertices produced by the standard greedy peeling algorithm for computing the degeneracy. For i ∈ [n], let S_i = {v_i, v_i+1, …, v_n}. The subset S_i maximizing the minimum degree is the maxcore and therefore the minimum degree of S_i, d_S_i(v_i), is the degeneracy of G. We have the following statement connecting different values of M_p^*. The first two inequalities follow directly from Proposition <ref> and the last follows via a simple known argument connecting the degeneracy to a subgraph with maximum average degree S_1^* (e.g., see <cit.>). For any graph and any p ∈ [-∞, 1], we have M_-∞^* ≤ M_p^* ≤ M_1^* ≤ 2M_-∞^*. § FAST IMPLEMENTATION OF In this section, we present a fast algorithm that runs in near-linear time for constant p > 1 that is effectively a faster implementation of . The naive implementation of dynamically maintains a data structure of the values f_p(v| V - v) for all v subject to vertex deletions in overall time O(mn). The idea behind our near-linear time implementation of is simple: we can dynamically maintain a (1+)-approximation to the values f_p(v | V-v) in Õ(pm/) time while only losing a factor of 1 - in the approximation ratio. To dynamically maintain such a data structure, we use approximate values of vertex degrees to compute a proxy for f_p(v| V-v) and will only update f_p(v | V-v) when an approximate vertex degree changes. We show that if we keep (1+/p)-approximations of degrees, then this leads to a (1+)-approximation of f_p(v| V-v) values. In particular, note that for p ∈∖{0}, for any vertex v ∈ V and S ⊆ V with v ∈ S, we have f_p(v | S - v) = ∑_u ∈ S d_S(u)^p - ∑_u ∈ S - v d_S-v(u)^p = d_S(v)^p + ∑_u ∈ N(v) ∩ S d_S(u)^p - (d_S(u) - 1)^p, where (<ref>) follows from (<ref>) as vertices not incident to v have the same degree in G[S] and G[S-v]. Our algorithm will always exactly update the first term in (<ref>) but will only update the sum when approximate degrees change. We give pseudocode for this algorithm in Figure <ref>. It is important to note that for p > 0, maximizing M_p(S) is equivalent to maximizing f_p(S)/S. This implies that Line <ref> of and Line <ref> of are equivalent for p > 0. Let p ≥ 1, G = (V,E) be an undirected graph, and let ∈ (0,1/2]. Then (G, ) is a (1-/p+1)^1/p-approximation to with an O(p m log ^2 n/) running time. Note that when = 0, the approximation guarantee of Theorem <ref> matches that of  <cit.>. is exactly when = 0 and thus the running time is O(mn) in this case. To prove Theorem <ref>, we first observe that at each iteration of , if S is the current vertex set and we return a vertex v satisfying f_p(v | S - v) ≤ (1+)min_u ∈ S f_p(u| S - u), we only lose a (1 - )-multiplicative factor in the approximation ratio. We show this in the following lemma. Note that the proof only requires a slight modification to the proof of Theorem 3.1 in <cit.> and is therefore included in the appendix for the sake of completeness (see Appendix <ref>). Assume p ≥ 1 and ∈ [0,1]. Suppose we greedily peel vertices with the update rule in (<ref>) (i.e. modify update rule, Line (<ref>), of with the update rule in (<ref>)). Then the output is a (1-/p + 1)^1/p-approximation for . The algorithm dynamically maintains an approximate value of f_p(v | V -v) subject to vertex deletions for each vertex v. specifically uses approximate vertex degrees to estimate the sum in (<ref>). We use the following lemma to show that approximate degrees suffice in maintaining a close approximation to ∑_u ∈ N(v) ∩ S d_S(v)^p - (d_S(v) - 1)^p. Let ∈ [0,1/2], d be an integer and p ≥ 1. Let α∈ [0, 1 + /p]. Then (α d)^p - (α d - 1)^p ≤ (1 +)(d^p - (d-1)^p). The proof of the preceding lemma follows easily by essentially arguing that (α d)^p - (α d - 1)^p /1+ is decreasing in . The following lemma handles the issue of running time. Note that the statement also holds for p ∈ (0,1). Fix ∈ (0,1]. Suppose p > 0. (G,) runs in O(mlog n + mlog^2 n/log(1 + /p)) time. If p ≥, this simplifies to O(pmlog^2 n/) time. Can cut a line here if necessary. The running time of is dominated by the inner for loop on Line (<ref>). The proof of Lemma <ref> proceeds by recognizing that this for loop is only ever called when the approximate degree of a vertex u exceeds (1 + /p)D[u], where D[u] is the exact current degree. Thus, the maximum number of times we update any vertex is O(log_1 + /p(n)) = O(p log n/). Since the for loop only requires O(d_G(u) log n) time to run, the total time spent on a single vertex u in this for loop is O(d_G(u) ·p log^2n/). Summing over all vertices, this leads to the desired running time. The analysis above for > 0 implies a better analysis for the case of = 0. If = 0, the condition on Line (<ref>) is always satisfied. As noted above, the inner for loop always requires O(d_G(u)log n) time to run. Further, as u can only be the neighbor of a peeled vertex at most d_G(u) times, the total amount of time spent on Line (<ref>) for vertex u is O(d_G(u)^2 ·log n). This leads to an overall running time of O(∑_v ∈ V d_G(v)^2 log n), which is Õ(mn). We leave the proof of Theorem <ref> to the appendix (see Appendix <ref>). § APPROXIMATION ALGORITHMS We give two new approximation algorithms for p-mean DSG when p ∈ (-∞, 1). The algorithms rely on the fact that S_1^* and S_-∞^* can be found in polynomial time. We show each of these subgraphs are a 1/2-approximation for this regime of p. We complement this result with a family of graphs where 1/2 is the best one can do for each algorithm. We also briefly discuss our iterative heuristics for both p > 1 and p < 1 in Section <ref>. §.§ 1/2-approximation via the maxcore Our algorithm that leverages the standard greedy peeling algorithm for the maxcore is given in Figure <ref>. The algorithm is exactly Charikar's greedy peeling algorithm when p = 1. Let p ∈ [-∞, 1]. Let S_out = (G, p). Then M_p(S_out) ≥1/2 M_p^*. By Proposition <ref>, there exists i ∈ [n] with M_-∞(S_i) = M_-∞^*. By Proposition <ref>, M_-∞(S_i) ≤ M_p(S_i) and by choice of S_out, we have M_p(S_i) ≤ M_p(S_out). Therefore, M_-∞^* ≤ M_p(S_out). Finally, by Proposition <ref>, we have 1/2 M_p^* ≤ M_-∞^*. Combining these two statements, 1/2 M_p^* ≤ M_p(S_out). This concludes the proof. In <cit.>, when p > 1, they show that can perform arbitrarily poorly by constructing the following graph: the disjoint union of the complete bipartite graph K_d,D with d vertices on one side and D vertices on the other and r cliques of size d+2 where d ≪ D. first peels all of the vertices in K_d,D then peels all of the cliques. It is not hard to show that K_d,D is the optimal solution with density proportional to (dD^p-1)^1/p and the highest density suffix finds is the entire graph with density proportional to D^(p-1)/p. Then d^-1/p is the best approximation can achieve. This is a bad example is because the high degree vertices in K_d,D are not prioritized and these vertices contribute significantly to the density of solutions in this graph for p > 1. §.§ 1/2-approximation via the 1-mean densest subgraph We analyze the algorithm that simply returns the 1-mean densest subgraph. Let p ∈ [-∞, 1]. Recall S_1^* = _S ⊆ V M_1(S). We have M_p(S_1^*) ≥1/2 M_p^*. We first argue that M_-∞(S_1^*) ≥1/2 M_1^*. It suffices to show that d_S_1^*(v) ≥E(S_1^*)/S_1^* for every v ∈ S_1^*. Suppose towards a contradiction that there exists v ∈ S_1^* such that d_S_1^*(v) < E(S_1^*)/S_1^*. Using this and observing E(S_1^*) - E(S_1^*- v) = d_S_1^*(v), after rearranging, we have E(S_1^*-v)/S_1^* -v > E(S_1^*)/S_1^*. Multiplying through by 2, we obtain M_1(S_1^* -v) > M_1(S_1^*), contradicting the optimality of S_1^*. We then have M_p(S_1^*) ≥ M_-∞(S_1^*) ≥1/2M_1^* ≥1/2 M_p^* where the first and last inequality are via Proposition <ref> and the second inequality is via (<ref>). This concludes the proof. §.§ Tight examples: showing M_p^* ≈ 2M_-∞^* The goal of this section is to show that 1/2 is tight for the two approximation algorithms from the previous sections. There exists a family of graphs showing the algorithms from Section <ref> and Section <ref> are at best a 1/2-approximation. We want to point out that it is a nontrivial task to devise such a family of graphs, and the construction we give is not immediately obvious. In the following lemma, we show that it suffices in our construction in Theorem <ref> to find a graph H where the degeneracy M_-∞^*(H) is roughly half of M_p^*(H). From Proposition <ref>, we know that M_p^*(H) ≤ M_1^*(H) ≤ 2M_-∞^*(H), so we are essentially trying to find a family of graphs where these inequalities are tight. There is a simple family of graphs, namely complete bipartite graphs where one partition is much larger than the other, where M_1^* ≈ 2M_-∞^*. It is not immediately clear, however, if such a family exists where M_p^* ≈ 2M_-∞^*. One might quickly realize that the path satisfies this condition, which would give a relatively simple construction to prove Theorem <ref>. However, this construction would be brittle in the sense that it leaves open the possibility that the problem gets easier as the degeneracy of the graph increases. To resolve this concern, in Theorem <ref>, we show how one can construct a graph with an arbitrary degeneracy such that M_p^* ≈ 2M_-∞^*. This result is potentially of independent interest as it has some nice graph theoretic connections. Let p ∈ (-∞,1). Fix α∈ [1,2] and an integer d ≥ 1. Assume there exists a graph H where d = M_-∞^*(H) and M_p^*(H) ≥α d. Let G be the disjoint union of (i) H, (ii) r copies of a clique on d+3 vertices K_d+3, and (iii) the complete bipartite graph K_d+1,D (see Figure <ref>). Let n_H be the number of vertices in H. Assume n_H^2 = o(r), d^2 = O(D) and dD = o(r). Then (1) M_p^*(G) ≥α d, (2) letting S_1 = (G), we have lim_r →∞ M_p(S_1) = d+2 and (3) lim_D →∞ M_p(M_1^*(G)) = d+1. In the graph G in Lemma <ref>, the optimal solution is H. When we run on G, H is peeled first and the remainder of the graph has density roughly d+2 when there are sufficiently many cliques. For the approximation algorithm from Section <ref>, the densest 1-mean subgraph of G is K_d+1, D, which satisfies M_p^*(K_d+1,D) ≈ d+1 for sufficiently large D. With more effort, one can argue that, for a fixed p and , it suffices to take a graph G whose size is polynomial in D and r. With Lemma <ref>, all that remains to prove Theorem <ref> is to construct a graph H where d = M_-∞^*(H) and M_p^*(H) ≥ (2 - )d for a given d and . Let ∈ (0,1) and d ≥ 1 be an integer. Assume p ∈ (-1,1). For all integers n ≥2d/, there exists a graph G on n vertices with degeneracy d such that M_p^*(G) ≥ (1-)2M_-∞^*(G). For p ∈ (-∞, -1], we can obtain the same guarantee if n ≥d+12((1 - 1/2d)^p - 1) + d(2^-p - 1)/·p. The proof of the proceeding theorem is constructive, using a result of Bickle <cit.> to aid in constructing a d-degenerate graph where most of the vertices have degree 2d. Because this graph G has most vertices equal to 2d, then we expect that M_p^*(G) ≈ 2d as the M_p^* value of any regular graph is simply the degree of the graph. Theorem <ref> implies that for sufficiently large n, there exists a graph H on n vertices such that d = M_-∞^*(H) and M_p^*(H) ≥ (1 - )2d. Therefore, Lemma <ref> holds with α = (1 - )2d. Thus, there exists an infinite class of graphs where both algorithms from Section <ref> and <ref> are at best a 1/2-approximation. §.§ Iterative heuristics for p > 1 and p < 1 We introduce iterative heuristics for p > 1 and p < 1 with the goal of producing better solutions than the algorithms that only consider a single ordering of the vertices. For p > 1, recall that the algorithm converges to a near-optimal solution for . runs at each iteration, however, this is computationally prohibitive on large graphs. We therefore introduce , which runs at each iteration. We show the benefit of iteration on real-world graphs in Section <ref>. For p < 1, we consider a heuristic that essentially takes the best of both of our approximation algorithms from this section. For the approximation algorithm from Section <ref> that returns the 1-mean densest subgraph, we use the algorithm to compute a near-optimal solution S_1 to 1. Our heuristic runs and finds the largest M_p-density suffix of all orderings produced by . This implies that the first iteration of is exactly . As we use for computing S_1, we have that produces a subgraph that has density at least as good as both of our approximation algorithms. It could even potentially produce a larger density as it considers many more orderings than just the ones that correspond to S_-∞^* and S_1. We run experiments testing the benefit of iteration on real-world graphs in Section <ref>. § HARDNESS OF The goal of this section is to prove the following theorem. is NP-hard for p ∈ (-1/8, 0) ∪ (0, 1/4) and weighted is NP-hard for p ∈ (-3, 0) ∪ (0,1). In this section, we present our hardness results for p > 0. In the appendix, we discuss how one can formally extend the arguments to p < 0 (see Appendix <ref>) and we informally discuss how to extend the argument for more values of p in (-∞, 0) ∪ (0, 1) (see Appendix <ref>). Let p ∈ (0,1). Recall f_p(S) = ∑_v∈ S d_S(v)^p and ρ_p(S) = f_p(S)/S. As p > 0, max_S M_p(S) is equivalent to the problem of max_S ρ_p(S). We therefore focus on the problem of max_S ρ_p(S). We give a reduction from the standard NP-Complete problem . [] In the problem, the input is a family of subsets = {S_1,S_2,…, S_m} each of cardinality 3 over a ground set = {e_1,e_2,…,e_3n} and the goal is to determine if there exists a collection of sets S_i_1, S_i_2,…, S_i_n that form a partition of . If such sets exist, we say that {S_i_1,…, S_i_n} is an exact 3-cover. We give a formal definition of weighted . [weighted ] The input is an edge-weighted graph G = (V,E, c : E →_+). The weighted p-mean DSG problem is the same as the p-mean DSG problem if one defines d_S(v) as the weighted degree ∑_e ∈δ(v) ∩ E(S) c_e where δ(v) is the set of edges leaving v. The reduction from . We first give the reduction for the weighted case. Let = {S_1,S_2,…, S_m} and = {e_1,…, e_3n} be an instance of . We first construct a graph G = (L ∪ A, E) as follows. L has a vertex v_i for each set S_i and A has a vertex u_j for each element e_j. We add a weight 1 edge from every “set vertex" in L to the corresponding “element vertices" in A that the set contains. Formally, for all i ∈ [m], we add the edge set {(v_i u_j , 1) | e_j ∈ S_i} to E where (v_iu_j, 1) is the undirected edge from v_i to u_j of weight 1. Further, we add an edge to E between all pairs of vertices in A with weight d/A - 1 where d := 1.23p + 4.77. Let := 3^p + 3(d+1)^p/4 and let _G = _S ρ_p(S). The reduction constructs this graph and returns TRUE iff _G ≥ρ^*. We provide an illustration of the reduction in Figure <ref>. The weights of the edges in our reduction are rational numbers and could be made integral by scaling the edge weights by an appropriate integer. One could then argue that an unweighted version of the problem defined with multigraphs is NP-Hard. For the reduction for the unweighted case, we again construct the graph G = (L∪ A, E). All weight-1 edges from L to A remain but now they are unweighted. Instead of G[A] being a clique with equal weight edges, we let G[A] be a connected d-regular graph where d = 5. (We can assume n is even so that such a d-regular graph must exist.) Again, the reduction constructs this graph G and returns TRUE iff _G ≥. Proof outline. The degree of the vertices in A in the subgraph G[A] is d = 1.23p + 4.77 for the weighted case and d = 5 for the unweighted case, which are both larger than the degree-3 vertices in L. d is chosen to be large enough in both cases so that any optimal solution will necessarily take all of A, but small enough so that optimal solutions still need to take vertices in L. Considering only subsets S⊆ L of a fixed size, the objective function ρ_p(S ∪ A) favors solutions S∪ A with more uniform degrees. We take advantage of this to show that an exact 3-cover, which will add exactly one to the degree of each vertex in A, attains the largest possible density . Now suppose contains an exact 3-cover S_i_1, …, S_i_n. Let S = {v_i_1, …,v_i_n} be the subset of L corresponding to the exact 3-cover. Then as S = n = A/3, _G ≥f_p(S ∪ A)/S ∪ A = 3^p ·S + (d+1)^p A/S + A = 3^p ·A/3 + (d+1)^p A/A/3+A = . Now assume does not contain an exact 3-cover. Letting S ⊆ L and A' ⊆ A, we want to show that ρ_p(S ∪ A') <. We focus on the case where A' = A. Suppose 3·S = αA for some α∈_≥ 0. We then have ρ_p(S ∪ A) =3^p ·S + ∑_v ∈ A d_S∪ A(v)^p/S + A =3^p ·S + ∑_v ∈ A (d + d_S+v(v))^p/S + A ≤3^p ·S + A· (d + 3 S/A)^p/S + A =3^p ·α + 3· (d + α)^p/α +3, where the first equality holds as G[A] is d-regular, the inequality uses the fact that ∑_i=1^n (d+x_i)^p is a concave function in x and, subject to the constraint that ∑_i x_i = s, is maximized when each x_i = s/n, and the final equality holds as 3 ·S = αA. Define g(α) := 3^p ·α + 3· (d + α)^p/α +3. Note that g(1) =. We claim it suffices to choose d such that g(α) is uniquely maximized at α =1. So if α 1, then ρ_p(S ∪ A) < g(1) by the choice of d. Now consider the case when α =1. As does not contain an exact 3-cover, it must be the case that there exists u,v∈ A such that d_S ∪ A(u) d_S ∪ A(v). This implies that Inequality (<ref>) is strict, which again implies ρ_p(S ∪ A) < g(1). A natural direction for the proof would then be to choose d so that g(α) is uniquely maximized at α = 1. The main issue with this approach is that it leads to solutions where d is irrational even when p is rational. One might think we should then consider reducing from l where ℓ>3 is an integer. However, it is a challenging problem to choose such an ℓ and d. To remedy these issues, in Lemma <ref>, we consider a tighter bound than the one in Inequality (<ref>), which utilizes the fact that we are optimizing the function of interest over the integers. We provide a proof of the lemma in Appendix <ref>. Let c ∈_≥ 0 and p > 0. Let f : _≥ 0^n → be defined as f(x) = ∑_i=1^n (c+x_i)^p where x ∈^n and x_i is the i-th coordinate of x. Consider the program parameterized by s∈: maximize f(x) over x∈^n, x ≥ 0 s.t. ∑_i=1^n x_i = s. The program is uniquely maximized at the vector of integers with sum equal to s and each entry is in {s/n, s/n - 1}. If p < 0 and the optimization problem is a minimization problem, the program is uniquely minimized at the vector of integers with sum equal to s and each entry is in {s/n, s/n-1}. Now consider again the case when the input does not contain an exact 3-cover. From the proof outline above, we want to argue that the quantity in (<ref>) is strictly less than g(1), which is equal to ρ^*. Reevaluating (<ref>) with Lemma <ref> in hand, we have a tighter bound on ∑_v ∈ A (d + d_S+v(v))^p compared to the bound in (<ref>) as d_S+v(v) is integral for all v ∈ A. After carefully choosing the value of d, this improved bound allows us to prove that the quantity in (<ref>) is strictly less than g(1). We use the proof outline above to prove the following theorem, which is exactly Theorem <ref> when assuming p > 0. is NP-hard for p ∈ (0, 1/4) and weighted is NP-hard for p ∈ (0,1). We note again that we prove NP-hardness for values of p < 0 in the appendix (see Appendix <ref>). § EXPERIMENTS We evaluate the algorithms described in previous sections on ten real-world graphs. The graphs are publicly accessible from SNAP <cit.> and SuiteSparse Matrix Collection <cit.>. The size of each graph is given in Table <ref>. The graphs we consider are a subset of the graphs from <cit.>. We focused on the graphs in <cit.> for which they reported raw statistics, which was done purposefully to facilitate direct comparisons. Some graphs vary slightly from the graphs in <cit.> due to differences in preprocessing as we remove all isolated vertices. All algorithms are implemented in C++. We use the implementation of from <cit.>, which serves as our implementation for and . We did not use the implementation for of <cit.> as it was written in Julia. The reported running times of in <cit.> are roughly twice as fast as our implementation. This difference largely does not impact our results and we mention why this is the case in each section. We do not measure the time it takes to read the graph in order to reduce variability (the time it takes to read the largest graph is 6 seconds). The experiments were done on a Slurm-based cluster, where we ran each experiment on 1 node with 16 cores. Each node/core had Xeon PHI 5100 CPUs and 16 GB of RAM. In each section, we present partial results for a few graphs but include all results in the appendix (see Appendix <ref>). We give a few highlights here and discuss details in the following sections. * For p > 1, is much faster than while returning subgraphs with comparable density. * For with p > 1, multiple iterations improve upon the density of for some graphs. * For p < 1, returns solutions with similar density to the algorithms we compare it against but runs significantly faster. §.§ outperforms We find that outperforms in terms of running time and typically returns a solution with similar density. We run experiments for p ∈{1.05, 1.25, 1.5,1.75, 2}. Note that we stop at p=2 as the objective starts rewarding subgraphs with large degrees and therefore the returned subgraphs are often a large fraction of the graph. We evaluate with values of ∈{0.01,0.1,1}, all of which result in similar running times and densities. In Table <ref>, we present the running times and densities of returned solutions for and on four of the ten real-world graphs and for two values of p. The full table of the results is given in Appendix <ref>. Ignoring the road networks roadCA and roadTX, is roughly at least twice as fast as . The road networks have a maximum degree of 12, so the savings from are marginal. On graphs webG and webBS, is roughly at least 4 times faster. Since webG and webBS have many vertices of large degree, this was expected as we know from the analysis of , the running time of depends on the sum of the squared degrees. We therefore suggest the use of over , especially for graphs with large degrees, which we expect in many real-world graphs. We also point out that the reported running time difference with <cit.> does not impact these results as this is a relative comparison; we use the same code for and where the only difference is when the f_p(v | S - v) values in the min-heap are updated. §.§ Finding near-optimal solutions for p > 1 We compare , , and the Frank-Wolfe method in terms of running time and solution quality. In <cit.>, Harb et al. show how one can use Frank-Wolfe to solve an appropriate convex programming relaxation of . We provide details of the algorithm in the appendix (see Appendix <ref>). We run and for 100 iterations and Frank-Wolfe for 500 iterations. We only run on the five smallest graphs for even if we used the faster implementation of in <cit.>, the running time on large graphs could reach roughly around ten hours. Thus, for large graphs, this points to and Frank-Wolfe being better options than . For , we set =1. We run experiments for p ∈{1.05, 1.25, 1.5, 1.75, 2}. In Figure <ref>, we present plots showing the rate of convergence for four of the ten real-world graphs with p = 1.5. For each plot, we truncated the data to the first point at which all algorithms reached at least 99% of the optimal density achieved in order to more easily see trends in the data. The full collection of results is given in Appendix <ref>. We find there is a benefit to iteration; produces solutions with increasing density for multiple iterations for around half of the graphs (e.g., see roadCA and Astro plots in Figure <ref>). We also note that the rate of convergence of is faster than Frank-Wolfe on almost every single graph. §.§ Approximation algorithms for p < 1 For p < 1, we evaluate our two approximation algorithms from Section <ref>, , and the heuristic described in Section <ref>. We run all four algorithms on all ten real-world graphs and for p ∈{0.25, 0.5, 0.75}. We only run our approximation algorithms and for p ∈{-1, -0.5} as is not well-defined for p < 0. We run for 100 iterations. For our approximation algorithm returning the 1-mean DSG, as noted in Section <ref>, we use to compute a near-optimal solution. In Table <ref>, we present the results for four of the ten graphs and for two values of p. The full table of results is given in Appendix <ref>. We first want to highlight the fact that the orderings produced by and are not dependent on p. One can therefore run these algorithms once to compute approximations to for many different values of p < 1. For values of p ∈ (0,1), all four algorithms return graphs with roughly the same density, however, is much faster than the other three algorithms. As is only 1 iteration of and we run for 100 iterations, we find that is roughly around 100 times faster than and the algorithm finding the 1-mean densest subgraph. Furthermore, is anywhere from 2 to 20 times as fast as . For values of p < 0, again performs the best as it returns subgraphs of comparable density and is also the fastest algorithm. We also want to point out that although sometimes produces subgraphs with larger density than all algorithms tested, the benefit of iteration with is marginal. § CONCLUSION In this paper, we provide a deeper understanding of for the entire range of p. For p ∈ (-1/8, 0) ∪ (0, 1/4), we show is NP-Hard via a nontrivial reduction and we show how to extend this NP-hardness proof to p ∈ (-3, 0) ∪ (0,1) for the weighted version of the problem. This mostly resolves the open question regarding the complexity of for p < 1. We presented two approximation algorithms for p < 1, which are the first algorithms with any provable approximation ratio for this regime of p. We provided a faster implementation of for p > 1 called . We developed a heuristic based on iterative algorithms for finding near-optimal solutions to that utilized . This algorithm is ideal when does not find a near-optimal solution after a single iteration. Experiments show that our approximation algorithms for both p > 1 and p < 1 are highly effective and scalable. We outline a few future directions. We established NP-Hardness for where p ∈ (-1/8,0) ∪ (0,1/4) and for weighted where p∈ (-3, 0)∪(0, 1) and also provided simple 1/2-approximation algorithms. We believe that our NP-Hardness proof can be strengthened to show that it is APX-Hard to approximate the optimum, that is, there is some fixed δ > 0 such that it is NP-Hard to obtain a (1-δ)-approximation. The resulting δ is likely to be quite small. It is important to obtain tight approximation bounds for that closes that gap between 1/2 and (1-δ). This effort is likely to lead to the development of more sophisticated approximation algorithms and heuristics for the problem. Acknowledgements. The authors would like to thank Farouk Harb for helpful discussions and for providing code for the Frank-Wolfe algorithm. alpha § OMITTED PROOFS FROM SECTION <REF> Proposition <ref> is folklore and we only include the following proof for the sake of completeness. It suffices to show that M_p(S) is an increasing function in p for a fixed S. Fix S ⊆ V and assume p ∉{-∞, 0, ∞}. We have d/dp M_p(S) = M_p(S) ·d/dp ln M_p(S) ≥d/dp ln M_p(S) = -1/p^2lnρ_p(S) + 1/p· f_p(S)·d/dp f_p(S) = -1/p^2lnρ_p(S) + 1/p·∑_v ∈ S d_S(v)^pln d_S(v)/∑_v ∈ S d_S(v)^p where (<ref>) is at least 0 if and only if lnρ_p(S) ≤∑_v ∈ S d_S(v)^pln d_S(v)^p/∑_v ∈ S d_S(v)^p. Rearranging, (<ref>) is equivalent to ln(1/S) ≤∑_v ∈ S x_v ln x_v where x_v := d_S(v)^p/∑_v ∈ S d_S(v)^p. As x is a probability distribution over S, the final inequality holds by recognizing that the uniform distribution maximizes the entropy function. Thus, M_p(S) is an increasing function in p over (-∞, 0) and (0, ∞). One can extend the proof for p ∈{-∞, 0, ∞} by recognizing that the definition for M_p(S) in these cases is derived by taking the limit (e.g., lim_p→∞ M_p(S) = M_∞(S)). This concludes the proof. § OMITTED PROOFS FROM SECTION <REF> We have that f_p is supermodular for p ≥ 1 <cit.>. Let S^* be an optimal subset and ρ^* = ρ_p(S^*). In the proof of Theorem 3.1 of <cit.>, it is shown that for all v ∈ S^*, f_p(v | S^* - v) ≥ρ^*. Let v_i be the first element of S^* that is peeled. Let S_i be the set of vertices including v_i and all subsequent vertices in the peeling order. By the update rule in (<ref>), we have f_p(v_i | S_i - v_i) ≤ (1+)min_u ∈ S_i f_p(u | S_i - u). In Proposition 3.2 of <cit.>, it is shown that for all S ⊆ V, we have ∑_v∈ S f_p(v | S - v) ≤ (p+1)f(S). Repeating the argument in Theorem 3.1 in <cit.> only with a small change, we have f_p(S_i)/S_i = f_p(S_i)/∑_u ∈ S_if(u| S_i - u)·∑_u ∈ S_if(u | S_i - u)/S_i ≥1/p+1·S_i· f(v_i | S_i - v_i)/(1+)S_i ≥ρ^*/(1+)(p+1) ≥(1-)ρ^*/p+1. Raising both sides of the inequality to the 1/p-th power, we have M_p(S_i) ≥ (1-/p+1)^1/p· M_p^*. This concludes the proof. Note that it suffices to consider α = 1+/p as the left-hand side is increasing in α. Consider the function (α d)^p - (α d - 1)^p/e^. Suppose we can show this function is decreasing in . This would imply (α d)^p - (α d-1)^p/e^≤ d^p - (d-1)^p, which would prove the statement as e^≤ 1 + 2. We prove that the function is decreasing in by arguing that the derivative of (<ref>) with respect to is non-positive. Note that d/dα d = d/p. We have d/d(α d)^p - (α d - 1)^p/e^ = d[(α d)^p-1 - (α d - 1)^p-1]e^ - [(α d)^p - (α d - 1)^p] e^/^2. Since we need this quantity to be non-positive, it suffices to show d ≤(α d)^p - (α d - 1)^p/(α d)^p-1 - (α d - 1)^p-1 We have (α d)^p - (α d - 1)^p/(α d)^p-1 - (α d - 1)^p-1 = α d ·1 - (1 - 1/α d)^p/1 - (1 - 1/α d)^p-1≥α d ≥ d, where the first inequality holds as (1 - 1/α d)^p ≤ (1 - 1/α d)^p-1. This concludes the proof. The initialization of D and D' takes O(m) time and the initialization of A takes O(m + nlog n) time. At each iteration, as A is a min-heap, finding the minimum and removing it only takes O(1) time and O(log n) time, respectively. Suppose v_i is the vertex chosen at the i-th iteration. We consider the first three lines of the inner for loop. The number of iterations of the inner for loop is O(d_G(v_i)) and the first three lines take O(log n) time. Thus, the total time for these three lines throughout the entire run of the algorithm is O(m log n) time. The only part of the algorithm unaccounted for is the conditional statement in the inner for loop. In this if-statement, for a vertex u, we update A[w] for all w ∈ N(u) ∩ S_i+1 only when the approximate degree, D'[u], exceeds (1 + /p)D[u], where D[u] is the exact current degree. u is therefore updated only when the actual degree drops by at least a 1+/p factor. Since D[u] ≤ n, the maximum number of times we update any vertex is O(log_1+/p(n)). If p ≥, this is at most O(plog n/). The time it takes to update A is O(d_G(u)log n). Thus, the total time spent on a single vertex u throughout the entire run of the algorithm is O(d_G(u) ·log^2n/log(1+/p)). This concludes the proof. Running time is analyzed in Lemma <ref>, so we focus on the approximation guarantee. Consider an arbitrary iteration i of the algorithm. By Lemma <ref>, it suffices to show that f_p(v_i | S_i - v_i) ≤ (1+)min_v ∈ S_i f_p(v | S_i - v). We show something stronger. Let A_i be the state of the min-heap A at the start of iteration i. We show that for all v ∈ S_i, we have A_i[v] ≤ (1+)f_p(v | S_i - v). To prove this, we first note that this holds on the first iteration as we compute the values f_p(v | V - v) exactly. Let D_i' be the state of D' at the beginning of iteration i. As D maintains exact degrees, we use d_S_i(v) in place of D[v]. Note that the algorithm maintains A to ensure A_i[v] = d_S_i(v)^p + ∑_u ∈ N(v) ∩ S_i D_i'[u]^p - (D_i'[u]-1)^p. We see that D'[u] ≤ (1 + /p)d_S_i(u) is guaranteed for all u ∈ S_i by the condition in the inner for loop of iteration i-1 (if i = 1, then this statement holds as D_1'[v] = d_G(v)). Therefore, A_i[v] ≤ d_S_i(v) + (1+)∑_u ∈ N(v) ∩ S_i d_S_i(u)^p - (d_S_i(u)-1)^p by Lemma <ref>. Using the rewriting of f_p(v | S_i - v) given in Equation (<ref>), it follows that A_i[v] ≤ (1+)f_p(v | S_i - v). This concludes the proof. § OMITTED PROOFS FROM SECTION <REF> We start with (1), which follows from the fact that M_p^*(H) ≥α d. We next prove (2). The algorithm (from Section <ref>) first peels H as M_-∞^*(H) = d, then it peels K_d+1,D, and it finally peels all of the cliques. We have to analyze the density of all suffixes of this ordering and show that all have density at most d+2 as r →∞. We start by analyzing the suffixes while peeling H. Let H_i be the remaining vertices from H after peeling i vertices for i = 0,1,…, n_H. Let T be the vertices of G without H, so T = (d+1+D) + r(d+3). Thus, on the suffixes while peeling H, the algorithm obtains value max_i=0,1,…,n_H(∑_v ∈ H_i d_H_i(v)^p + f_p(T)/H_i + (d+1+D) + r(d+3))^1/p. We have f_p(T) = (d+1)D^p + D(d+1)^p + r(d+3)(d+2)^p, so f_p(T) ≤ 2(d+1)D + r(d+3)(d+2)^p and note that dD = o(r). As ∑_v ∈ H_i d_H_i(v)^p ≤ n_H^1+p and n_H^1+p = o(r), lim_r→∞∑_v ∈ H_i d_H_i(v)^p + f_p(T)/H_i + (d+1+D)+ r(d+3) = (d+2)^p. Thus, as r →∞, the maximum density attained while peeling H is at most d+2. We next analyze the suffixes while peeling K_d+1,D. We note that the density achieved while peeling K_d+1,D is at most max_ℓ∈ [1,d+1] q ∈ [1,D](ℓ q^p + q ℓ^p + r(d+3)(d+2)^p/ℓ + q + r(d+3))^1/p. As ℓ q^p + qℓ^p ≤ 2 (d+1)D and dD = o(r), we have lim_r →∞ℓ q^p + q ℓ^p + r(d+3)(d+2)^p/ℓ + q + r(d+3) = (d+2)^p. Therefore, lim_r →∞ M_p(S_1) = d+1. We now prove (3). We start by showing that M_1^*(G) is K_d+1,D. We have M_1(K_d+1,D) = 2(d+1)D/d+1 +D > 2d for D ≥ 2d^2. Furthermore, as M_-∞^*(H) = d, Proposition <ref> implies that M_1^*(H) ≤ 2d. We also have that M_1^*(K_d+3) = d+2 and M_1(K_d+1,D) > d+2 for D ≥ d+6. Therefore, M_1^*(G) = K_d+1,D. All that remains is to upper bound M_p(K_d+1,D). We have M_p(K_d+1,D) = ((d+1)D^p + D(d+1)^p/d+1+D)^1/p. As p < 1, we have D^p = o(D), which implies lim_D →∞(d+1)D^p + D(d+1)^p/d+1+D = (d+1)^p. Therefore, lim_D →∞ M_p(K_d+1,D) = d+1. To prove Theorem <ref>, we construct a graph whose degeneracy is roughly half of the density of an optimal solution to p. Note that Proposition <ref> shows M_p^* ≤ 2M_-∞^*. We therefore attempt to construct a d-degenerate graph where as many vertices as possible have degree 2d. Towards this end, we say that a d-degenerate graph is edge maximal if adding any edge to the graph makes the graph (d+1)-degenerate. The following theorem gives a characterization of edge-maximal d-degenerate graphs via degree sequences. Let d be a positive integer. A sequence of positive integers d_1 ≥ d_2 ≥⋯≥ d_n is the degree sequence of an edge-maximal d-degenerate graph if and only if * d ≤ d_i ≤min{n-1, n + d - i} for all i ∈ [n], and * ∑_i=1^n d_i = 2(dn - d+12). We use this theorem of Bickle in the following proof. Consider the following sequence: d, d+1, d+2, …, 2d- 1, 2d-1, 2d-1, …, 2d-1_d+12 copies, 2d, 2d, …, 2d_n - d - d+12 copies It is easy to check that both conditions (1) and (2) of Theorem <ref> hold. Thus, given such integers n and d, we can construct a d-degenerate graph on n vertices G with the given degree sequence. We now argue that M_p(G) ≥ (1-)2d, which would prove the theorem. For any p ∈ (-∞, 1) ∖{0}, we have that M_p(G) is ((n-d-d+12)(2d)^p + d+12(2d-1)^p +∑_i=0^d-1 (d + i)^p/n)^1/p ≥(n +d+12((1 - 1/2d)^p - 1) +d(2^-p - 1)/n)^1/p· 2d, where we use the fact that (a+bx^p/c)^1/p is an increasing function in x when x, a, b, and c are all positive. For p ∈ (0,1), using the rewriting of M_p(G) above in (<ref>), we have M_p(G)/2d≥(1 - p(d+1)/2 + dp/n)^1/p≥(1 - (p(d+1)/2 + dp)/2d)^1/p≥ 1 - ·d+1/2 + d/2d≥ 1-, where the first inequality holds as 1 - (1 - 1/2d)^p ≤p/d and 1 - 2^-p≤ p, the second inequality uses the fact that n ≥2d/, and the third inequality holds as (1+x)^1/p≥ 1+x/p for x ≥ 0. Now for p < 0, again using the rewriting of M_p(G) in (<ref>), we have M_p(G)/2d≥(1 + d+12((1 - 1/2d)^p - 1) + d(2^-p - 1)/n)^1/p≥(1 + p·)^1/p≥ 1 - , where the first inequality holds as n ≥d+12((1 - 1/2d)^p - 1) + d(2^-p - 1)/·p, and the last inequality holds as 1 - p·≤ (1 - )^p. If we were to assume p ∈ (-1,0), from (<ref>) above, we have d+12((1 - 1/2d)^p - 1) + d(2^-p - 1)/·p≤d+12p/d + 2dp/·p = d+1/2 + d/≤2d/, where the first inequality holds as (1 - 1/2d)^p - 1 ≤p/d and 2^-p - 1 ≤p. Thus, for the case of p ∈ (-1,0), it suffices to take n ≥2d/. This concludes the proof. § OMITTED PROOFS FROM SECTION <REF> §.§ Hardness for p ∈ (0,1) We start by providing the proof of Lemma <ref>. Fix s ∈. Let x^* be a vector such that x_i^* ∈{s/n, s/n-1} for all i ∈ [n] and ∑_i x_i^* = s. Let x ∈^n and x ≥ 0 such that ∑_i x_i = s and at least one entry of x is not in {s/n, s/n-1}. Let 2r = ∑_i=1^n x_i - x_i^*. We prove that f(x^*) > f(x) by induction on r. For the base case r = 1, there are three cases to consider. We can handle the first two cases together. In the first case, we have (x_1^*, x_2^*) = (s/n, s/n) and (x_1,x_2) = (s/n+1, s/n-1) and x_i = x_i^* for all i = 3,4,⋯,n. In the second case, we have (x_1^*, x_2^*) = (s/n-1,s/n-1) and (x_1,x_2) = (s/n, s/n-2) and x_i = x_i^* for all i = 3,4,⋯,n. For both cases, all we have to show is that (c+x_1)^p + (c+x_2)^p < (c+x_1^*)^p + (c+x_2^*)^p. As x_1^* = x_2^*, this follows immediately from the fact that the function ∑_i=1^n (c + y_i)^p is concave in y and, subject to the constraint that ∑_i y_i = s, is maximized when all terms are the same (i.e. y_i = s/n). For the third case, x_1 = s/n + 1 and x_1^* = s/n, x_2 = s/n - 2 and x_2^* = s/n-1, and x_i = x_i^* for all i =3,4,…,n. Again, we have to show (c+x_1)^p + (c+x_2)^p < (c+x_1^*)^p + (c+x_2^*)^p. We have x_1 > x_1^* > x_2^* > x_2. As (c+x)^p is concave in x and x_1 - x_1^* = x_2^* - x_2, we have (c+x_1)^p - (c+x_1^*)^p < (c+x_2^*)^p - (c+x_2)^p. This gives us the inequality we want up to rearranging. For the inductive step, we assume that the statement holds for r ≥ 1 and prove that it holds for r+1. Suppose that x ∈^n and x ≥ 0 such that ∑_i x_i = s, at least one entry of x is not in {s/n,s/n-1}, and 2(r+1) = ∑_i=1^n x_i - x_i^*. Using a similar argument as for the base case, we can reduce ∑_i=1^n x_i-x_i^* by 2. This concludes the proof for p > 0. For the case of p < 0, it is easy to see that everything still holds as ∑_i=1^n(c+y_i)^p is convex in y. In order to prove Theorem <ref>, we first need a few helpful technical lemmas. Let d ≥ 1 be an integer and p ∈. Define h : [0,1] → as h(β) := 3^pβ + 3β (d+1)^p + 3(1-β) d^p/β + 3. Let h' denote the derivative of h with respect to β. Fix β∈ [0,1]. Then h'(β) > 0 if and only if 3^p + 3(d+1)^p > 4d^p. We have that d/dβ h(β) is equal to (3^p + 3(d+1)^p - 3 d^p)(β + 3) - (3^pβ + 3β (d+1)^p + 3(1-β) d^p)/(β + 3)^2. Simplifying the numerator, h'(β) > 0 if and only if 3^1+p + 3^2(d+1)^p - 3^2 d^p - 3 d^p > 0. The lemma follows after dividing through by 3 and rearranging. Let d,t ≥ 1 be integers, p ∈, and β∈ [0,1]. Define g:[0,1] → as g(β) := 3^p (t+β) + 3β(d+t+1)^p + 3(1-β) (d+t)^p/t +β + 3. 0pt * Assume p ∈ (0,1). If 4(d+2)^p + 3^p < 5(d+1)^p, then g is strictly decreasing. * Assume p < 0. If 4(d+2)^p + 3^p > 5(d+1)^p, then g is strictly increasing. We have d/dβ g(β) is (3^p + 3(d+t+1)^p - 3(d+t)^p)(t+β+3)/(t+β+3)^2 - (3^p (t+β) + 3β(d+t+1)^p + 3(1-β) (d+t)^p)/(t+β+3)^2. Simplifying the numerator of both terms, d/dβ g(β) < 0 if and only if 3t(d+t+1)^p - 3t(d+t)^p + 3^1+p + 3^2(d+t+1)^p -3^2(d+t)^p - 3 (d+t)^p < 0. Dividing through by 3 and rearranging, this is equivalent to (4+t)(d+t)^p - (3+t)(d+t+1)^p - 3^p > 0. To prove (<ref>), it suffices to show the left-hand side of (<ref>) is increasing in t for p ∈ (0,1). To prove (<ref>), it suffices to show the left-hand side of (<ref>) is decreasing in t for p < 0. Assume p > 0. The derivative of the left-hand side of (<ref>) with respect to t is (d+t)^p + p(4+t)(d+t)^p-1 - (d+t+1)^p - p(3+t)(d+t+1)^p-1 ≥ -p(d+t)^p-1 + p(4+t)(d+t)^p-1 - p(3+t)(d+t+1)^p-1 = p(3+t)[(d+t)^p-1 - (d+t+1)^p-1] ≥ 0, where the first inequality follows from the fact that (x+1)^p - x^p ≤ px^p-1 for all x > 0 as x^p is concave and the second inequality follows from the fact that x^p-1 is a decreasing function in x as p <1. This proves (<ref>). Now assume p < 0. As x^p is convex in x, we have (x+1)^p - x^p ≥ px^p-1 for all x >0. Thus, one can use the same argument as for p > 0 to show that the derivative of (<ref>) is non-positive. This proves (<ref>). The following lemma considers the functions 3^p + 3(d+1)^p - 4d^p and 3^p + 4(d+2)^p - 5(d+1)^p, where d = 1.23p +4.77 for the weighted case and d = 5 for the unweighted case. The lemma states that for both the weighted and unweighted cases, (<ref>) is strictly positive and (<ref>) is strictly negative. This is easy to compute explicitly for specific values of p; for example, when p = 1/2 and d = 1.23p+4.77, the first function has value ≈ 0.0303 and the second function has value ≈ -0.032. One approach to proving the lemma for the weighted case would be to show that the derivative of (<ref>) only has one zero on the interval [0,1] and one can show this critical point is a maximum. Since (<ref>) is 0 when p ∈{0,1}, this would prove (<ref>) is strictly positive when p ∈ (0,1). A similar approach can be taken for (<ref>). For the unweighted case, the same proof idea could be used for verifying (<ref>). For (<ref>), one would only have to show the function is strictly increasing over (0, 1/4). Formal verification of these facts would be quite lengthy and not necessarily illuminating, so we therefore verified these inequalities analytically via MATLAB. To give an idea of what these functions look like, we plot both in Figure <ref>. Let f_1 : (0, 1) → be defined f_1(p) = 1.23p + 4.77 and let f_2 : (0,1/4) → be defined f_2(p) = 5. For i ∈{1,2} and all p in the domain of f_i, we have 3^p + 3(f_i(p)+1)^p > 4f_i(p)^p and 4(f_i(p)+2)^p + 3^p < 5(f_i(p)+1)^p. We need one more lemma regarding the general form of the density when solutions in the reduction do not take all of A. Let G = (L∪ A, E) be the graph of the reduction for either the weighted or unweighted case. For the weighted case, assume p ∈ (0,1) and d = 1.23p+4.77. For the unweighted case, assume p ∈ (0, 1/4) and d = 5. Fix S ⊆ L and A' ⊆ A. Then ρ_p(S ∪ A') = f_p(S ∪ A')/S ∪ A'≤3^p ·S + ∑_v ∈ A' (d + d_S+v(v))^p/S + A' If A' < A, the inequality above is strict. As the degree of each vertex in S is exactly 3 in both the weighted and unweighted cases, ρ_p(S∪ A') = f_p(S ∪ A')/S ∪ A' = 3^p ·S + ∑_v ∈ A' d_S∪ A'(v)^p/S + A'. In the weighted case, for all v ∈ A', d_S∪ A'(v)^p = (d·A'-1/A-1 + d_S+v(v))^p. As x^p is a strictly increasing function in x, we have d_S∪ A'(v)^p ≤ (d + d_S+v(v))^p. If A' < A, then the inequality is strict. In the unweighted case, as G[A] is d-regular, then d_S∪ A'(v)^p ≤ (d + d_S+v(v))^p for all v ∈ A'. If A' < A, as G[A] is connected, there must exist a vertex u ∈ A' such that d_A'(u) ≤ 4 < d. Thus, d_S∪ A'(u)^p < (d + d_S+u(u))^p. Therefore, in both the weighted and unweighted cases, we have ∑_v ∈ A' d_S∪ A'(v)^p ≤∑_v ∈ A'(d + d_S+v(v))^p, where the inequality is strict if A' < A. This concludes the proof. We are now ready to prove Theorem <ref>. Consider an instance of the problem: let = {S_1,…,S_m} be a family of subsets of the ground set = {e_1,e_2,…, e_3 n}. We use the reductions given in Section <ref> for the weighted and unweighted cases. Recall that both reductions construct the graph G = (L ∪ A, E) and return TRUE iff _G ≥ where = 3^p + 3(d+1)^p/4. Note d = 1.23p + 4.77 for the weighted case and d = 5 for the unweighted case. If contains an exact 3-cover, we showed in Equation (<ref>) this implies _G ≥ for both the weighted and unweighted cases. Assume does not contain an exact 3-cover. Let S ⊆ L and A' ⊆ A. We proceed via cases based on the value of the ratio S/A'. Case 1: 3S = A'. We have f_p(S ∪ A')/S ∪ A' ≤3^p ·S + ∑_v ∈ A' (d + d_S+v(v))^p/S + A' ≤3^p ·S + A'·(d + 3·S/A')^p/S + A' = . where (<ref>) holds by Lemma <ref>, (<ref>) holds as ∑_i=1^n (c+x_i)^p is concave in x and therefore is maximized when all terms are the same, and (<ref>) follows from S = 1/3·A'. As S does not correspond to an exact cover, either A' < A or there exists u,v ∈ A such that d_S∪ A(u) d_S∪ A(v). In the former case, Inequality (<ref>) is strict by Lemma <ref>. In the latter case, Inequality (<ref>) is strict. So in either case, we have that the density of S ∪ A' is strictly smaller than . Case 2: 3·S = βA' for β∈ [0,1). By Lemma <ref>, ρ_p(S∪ A') ≤3^p ·S + ∑_v ∈ A' (d + d_S+v(v))^p/S + A'. Thus, by Lemma <ref>, the density ρ_p(S ∪ A') is maximized when the vertices in A' have degree either d or d+ 1. We have ρ_p(S∪ A') ≤3^p ·S + β·A' (d+1)^p + (1-β)A' d^p/S+A' =3^pβ + 3β (d+1)^p + 3(1-β) d^p/β + 3, where the equality uses the fact that S = βA'/3. By differentiating (<ref>) with respect to β, it is easy to show that this function is strictly increasing in β if and only if 3^p + 3(d+1)^p > 4d^p. (We give a proof in Lemma <ref>.) By the choice of d = 1.23p + 4.77 for the weighted case and d = 5 for the unweighted case, Lemma <ref> shows the inequality is satisfied. Thus, the density ρ_p(S∪ A') is strictly less than the value of the function in (<ref>) when β = 1, which is exactly . Case 3: 3·S = αA' for α > 1. We reparameterize such that α = t + β where t≥ 1 is an integer and β∈ [0,1]. Note that we can assume that (t, β) (1, 0) as this was already handled in Case 1. By Lemma <ref>, ρ_p(S∪ A') ≤3^p ·S + ∑_v ∈ A' (d + d_S+v(v))^p/S + A'. By Lemma <ref>, the density ρ_p(S ∪ A') is maximized when the vertices in A' are either d + t or d+ t+1. We have ρ_p(S∪ A') ≤3^p ·S + β·A' (d+t+1)^p + (1-β)A' (d+t)^p/S+A' = 3^p (t+β) + 3β(d+t+1)^p + 3(1-β) (d+t)^p/t +β + 3, where the equality uses the fact that S = (t+β)A'/3. By differentiating (<ref>) with respect to β, it is easy to show that this function is strictly decreasing in β if 4(d+2)^p + 3^p < 5(d+1)^p. (We give a proof in Lemma <ref>.) By the choice of d = 1.23p + 4.77, Lemma <ref> shows the inequality is satisfied. Therefore, (<ref>) is uniquely maximized at (t,β) = (1,0), which has value . This implies ρ_p(S ∪ A') <. §.§ Hardness for p ∈ (-3,0) As p < 0, we have max_S M_p(S) is equivalent to the problem of min_S ρ_p(S) since x^p is a decreasing function in x. We therefore focus on the problem of min_S ρ_p(S).[Note that, although it is not explicitly stated, we are optimizing over all sets S with no vertices of degree 0, for such sets have undefined density when p < 0.] The reductions for p < 0 are nearly identical to that of the case of p >0. For the reduction for the weighted case, we change the value of d from 1.23p + 4.77 to p/2 + 5. The value of d stays the same for the unweighted case. For both the weighted and unweighted case, we change the reduction by returning TRUE iff _G ≤ρ^*. These changes only minimally alter the analyses. The same reasoning in (<ref>) holds, so when the given instance contains an exact 3-cover, we have _G ≤ρ^*. Now suppose the given instance does not contain an exact 3-cover. As p < 0, we have (c + x)^p is a convex function in x. The same general outline for the proof given in (<ref>)-(<ref>) therefore holds for p < 0. In particular, for S ⊆ L and assuming 3·S = α·A, ρ_p(S ∪ A) = 3^p ·S + ∑_v ∈ A d_S∪ A(v)^p/S + A ≥3^p ·S + A· (d + 3 S/A)^p/S + A = 3^p ·α + 3· (d + α)^p/α +3, where the inequality now uses convexity instead of concavity. Our goal is now to choose d such that this function is minimized when α =1. As is the case with p ∈ (0,1), we use a stronger bound on ∑_v ∈ A d_S∪ A(v)^p. In particular, we note Lemma <ref> applies also when p < 0. With this lemma in hand and using this general outline, we can prove the following theorem. is NP-hard for p ∈ (-1/8, 0) and weighted is NP-hard for p ∈ (-3,0). The proof of Theorem <ref> is nearly identical to that of Theorem <ref> except for a few small differences, which is why we do not rewrite all of the details. We point out these small differences here. We first note that we need an analogous lemma to Lemma <ref> for the case of p < 0. Let f_1 : (-3, 0) → be defined f_1(p) = p/2 + 5 and let f_2 : (-1/8, 0)→ be defined f_2(p) = 5. For i ∈{1,2} and all p in the domain of f_i, we have 3^p + 3(f_i(p)+1)^p < 4f_i(p)^p and 4(f_i(p)+2)^p + 3^p > 5(f_i(p)+1)^p. We consider the following functions: 3^p + 3(d+1)^p - 4d^p and 4(d+2)^p + 3^p - 5(d+1)^p, where d=p/2+5 for the weighted case and d = 5 for the unweighted case. To prove Lemma <ref>, we need to show that (<ref>) is negative and (<ref>) is positive for both cases. For the weighted case, in one approach to prove (<ref>) is positive, we could first show the derivative only has one zero on the interval [-3,0] and that this critical point is a maximum. Since (<ref>) is non-negative at p=-3 and p=0, this would prove (<ref>) is positive. A more straightforward analysis for showing (<ref>) is negative is to argue that the derivative of (<ref>) is positive over the interval [-3,0], and therefore (<ref>) is strictly increasing over the interval. Since (<ref>) is 0 at p = 0, this would prove (<ref>) is negative over the interval. For the unweighted case, we could show (<ref>) is negative by showing that the single critical point on the interval (-1/8, 0) is a minimum and the function at the endpoints of the interval is non-positive. To show (<ref>) is positive, one could simply show the function is strictly decreasing on the interval (-1/8,0) and the function is non-negative at 0. As was the case for Lemma <ref>, the formal verification of these facts would be lengthy and not necessarily useful, so we again verified these inequalities analytically via MATLAB. We plot these functions over the respective domains in Figure <ref>. We also need an analogous lemma to Lemma <ref> for the case of p < 0. It is easy to argue that since x^p is now a strictly decreasing function in x, ρ_p(S ∪ A') ≥3^p ·S + ∑_v ∈ A' (d + d_S+v(v))^p/S + A'. The inequality is strict when A' < A. Proof sketch for Theorem <ref>. We are now ready to discuss the few differences in the proof of Theorem <ref> compared to the proof of Theorem <ref>. At the beginning of this section, we discussed the changes to the reduction for the case of p < 0. We also noted that a similar argument to that in Equation <ref> would hold for p < 0, showing that _G ≤ρ^* when the given instance (, ) contains an exact 3-cover. Now suppose the instance does not contain an exact 3-cover. We discuss how the three cases in the proof of Theorem <ref> would change for p < 0. Recall that we take arbitrary sets S ⊆ L and A' ⊆ A and the goal is to show that ρ_p(S ∪ A') > ρ^*. For case 1 (corresponding to 3S = A'), the only difference is that the function ∑_i=1^n (c+x_i)^p is convex instead of concave. The reasoning for this case still remains the same. Now we consider case 2 (corresponding to 3S = βA' for some β∈ [0,1)). There are two differences here. We first see that Lemma <ref> holds for p < 0, implying that ρ_p(S ∪ A') is at least the quantity in (<ref>). The second difference is in guaranteeing that the function in (<ref>) is strictly decreasing in β if and only if 3^p + 3(d+1)^p < 4d^p. This inequality holds because of Lemma <ref> (instead of Lemma <ref>). Now we consider case 3 (corresponding to 3S = αA' for α > 1). There are again two differences. Lemma <ref> holds for p < 0, so we have that ρ_p(S∪ A') is at least the quantity in (<ref>). Then we have that the function in (<ref>) is strictly increasing in β if 4(d+2)^p + 3^p > 5(d+1)^p. This holds because of Lemma <ref>. §.§ Discussion on extending hardness results to all p ∈ (-∞, 0) ∪ (0,1) Weighted graphs. Suppose one wants to prove that the weighted is NP-hard for some p ≤ -3. All one would have to do is define d in terms of p such that the inequalities in Lemma <ref> hold. For example, take p = -10. It would suffice to set d = 0.2p + 5 in order to prove the inequalities in Lemma <ref> hold for p ∈ [-10 - δ, -10 + δ] for some small δ. (It is important to note that this choice of d would not work for p ∈ [-3, -0.5], for example.) Once we have a definition of d that allows us to prove a version of Lemma <ref> for our new choice of p, the proof of Theorem <ref> will still work with this new version of Lemma <ref>. Unweighted graphs. In the unweighted case, we have less flexibility than we do for the weighted case. In this case, one potential approach to proving NP-hardness for values of p in (-∞, -1/8) ∪ (1/4, 1) is to reduce from ℓ for some ℓ > 3 instead of and carefully choose the value of d, the degree of the regular graph G[A]. For example, the analysis goes through as above for p = 1/2 if one reduces from 6 and sets d =11. The difficulty in this approach is being able to choose a value for d and ℓ for every given p. § EXPERIMENTS §.§ Background on Frank-Wolfe The Frank-Wolfe method is used to solve constrained convex optimization problems. Suppose we want to minimize a differentiable, convex function f over a convex set S. The Frank-Wolfe method critically relies on the ability to efficiently optimize linear functions over S. The algorithm proceeds as follows. We start with an arbitrary point x_0 ∈ S. We start iteration k by first solving y_k := _z ∈ S z^T ∇ f(x_k). Then we let x_k+1 = (1 - α_k) x_k + α_k y_k where α_k is the step size. The standard Frank-Wolfe method sets α_k = 2/k+2. We now discuss how we use the Frank-Wolfe method to solve . We use the idea from <cit.> where they take the same approach for solving 1. Harb et al. introduce the following convex program for a supermodular function f: minimize ∑_v ∈ V b_v^2 subject to b ∈ B_f where B_f = {x ∈^V | x ≥ 0; x(S) ≥ f(S), ∀ S ⊆ V; x(V) = f(V)}. Note that B_f is the base contrapolymatroid associated with f (see, e.g., <cit.>). <cit.> shows that if one could obtain an optimal solution b to (<ref>), then the vertices with the largest value in b will form an optimal solution to (i.e. max_S f(S)/S). We can then use the Frank-Wolfe method to obtain near-optimal solutions to (<ref>). For rounding a near-optimal solution b to (<ref>), we use the same heuristic approach that <cit.> and <cit.> used for 1, which is to sort the vertices in increasing order of b and output the suffix with the largest density. We address a couple of implementation aspects of using the Frank-Wolfe method to find solutions to (<ref>). We use Frank-Wolfe to optimize over the base contrapolymatroid associated with f_p, B_f_p. The Frank-Wolfe method requires an initial point in the given convex set B_f_p. In our experiments, we start with a simple point in B_f_p: for all v ∈ V, we set x_v = d_G(v)^p. We experimented with other starting points, such as those based on the ordering produced by , but this simple solution worked best and was much faster. Furthermore, we note that when applying Frank-Wolfe to solve the convex program (<ref>), the optimization problem at each iteration is essentially min_z ∈ B_f z^T x_k where x_k is our current solution. We can easily solve this optimization problem as algorithms for optimizing linear functions over the base contrapolymatroid are well-understood <cit.>. §.§ Full reporting of results We include the full table of results in Table <ref> for the experiments comparing and on all ten real-world graphs and all values of p ∈{1.05, 1.25, 1.5, 1.75, 2}. For , we report results for values of ∈{0.01, 0.1, 1}. We include results for the experiments comparing , , and Frank-Wolfe on all ten real-world graphs and all values of p ∈{1.05, 1.25, 1.5, 1.75, 2}. We include separate figures for each value of p in Figures <ref>, <ref>, <ref>, <ref>, <ref>. We include the full table of results in Table <ref> for the experiments for for p < 1. We consider p ∈{-1, -0.5, 0.25, 0.5, 0.75}.
http://arxiv.org/abs/2306.13111v1
20230621183934
Relationships between the Phase Retrieval Problem and Permutation Invariant Embeddings
[ "Radu Balan", "Efstratos Tsoukanis" ]
math.FA
[ "math.FA", "cs.IT", "math.IT" ]
Polynomial Logical Zonotopes: A Set Representation for Reachability Analysis of Logical Systems Radu Balan and Efstratos Tsoukanis Department of Mathematics, University of Maryland, College Park, MD 20742 Emails: [email protected] , [email protected] July 31, 2023 ==================================================================================================================================================================== This paper discusses the connection between the phase retrieval problem and permutation invariant embeddings. We show that the real phase retrieval problem for ^d/O(1) is equivalent to Euclidean embeddings of the quotient space ^2× d/S_2 performed by the sorting encoder introduced in an earlier work. In addition, this relationship provides us with inversion algorithms of the orbits induced by the group of permutation matrices. § INTRODUCTION The phase retrieval problem has a long and illustrious history involving several Nobel prizes along the way. The issue of reconstruction from magnitude of frame coefficients is related to a significant number of problems that appear in separate areas of science and engineering. Here is an incomplete list of some of these applications and reference papers: crystallography <cit.>; ptychography <cit.>; source separation and inverse problems <cit.>; optical data processing <cit.>; mutually unbiased bases <cit.>, quantum state tomography <cit.>; low-rank matrix completion problem <cit.>; tensor algebra and systems of multivariate polynomial equations <cit.>; signal generating models <cit.>, bandlimited functions <cit.>, radar ambiguity problem <cit.>, learning and scattering networks <cit.>. In <cit.>, this problem was shown to be a special form of the following setup. Let H denote a real or complex vector space and let A={a_i}_i∈ I be a frame for H. The phase retrieval problem asks whether the map H∋ x↦α_A(x)={|xa_i|}_i∋ I∈ l^2(I) determines x uniquely up to a unimodular scalar. In this paper we focus on the finite dimensional real case of this problem (see also <cit.>), namely when H=^d. In this case, a frame 𝒜={a_1,…,a_D}⊂^d is simply a spanning set. The group O(1)={-1,+1} acts on H by scalar multiplication. Let Ĥ=H/O(1) denote the quotient space induced by this action, where the equivalence classes (orbits) are [x]={x,-x}  ,  for x≠ 0  ,  [x]={0}  ,  for x=0. The analysis operator for this frame is T_A:H→^D   ,   T_A(x) = (xa_k)_k=1^D. The relevant nonlinear map α_A is given by taking the absolute value of entries of T_A: α_A: H→^D  ,  α_A(x)=(|xa_k|)_ k=1^ D. Notice α_A produces a well-defined map on Ĥ, which, with a slight abuse, but for simplicity of notation, will be denoted also by α_A. Thus α_A([x])=α_A(x). Another customary notation that is often employed: a frame is given either as an indexed set of vectors, 𝒜={a_1,…,a_D}, or through the columns of a d× D matrix A. The matrix notation is not canonical, but this is not an issue here. We always identify H=^d with its columns vector representation in its canonical basis. We say that (the columns of a matrix) A ∈^d × D form/is a phase retrievable frame, if α_A : ^d →^D, α_A(x)= (|xa_k|)_k=1^D is an injective map (on the quotient space). In a different line of works <cit.> it was recognized that the phase retrieval problem is a special case of Euclidean representations of metric spaces of orbits defined by certain unitary group actions on Hilbert spaces. Specifically, the setup is as follows. Let V denote a Hilbert space, and let G be a group acting unitarily on V. Let V̂=V/G denote the metric space of orbits, where the quotient space is induced by the equivalence relation x,y∈ V, x∼ y iff y=g.x, for some g∈ G. Here g.x represents the action of the group element g∈ G on vector x. For the purposes of this paper we specialize to the finite dimensional real case, V=^n× d and G=S_n, is the group of n× n permutation matrices acting on V by left multiplication. Other cases are discussed in aforementioned papers. In particular, in <cit.> the authors have shown a deep connection to graph deep learning problems. In <cit.>, the authors linked this framework to certain graph matching problems and more. The bi-Lipschitz Euclidean embedding problem for the finite dimensional case is as follows. Given V̂=V/G, construct a map β:V→^m so that, (i) β(g.x)=β(x) for all g∈ G, x∈ V, and (ii) for some 0<A≤ B<∞, and for all x,y∈ V, A ([x],[y]) ≤β(x)-β(y) ≤ B ([x],[y]) where ([x],[y])=inf_g∈ Gx-g.y_V is the natural metric on the quotient space V̂. In <cit.> the following embedding was introduced. Let A∈^d× D be a fixed matrix (termed as key) whose columns are denoted by a_1,…,a_D. The induced encoder β_A:V→^n× D is defined by β_A(X) = ↓(XA) = [ [ Π_1 X a_1 ⋯ Π_D X a_D ]] where Π_k∈ S_n is the permutation matrix that sorts in decreasing order the vector Xa_k. It was shown in <cit.> that, for D large enough, β_A provides a bi-Lipschitz Euclidean embedding of V̂. This motivates the following definition. We say that A∈^d× D is a universal key for ^n× d if β_A:^n× d→^n× D, β_A(X)=↓(XA) is an injective map (on the quotient space). The purpose of this paper is to show the equivalence between the real phase retrieval problem, specifically the embedding α_A, and the permutation invariant embedding β_A defined above, in the special case n=2. § MAIN RESULTS Recall the Hilbert spaces H=^d and V = ^2 × d. For A ∈^d × D recall also the encoders α_A:Ĥ→^D and β_A :V̂→^2 × D given respectively by α_A(x)=(|xa_k|)_k∈ [D], and β_A(X)=↓(XA). Our main result reads as follows. In the case n=2, the following are equivalent. * α_A is injective, hence the columns of A form a phase retrievable frame; * β_A is injective, hence A is a universal key. Perhaps it is not surprising that, if an equivalence between the phase retrieval problem and permutation invariant representations is possible, then this should occur for n=2. This statement is suggested by the observation that O(1) is isomorphic with S_2, the group of the 2× 2 permutation matrices. What is surprising that, in fact, the two embeddings are intimately related, as the proof and corollaries show. Let X∈ V=^2× d. Denote by x_1,x_2∈^d its two rows transposed, that is X = [ [ x_1^T; x_2^T ]]. Notice that, for each k∈ [D], the k^th column of β_A(X) is given by ↓ (Xa_k)=[ max(x_1a_k,x_2a_k); min(x_1a_k, x_2a_k) ]. The key observations are the following relationships between min, max, and the absolute value |·|: |u-v| = max(u,v) - min(u,v) u+v = max(u,v) + min(u,v) max(u,v) = 1/2 (u+v+|u-v|) min(u,v) = 1/2 (u+v-|u-v|) | |u|-|v| | = min(|u-v|,|u+v|) In particular, these show that: [ 1 -1; 1 1 ]β_A(X)= [ 1 -1; 1 1 ]·↓ (XA)= =[ |x_1-x_2a_1|, … ,|x_1-x2a_D|; x_1+x_2a_1 , … , x_1-x_2a_D ] = [ (α_A(x_1-x_2))^T; (T_A(x_1+x_2))^T ] Where, T_A was introduced in equation (<ref>). (1) → (2): Suppose that α_A is injective. Let X=[ x_1^T; x_2^T ] and Y=[ y_1^T; y_2^T ], such that β_A(X)=β_A(Y). Then [ 1 -1; 1 1 ]β_A(X)= [ 1 -1; 1 1 ]β_A(Y) [ (α_A(x_1-x_2))^T; T_A(x_1+x_2)^T ] = [ (α_A(y_1-y_2))^T; T_A(y_1+y_2)^T ]. But now, α_A(x_1-x_2)=α_A(y_1-y_2)) x_1-x_2=y_1-y_2 or x_1-x_2=y_2-y_1 and T_A(x_1+x_2)=T_A(y_1+y_2)^T x_1+x_2=y_1+y_2 Thus we have that {[ x_1 = y_1; x_2 = y_2 ]} or {[ x_1 = y_2; x_2 = y_1 ]} Either case means X=Y or X= [ 0 1; 1 0 ]Y [X] =[Y] So, β_A is injective. (2) → (1): Suppose that β_A is injective. Let x,y ∈^d such that α_A(x)=α_A(y), i.e. |xa_k|=|ya_k|, ∀ k ∈ [D]. Let X=[ x^T; -x^T ] and Y=[ y^T; -y^T ]. Then, [ 1 -1; 1 1 ]β_A(X)=[ α_A(2x)^T; T_A(0)^T ] = 2 [ α_A(2x)^T; 0 ] and [ 1 -1; 1 1 ]β_A(X)=[ α_A(2y)^T; T_A(0)^T ] = 2 [ α_A(2y)^T; 0 ] Thus β_A(X)= β_A(Y). Since β_A is assumed injective, it follows that X=Y or X= [ 0 1; 1 0 ]Y. So, x=y or x=-y. We conclude that [x]=[y], so α_A is injective. If β_A is injective, then D ≥ 2d-1. If D=2d-1, then β_A is injective if and only if A is a full spark frame. Both results follow necessary and sufficient conditions established in, e.g. <cit.>. Recall that a frame in ^d is said full spark if any subset of d vectors is linearly independent (hence basis). Assume D=2d-1. Note the embedding dimension for V̂=^2× d is m=2(2d-1)=4d-2=2 dim(V)-2. In particular this shows the minimal dimension of bi-Lipschitz Euclidean embeddings may be smaller than twice the intrinsic dimension of the Hilbert space where the group acts on. Both papers <cit.> and <cit.> present (bi)Lipschitz embeddings into ^2 dim(V). As was derived in the proof, α_A, β_A and T_A are intimately related: β_A ( [ [ x_1^T; x_2^T ]] ) = 1/2[ [ 1 1; -1 1 ]] [ [ α_A(x_1-x_2)^T; T_A(x_1+x_2)^T ]] In particular, any algorithm for solving the phase retrieval problem solves also the inversion problem for β_A. Let ω_A:^D→^d denote a left inverse of α_A on the metric space ^d. This means ω_A(α_A(x))∼ x in ^d/O(1). Denote by T_A^† a left inverse of the analysis operator (e.g., the synthesis operator associated to the canonical dual frame). Thus T_A^†T_A=I_d. Then an inverse for β_A is: β_A^-1(Y) = 1/2[ [ T_A^† (y_2) + ω_A(y_1); T_A^† (y_2) - ω_A(y_1) ]] where Y=[ [ y_1^T; y_2^T ]]. Equations (<ref>) suggest a lower dimensional embedding than β_A. Specifically, first we compute the average y_1 = 1/2(x_1+x_2) which is of size ^d, and then encode the difference x_1-x_2 using α_A, y_2=α_A(x_1-x_2). We obtain the following modified encoder, β̃_A:^2× d→^d+D: β̃_A(x)=[ [ 1/2(x_1+x_2)^T α_A(x_1-x_2)^T ]]. With the ω_A left inverse of α_A, the inverse of β̃_A is given by: β̃_A^-1(Y) = [ [ y_1 + 1/2ω_A(y_2); y_1 - 1/2ω_A(y_2) ]] where y_1=Y(1:d) and y_2=Y(d+1:d+D). In the case when D=D_min=2d-1, the minimal embedding dimension is m=d+D=3d-1 (instead of 4d-2 or 4d=2 dim(V)). Reference <cit.> shows that an upper Lipschitz bound for embedding β_A is σ_1(A), where σ_1(A) is the largest singular value of A. Same reference shows that if β_A is injective then there is also a strictly positive lower Lipschitz bound without providing a formula. Using <Ref> we provide explicit estimates of these bounds. Assume A∈^d× D is a universal key for ^2× d (i.e., β_A:^2× d→^2× D is injective), or, equivalently (according to Theorem <ref>), the columns of A form a phase retrievable frame in ^d (i.e., α_A : ^d→^D is injective). Then both α_A and β_A are bi-Lipschitz with same Lipschitz constants, where distances are given by ([x],[y])= min(x-y,x+y) on Ĥ, and ([X],[Y])= min_P ∈ S_2X-PY on V̂, respectively. The optimal upper and lower Lipschitz constants are given by: A_0 = min_I⊂[D]√(σ_d^2(A[I]) + σ_d^2(A[I^c]))  ,  B_0=σ_1(A) where σ_1(A) is the largest singular value of A (equals the square-root of upper frame bound) and σ_d(A[J]) is the d^th singular value of submatrix of A indexed by J. Furthermore, these bounds are achieved by the following vectors. Let I_0 denote a optimal partition in (<ref>) and let u_1, u_2 denote the normalized left singular vectors of A[I_0] and A[I_0^c], respectively, each associated to the d^th singular value. Let u be the normalized principal left singular vector associated to A (i.e., associated to the largest singular value). Then: * The upper Lipschitz constant B_0 is achieved as follows: (i) for map α_A by vectors x_max=u and y_max=0; (ii) for map β_A by vectors X_max=[ [ u^T; 0 ]] and Y_max=0. * The lower Lipschitz constant A_0 is achieved as follows: (i) for map α_A by vectors x_min=u_1+u_2 and y_min=u_1-u_2; (ii) for map β_A by vectors X_min=[ [ (u_1 + u_2)^T; 0 ]] and Y_min=[ [ u_1^T; u_2^T ]]. The optimal Lipschitz constants for the map α_A were obtained in <cit.>, including the optimizers. However, for reader's convenience, we prefer to give direct proofs of these results. * Upper Lipschitz constants. (i) Let x,y ∈^d. Then α_A(x)-α_A(y)^2= ∑_i=1^D||a_ix|-|a_iy||^2 = = ∑_i=1^D min(|a_ix-y|^2,|a_ix+y|^2) ≤ ≤min(∑_i=1^D |a_ix-y|^2,∑_i=1^D |a_ix+y|^2 ) ≤σ_1^2(A) ([x],[y])^2. So σ_1(A) is an upper Lipschitz bound for the map α_A. Now for x_max=u,y_max=0 notice that α(x_max)-α(y_max)^2= ∑_i=1^D|a_iu|^2 = =σ_1^2(A) u^2 =σ_1^2(A) ([x_max],[y_max])^2. Thus, the upper Lipschitz constant σ_1(A) is in fact optimal (tight). (ii) Map β_A. Let X,Y ∈^2 × D and P_0 ∈ S_2 be a permutation that achieves the distance between X and Y, i.e. X-P_0Y= ([X],[Y]). Note that β_A(X)-β_A(Y)^2 = ∑_k=1^D (Π_k X -Ξ_kY)a_k^2= =∑_k=1^D (Ξ_k^TΠ_k X -Y)a_k^2 for some Π_k,Ξ_k ∈ S_2 that align the vectors. From rearrangement lemma we have that (Π_k X -Ξ_kY)a_k≤(X-P_0Y)a_k, ∀ k ∈ [D] so, ∑_k=1^D (Ξ_k^TΠ_k X -Y)a_k^2 ≤ A^2X-P_0Y^2 = σ_1^2(A) ([X],[Y])^2. Therefore, we conclude that σ_1(A) is an upper Lipschitz constant for map β_A. We still need to show that this bound is achieved (i.e., it is optimal). For X_max and Y_max defined in part 1) of <ref>, β_A(X_max)-β_A(Y_max)^2= β_A(X_max)^2 =∑_k=1^D ua_k^2 = σ_1^2(A). and (X_max,Y_max)=1. Thus B_0 is the optimal Lipschitz constant both for α_A and for β_A. * Lower Lipschitz constants. (i) Let x,y∈^d and define the auxiliary set S = S(x,y) := {j∈ [D] :| x-ya_j| ≤ |x+ya_j| } Then α(x)-α(y)^2= ∑_i=1^D| |a_ix|-|a_iy| |^2 = = ∑_i ∈ S|a_ix-y|^2+∑_i ∈ S^c|a_ix+y|^2 ≥ σ_d^2(A[S])+σ_d^2(A[S^c]) ([x],[y])^2 ≥ A_0^2 ([x],[y])^2 . So A_0 is a lower Lipschitz bound for α_A, but we still need to show that it is optimal. Let I_0 be the optimal partition, and let u_1, u_2 be normalized left singular vectors as in the statement of <Ref>. Then: α_A(u_1+u_2)-α_A(u_1-u_2)^2= = ∑_i=1^D||a_iu_1+u_2|-|a_iu_1-u_2||^2 = = ∑_i=1^D min( |a_i2u_2|^2,|a_i2u_1|^2 ) ≤ ≤ 4 ( ∑_i∈ I_0 |a_iu_1|^2 + ∑_i∈ I_0^c |a_iu_2|^2 ) =4 (σ_d^2(A[I_0]) + σ_d^2(A[I_0^c])) = A_0^2 ([u_1+u_2],[u_1-u_2])^2, where we used again that | |a|-|b| |=min(|a-b|,|a+b|) for any two real numbers a,b∈, and, for the inequality, at every i∈[D] we made a choice between the two terms. Since the reverse inequality is also true, it follows that x_min=u_1+u_2 and y_min=u_1-u_2 achieve the lower bound A_0 for α_A. (ii) Consider now the map β_A. Let X,Y∈^2× d and define the auxiliary set S = S(X,Y) := { j∈ [D] :| x_1-x_2-y_1+y_2a_j| ≤ ≤ |x_1-x_2+y_1-y_2a_j| } Then, using <Ref> we have that β_A(X)-β_A(Y)^2= 1/2(α_A(x_1-x_2)-α_A(y_1-y_2)^2+T_A(x_1+x_2-y_1-y_2)^2) = 1/2∑_j ∈ S|x_1-x_2 - y_1+y_2a_j|^2 +|x_1+x_2 - y_1-y_2a_j|^2 + +1/2∑_j∈ S^c |x_1-x_2 + y_1-y_2a_j|^2+ |x_1+x_2 -y_1-y_2a_j|^2 = = ∑_j∈ S |x_1-y_1a_j|^2 + |x_2-y_2a_j|^2 + + ∑_j∈ S^c |x_1-y_2a_j|^2 + |x_2-y_1a_j|^2 ≥ ≥σ_d^2(A[S])(x_1-y_1^2+x_2-y_2^2) + + σ_d^2(A[S^c]) (x_1-y_2^2+x_2-y_1^2) ≥ ≥ A_0^2 ([X],[Y])^2. Therefore A_0 is a lower Lipschitz constant for β_A. It remained to prove that this bound is tight, i.e., it is achieved. Let X_min and Y_min be as in the statement of <Ref>. Then β_A(X_min)-β_A(Y_min)^2= 1/2(α_A(u_1+u_2)-α_A(u_1-u_2)^2+T_A(u_1+u_2-u_1-u_2)^2) = 1/2(α_A(u_1+u_2)-α_A(u_1-u_2)^2 ) = A_0^2 ([X_min],[Y_min])^2 where the last equality follows from the fact that the lower Lipschitz constant of α_A is achieved by u_1+u_2 and u_1-u_2, and the fact that ([X_min],[Y_min])^2=2. So A_0 is indeed the optimal lower Lipschitz constant for β_A. § CONCLUSION In this paper we analyzed two representation problems, one arising in the phase retrieval problem and the other one in the context of permutation invariant representations. We showed that the real phase retrieval problem in a finite dimensional vector space H is entirely equivalent to the permutation invariant representations for the space V=^2× dim(H). Our analysis proved that phase retrievability is equivalent to the universal key property in the case of encoding 2× d matrices. This result is derived based on the lattice space structure (,+,min,max). It is still an open problem to understand the relationship between α_A and β_A in the case n>2. A related problem is the implementation of the sorting operator using a neural network that has ReLU as activation function (or, even the absolute value |·|). Efficient implementations of such operator may yield novel relationships between α_A and β_A, in the case n≥ 3. § ACKNOWLEDGMENT The authors have been supported in part by a NSF award under grant DMS-2108900 and by the Simons Foundation. IEEEtran.bst
http://arxiv.org/abs/2306.02524v1
20230605011838
Kinodynamic FMT* with Dimensionality Reduction Heuristics and Neural Network Controllers
[ "Dongliang Zheng", "Panagiotis Tsiotras" ]
cs.RO
[ "cs.RO" ]
Experimentally Realizable Continuous-variable Quantum Neural Networks George Siopsis July 31, 2023 ===================================================================== This paper proposes a new sampling-based kinodynamic motion planning algorithm, called FMT*PFF, for nonlinear systems. It exploits the novel idea of dimensionality reduction using partial-final-state-free (PFF) optimal controllers. With the proposed dimensionality reduction heuristic, the search space is restricted within a subspace, thus faster convergence is achieved compared to a regular kinodynamic FMT*. The dimensionality reduction heuristic can be viewed as a sampling strategy and asymptotic optimality is preserved when combined with uniform full-state sampling. Another feature of FMT*PFF is the ability to deal with a steering function with inexact steering, which is vital when using learning-based steering functions. Learning-based methods allow us to solve the steering problem for nonlinear systems efficiently. However, learning-based methods often fail to reach the exact goal state. For nonlinear systems, we train a neural network controller using supervised learning to generate the steering commands. We show that FMT*PFF with a learning-based steering function is efficient and generates dynamically feasible motion plans. We compare our algorithm with previous algorithms and show superior performance in various simulations. § INTRODUCTION Motion planning, as a fundamental component of robot autonomy, has been studied extensively in the last three decades to increase its efficiency and capability. Efficiency means faster convergence to better solutions. Efficient planning algorithms are crucial for robots with limited computation power and for replanning in changing environments. Capability means dealing with more complicated planning problems. High-dimensional state space, nonlinear system dynamics, and cluttered environments with nonconvex obstacles still pose great challenges for efficient kinodynamic planning despite recent advances. Sampling-based motion planning algorithms, such as PRM <cit.> and RRT <cit.>, have been developed to solve planning problems in high-dimensional continuous state spaces by incrementally building a graph/tree through the search space. The optimal sampling-based planning algorithm RRT* <cit.> almost surely converges asymptotically to the optimal solution. RRT* is well-suited for planning in high-dimensional spaces and obstacle-rich environments. Many applications of RRT* have been studied in recent years <cit.>. For motion planning for dynamical systems, sampling-based optimal kinodynamic planning algorithms (SBKMP) such as Kinodynamic RRT* <cit.> and Kinodynamic FMT* <cit.> have been developed to consider differential constraints. SBKMP requires any two points sampled in the planning space to be connected with an optimal trajectory. For robots with differential constraints, the optimal trajectory between two states is obtained by solving a two-point boundary value problem (TPBVP), which is a non-trivial undertaking for complex nonlinear systems. The solution to this local TPBVP is also referred to as the steering function. Simulation-based methods such as SST <cit.> avoid solving the TPBVP by using random control sampling and simulation. However, without the local optimal edges (trajectories) provided by the steering function, the convergence of SST to a good solution is slow. Solving TPBVPs efficiently for nonlinear systems is one of the bottlenecks of kinodynamic RRT*. Thus, researchers have looked into more efficient ways to solve these TPBVPs. A steering function based on LQR is used in <cit.>. A fixed-final-state free-final-time controller for linear systems that optimally connects any pair of states is introduced in <cit.>. While an analytical solution of the TBPVP for linear systems is available, considering general nonlinear system dynamics is difficult. Learning-based methods have the potential for solving TPBVP efficiently. To deal with nonlinear dynamics, steering functions based on supervised learning and reinforcement learning are developed in <cit.> and <cit.> respectively, and integrated with a kinodynamic RRT* algorithm. In <cit.>, a goal-conditioned state-feedback neural network controller for nonlinear systems is trained and used for solving the TPBVPs in RRT*. Another limitation of RRT* is the slow convergence rate of the solution to the optimal one, which is especially evident for the kinodynamic planning case where the sampling space is not just the configuration space but the full state space. Heuristic and informed sampling methods have been developed to improve the convergence rate. Informed RRT* <cit.> focuses sampling to an informed subset that could potentially provide a better solution. Exploiting the benefit of ordered search, FMT* <cit.> and BIT* <cit.> are shown to find better solutions faster than RRT*. However, most of these methods only consider the geometric planning problem. Existing work on heuristics for improving the convergence of kinodynamic motion planning is rather limited <cit.>. In our previous work <cit.>, the Kino-RRT* with a dimensionality reduction heuristic is developed, in which the heuristic is obtained by solving a partial final-state free (PFF) optimal control problem. Instead of sampling the full state space, Kino-RRT* only samples part of the state space while the rest of the states are selected by the PFF optimal controller. By sampling in the reduced state space and utilizing the PFF optimal controller, Kino-RRT* shows faster convergence. An analytical solution for the PFF optimal control problem for linear systems is also derived in <cit.>. In this paper, we propose the FMT*PFF, which is built on our previous works <cit.> and <cit.>. We extend the dimensionality reduction heuristic to learning-based planners and train neural network controllers to solve the PFF optimal control problem. Compared to <cit.>, the dimensionality reduction heuristic is used in FMT*PFF to improve the convergence rate of the algorithm. Also, training a PFF neural network controller is simpler than the set-to-set controller in <cit.>. Compared to <cit.>, solving PFF using supervised learning allows us to deal with nonlinear system dynamics efficiently. Furthermore, while <cit.> and <cit.> are based on the RRT* algorithm, FMT*PFF is based on the FMT* algorithm to benefit from ordered search. The contributions of the paper are: * A dimensionality reduction heuristic for accelerating sampling-based kinodynamic planning for nonlinear systems. * A neural network controller for solving the partial final-state free (PFF) optimal control problem. The proposed PFF neural network controller is used as the steering function in sampling-based kinodynamic motion planning algorithms. * The FMT*PFF algorithm is developed for planning with learning-based steering functions that cannot achieve exact steering. * Extensive simulations and comparison with previous methods show better performance with our method. § PRELIMINARIES §.§ Problem Statement The optimal kinodynamic motion planning problem is given by the following optimal control problem (OCP), min_u, t_f J = ∫_0^t_f c(x,u) dτ, s.t. ẋ = f(x,u), x(0) =x_s, x(t_f)=x_g, u ∈𝒰, x ∈𝒳_free, ∀ t∈ [0, t_f]. where x ∈𝒳⊂^n_x is the state, u ∈𝒰⊂^n_u is the control input, x_s and x_g are the initial state and goal state, respectively. The free space is denoted by 𝒳_free⊂𝒳, where at each x ∈𝒳_free, the system does not collide with any obstacles in the environment. Finally, c(x, u) is a cost function that we aim to minimize. The goal of the optimal kinodynamic motion planning problem is to find a control trajectory u(t), t ∈ [0,t_f], such that the solution state trajectory x(t) is obstacle-free, reaches the goal state, and minimizes a cost. §.§ Partial-Final-State-Free Optimal Controller Sampling-based motion planning algorithms such as RRT* and FMT* solve the problem (<ref>)-(<ref>) by growing a tree. The state space is approximated by random samples. The transition between samples is achieved using optimal steering functions. The sampled nodes and the connections between nodes define a graph. A trajectory tree is obtained by searching over this graph. In kinodynamic RRT* and FMT*, the edge between two sampled states, x_a and x_b, is constructed using a steering function which is the solution of a TPBVP given by min_u, t_f J = ∫_0^t_f c(x,u) dτ, s.t. ẋ = f(x,u), x(0) =x_a, x(t_f)=x_b, u ∈𝒰, ∀ t∈ [0, t_f]. Note that the obstacle constraint in (<ref>) is removed in TPBVP. In our proposed FMT*PFF, instead of sampling the full state x from the full state space 𝒳, we sample a partial state x̅ from a state space of reduced dimensionality 𝒳̅. Let x = [x_1^⊤ x_2^⊤]^⊤, where x_1 ∈ℝ^n_1, x_2 ∈ℝ^n_2, and n_1 + n_2 = n_x. We introduce the partial-final-state-free (PFF) optimal control problem as follows min_u, t_f J = ∫_0^t_f c(x,u) dτ, s.t. ẋ = f(x,u), x(0) =x_a, x_1(t_f)=x̅_c, u ∈𝒰, ∀ t∈ [0, t_f]. Compared to (<ref>), instead of fixing the state x(), only x_1() is fixed and x_2() is free in (<ref>). Note that, after solving the problem (<ref>)-(<ref>), we obtain the full state trajectory which includes the full final state x(t_f). Thus, the PFF optimal controller chooses the remaining free final state to minimize the cost. If we set this full final state x(t_f) as the terminal state in the terminal constraint in problem (<ref>)-(<ref>) and solve the problem (<ref>)-(<ref>), we will get the same trajectory as in problem (<ref>)-(<ref>). Therefore, the partial state sampling and the PFF optimal controller work as an intelligent heuristic for state-space sampling. For linear systems and quadratic cost functions, the analytical solution of the problem (<ref>)-(<ref>) is derived in <cit.>. For general nonlinear systems and cost functions, solving (<ref>)-(<ref>) efficiently is the bottleneck for sampling-based kinodynamic planning algorithms. Methods that integrate a numerical solver within a sampling-based planner to solve (<ref>)-(<ref>) have been studied <cit.>. However, the long computation time makes them unacceptable for practical use. In this paper, we use the PFF optimal controller as the steering function. We train a neural network controller to solve the PFF optimal control problem. We show that the proposed FMT*PFF algorithm with a neural network controller is efficient and generates dynamically feasible motion plans. § THE FMT*PFF ALGORITHM The main differences between FMT*PFF and the original kinodynamic FMT* are: 1) state sampling in a reduced state space; 2) use of the PFF optimal controller as the steering function. In our case, the PFF optimal controller is approximated using a neural network. .5em .5em The FMT*PFF algorithm is given by Algorithm <ref> and a graphical illustration is given in Figure <ref>. We use x̅ to represent a partial state in the reduced state space and X̅ to represent the partial state set. We use x to represent a (full)) state in the (full) state space, and X to represent the set of (full) states. Some primitive procedures are given as follows. Sampling: The sampling procedure 𝖲𝖺𝗆𝗉𝗅𝖾𝖯𝖥𝖥(m) randomly samples m partial states in the reduced state space. The sampled partial states are collision-free in the corresponding reduced state space. For example, for a robot whose state space includes the position space and the velocity space, 𝖲𝖺𝗆𝗉𝗅𝖾𝖯𝖥𝖥 samples positions of the robot that are collision-free. Near Nodes: The function 𝖭𝖾𝖺𝗋(x,X̅) returns all the partial states in X̅ that are contained in a ball of radius r centered at x. The function 𝖭𝖾𝖺𝗋(X,x̅) returns all the states in X that are contained in a ball of radius r centered at x̅. One simple implementation of the distance function is the Euclidean distance in the partial state space. Collision Checking: The function 𝖢𝗈𝗅𝗅𝗂𝗌𝗂𝗈𝗇𝖥𝗋𝖾𝖾(τ) takes a trajectory τ (an edge segment) as an input and returns true if and only if τ lies entirely in the collision-free space. Cost: The procedure 𝖢𝗈𝗌𝗍(x) returns the cost-to-come from the root node to x. Segment Cost: The procedure 𝖲𝖾𝗀𝖢𝗈𝗌𝗍(x_i,x̅_j) returns the cost to go from x_i to x̅_j. This cost is obtained by solving the PFF optimal control problem with boundary conditions x_i and x̅_j. We can train a neural network to predict this edge cost. Steering: The procedure 𝖲𝗍𝖾𝖾𝗋𝖯𝖥𝖥(x_i,x̅_j) solves the TPBVP using the PFF optimal controller, and it returns a trajectory τ that starts from x_i and ends at x̅_j. In Algorithm <ref>, every vertex v is associated with a state v.x and the corresponding partial state v.x̅. We initialize the tree in Line 1-2. In Line 3, the m partial states and the goal are added to the unvisited set X̅_unvisited, which is a set of partial states that have not been added to the tree. X_open is the set of states that have already been added to the tree. They are the frontier nodes of the tree that will be extended next. x_c is the state in X_open that has the minimum cost-to-come (Line 22) and x̅_c is the partial state associated with x_c. These are initialized in Line 5. If x̅_c = x̅_g, the solution has been found. Otherwise, we try to extend the tree. In Line 8, the neighboring partial states of x_c in X̅_unvisited, X̅_near, are found. For every x̅_near in X̅_near, we try to find its best parent in X_open (Line 9-18). In Line 10, the neighboring state of x̅_near in X_open, X_near, are found. The best parent for x̅_near that results in the minimum cost-to-come for x̅_near is obtained in Line 11. The trajectory τ and the full state x_new are obtained in Line 12 using the PFF steering function. If τ is collision-free, the new vertex v and new edge are added to the tree, and the sets X̅_unvisited and X_open,new are updated (Line 13-18). v(x_parent) denotes the vertex associated with x_parent and v(x_c).x̅ denotes the partial state associated with x_c. §.§ A Neural Network Controller for PFF Optimal Control The proposed FMT*PFF can directly work with the analytical solution of the PFF optimal control problem, a numerical solver for the PFF optimal control problem, and a learning-based steering function for the PFF optimal control problem. In this section, we describe training the neural network controller for PFF optimal control. We first generate the training data. For a dynamical system, we use numerical optimization solvers <cit.> to solve the PFF optimal control problem given by (<ref>)-(<ref>) offline. Since we are interested in steering the system from different initial states to the partial final state, we generate optimal trajectories with different initial states. The initial states are uniformly sampled from an initial state set. We choose the partial final state to be the position, and let the remaining state (heading angle, velocity, etc.) be free. We utilize the translational invariant of the trajectories. If the goal position is not the origin, we can translate the goal position to the origin and translate the position of the starting state accordingly. After solving the optimal control problem with the translated boundary condition, we can get the trajectory of the original problem by translating the solution trajectory back. Therefore, we choose the goal position to be at the origin (zero). Translational invariance regarding the position is common for systems such as double integrator, car, and UAVs. Note that we only need to steer the system to nearby states given by the 𝖭𝖾𝖺𝗋 function in FMT*PFF. Thus, a 'local' PFF optimal controller is sufficient. This makes the learning-based controller using a neural network well-suited for this task since we can only generate finite training data. When using the neural network for prediction, it is important to make sure we are using it for interpolation instead of extrapolation. Extrapolation will diminish the prediction accuracy. For this purpose, the offline training data should cover the neighborhood used in the 𝖭𝖾𝖺𝗋 function. Each trajectory returned by the numerical solver contains a sequence of control inputs and a sequence of states indexed by time. We combine all the data points from all offline trajectories to form the final training dataset. Each data point is a tuple (x_i,u_i). The goal of the neural network is to mimic the structure of the optimal controller. The neural network controller, 𝖭𝖭𝖢𝗈𝗇𝗍𝗋𝗈𝗅𝗅𝖾𝗋: x_i → u_i, is a state feedback controller mapping from the current state x_i to the current control u_i to be applied. After training the neural network controller, it is used for online control and online trajectory generation. The 𝖲𝗍𝖾𝖾𝗋𝖯𝖥𝖥 in Algorithm <ref> is obtained by applying the neural network controller to get a trajectory. Given a novel initial state, which should be inside the initial state set used in training data generation, we repetitively apply the neural network control commands and simulate the system dynamics to obtain the trajectory that steers the system to the goal state, where the goal position is the origin. We also train a cost-to-go neural network that predicts the segment cost, 𝖲𝖾𝗀𝖢𝗈𝗌𝗍: x_1 → J, where x_1 is the initial state, J is the cost of the trajectory from x_1 to the goal returned by the numerical solver. § EMPIRICAL EVALUATION §.§ 2D Double Integrator We first compare FMT*PFF with the kinodynamic FMT* <cit.> and the Kino-RRT* <cit.>. For this purpose, we will consider linear systems and use the analytical solution for (<ref>)-(<ref>) and (<ref>)-(<ref>). The difference between FMT*PFF and the kinodynamic FMT* is that kinodynamic FMT* samples the full state-space and solves (<ref>)-(<ref>) for steering functions while FMF*PFF samples in the reduced state-space and solves (<ref>)-(<ref>) for steering functions. For comparison, we set the tuning parameters of the algorithms, such as the neighborhood radius r, to be the same. Both Kino-RRT* and FMT*PFF use the PFF optimal control, but they are based on RRT* and FMT*, respectively. The state of the 2D double integrator is given by x = [p^⊤ v^⊤]^⊤, where p = [x_1 x_2]^⊤ is the position and v = [x_3 x_4]^⊤ is the velocity. The control input is acceleration. The system dynamics is given by ẋ = A x + B u, where A = [ 0 I_2; 0 0 ], B = [ 0; I_2 ]. The cost function c(x,u) = 1 + u^⊤ R u, where R = I_2. The position is uniformly sampled within the boundary of the environment. The free final state of the PFF controller is the velocity. Thus, FMT*PFF only samples the position space. For the kinodynamic FMT* algorithm, the velocity is uniformly sampled in v ∈ [-2, 2]^2 m/s^2. Note that a larger interval for the velocity essentially requires searching in a larger state space, which will result in slower convergence. However, if the sampling velocity interval is too small, the search is confined to a small state space that may not contain the optimal solution. The planning results of the FMT*PFF algorithm and the Kinodynamic FMT* algorithm are given in Figure <ref>. By sampling in the reduced state space and using a PFF optimal controller, FMT*PFF reduced the dimensionality of the planning problem. Thus, FMT*PPF finds a better trajectory from the beginning and continues to find better solutions given the same amount of planning time. For Kinodynamic FMT*, the probability of sampling good velocities to decrease the cost is low. The solution cost vs planning time comparison is given in Figure <ref>. We can see from Figure <ref>, FMT*PFF also has better convergence performance compared to Kino-RRT*. This is because FMT*PFF uses ordered search while Kino-RRT* uses unordered random search. One drawback of FMT*PFF is that it is not an anytime algorithm. Still, the benefit of FMT*PFF becomes more clear when planning with learning-based steering functions that cannot achieve exact steering, since FMT*PFF does not use a rewiring procedure. §.§ A Simple Car Model The kinematic car model is given by ẋ = vcos(θ), ẏ = vsin(θ), θ̇ = u_1, v̇ = u_2, where (x, y) is the position, θ is the heading angle, v is the speed, u_1 and u_2 are the control inputs. We first generate offline training data by solving PFF optimal control problems using numerical optimization solvers. The offline trajectory examples used for training are shown in Figure <ref>. We sample initial states from an initial state set. The sampling intervals of x, y, θ, and v are x ∈ [-4, 4] m, y ∈ [-4, 4] m, θ∈ [-π, π] rad, and v ∈ [-2, 2] m/s, respectively. The reduced state space is the position space and θ and v are free states. Thus, the goal state is (x, y, θ, v) = [0 0 free free]. For this example, 10,400 trajectories are generated. State-action pairs from the trajectories were used for neural network training. After training the neural network, we tested the neural network controller for randomly sampled initial states. Some example trajectories obtained using the neural network controller are shown in Figure <ref>. Since the goal of the neural network controller is to steer the system to reach the origin, one index is the error between the end position of the trajectories and the origin. 1000 trajectories corresponding to sampled novel initial states are generated using the neural network controller. 98% of the resulting trajectory reached a 0.3 neighborhood of the origin. If this end position error is too large (greater than 0.3), the connection is not successful, and this edge will not be added to the tree. Note that FMT*PFF does not require exact steering, which makes it suitable for learning-based steering functions. The final trajectory obtained from the FMT*PFF algorithm is smooth and satisfies the differential constraint, while the trajectory from <cit.> may have small gaps between edges. The planning results of FMT*PFF using the neural network controller are shown in Figure <ref> and Figure <ref>. Figure <ref> shows the trees with a different number of samples and Figure <ref> gives the cost vs time performance. Finally, we use FMT*PFF to plan trajectories for the car model in various environment settings. We randomly sample the number, size, and location of the obstacles. We also vary the starting point and goal point of the car. The planning results are given in Figure <ref>. The FMT*PFF algorithm finds dynamically feasible trajectories using the neural network controller. § CONCLUSION We propose the FMT*PFF algorithm for optimal kinodynamic motion planning for nonlinear systems. The key idea is the use of a partial-final-state-free (PFF) optimal controller to reduce dimensionality and accelerate kinodynamic motion planning is introduced. FMT*PFF planning in the reduced state space and has faster convergence. By training a neural network model of the PFF optimal controller, FMT*PFF can plan trajectories for nonlinear systems in cluttered environments with nonconvex obstacles. FMT*PFF can deal with learning-based steering functions that can not achieve exact steering because no rewire is needed. We show that FMT*PFF is efficient and generates dynamically feasible motion plans. Through numerical simulations and comparison with previous works, FMT*PFF are shown to have better cost-time performance.
http://arxiv.org/abs/2306.02826v1
20230605122246
Near-Optimal Quantum Coreset Construction Algorithms for Clustering
[ "Yecheng Xue", "Xiaoyu Chen", "Tongyang Li", "Shaofeng H. -C. Jiang" ]
quant-ph
[ "quant-ph", "cs.AI", "cs.DS", "cs.LG", "stat.ML" ]
[ Near-Optimal Quantum Coreset Construction Algorithms for Clustering equal* Yecheng Xueequal,cfcs Xiaoyu Chenequal,eecs Tongyang Licfcs Shaofeng H.-C. Jiangcfcs cfcsCenter on Frontiers of Computing Studies, Peking University, Beijing, China eecsSchool of Electronics Engineering and Computer Science, Peking University, Beijing, China Shaofeng H.-C. [email protected] Tongyang [email protected] Machine Learning, ICML 0.3in ] k-Clustering in ℝ^d (e.g., k-median and k-means) is a fundamental machine learning problem. While near-linear time approximation algorithms were known in the classical setting for a dataset with cardinality n, it remains open to find sublinear-time quantum algorithms. We give quantum algorithms that find coresets for k-clustering in ℝ^d with Õ(√(nk)d^3/2) query complexity. Our coreset reduces the input size from n to poly(kϵ^-1d), so that existing α-approximation algorithms for clustering can run on top of it and yield (1 + ϵ)α-approximation. This eventually yields a quadratic speedup for various k-clustering approximation algorithms. We complement our algorithm with a nearly matching lower bound, that any quantum algorithm must make Ω(√(nk)) queries in order to achieve even O(1)-approximation for k-clustering. § INTRODUCTION Clustering is a fundamental machine learning task that has been extensively studied in areas including computer science and operations research. A typical clustering problem is k-median clustering in ℝ^d. In k-median clustering, we are given a set of data points D ⊆ℝ^d and an integer parameter k, and the goal is to find a set C ⊂ℝ^d of k points, called the center set, such that the following objective is minimized: (D, C) := ∑_x ∈ D(x, C), here (x, y) := x - y_2, (x, C) := min_c ∈ C(x, c). In the classical setting, k-median clustering is shown to be NP-hard <cit.> even in the planar case (i.e., Euclidean ℝ^2), and polynomial time approximation algorithms were the focus of study. Furthermore, even allowing O(1)-approximation, a fundamental barrier is still that the algorithm must make Ω(nk) accesses to the distances between the n input points <cit.>. In this paper, we study quantum algorithm and complexity for k-median clustering (and more generally, (k, z)-clustering as in kzc). This is motivated by quantum algorithms for various related data analysis problems, such as classification <cit.>, nearest neighbor search <cit.>, support vector machine <cit.>, etc. Many of these quantum algorithms are originated from the Grover algorithm <cit.>, which can find an item in a data set of cardinality n in time O(√(n)), a quadratic quantum speedup compared to the classical counterpart. Hence, a natural question is whether quantum algorithms can break the aforementioned classical Ω(nk) lower bound for k-median clustering, with a practical goal of achieving complexity O(√(nk)), while still achieving O(1) or even (1 + ϵ)-approximation. Coresets To this end, we consider constructing coresets <cit.> by quantum algorithms. The coreset is a powerful technique for dealing with clustering problems. Roughly speaking, an ϵ-coreset is a tiny proxy of the potentially huge data set, such that the clustering cost on any center set is preserved within ϵ relative error. For k-median, an ϵ-coreset of size (kϵ^-1) has been known to exist (see e.g., ), which is independent of both the dimension d and size n of the data set. Once such a coreset is constructed, one can approximate k-means efficiently by plugging in existing approximation algorithms (so that the input size is reduced to the size of the coreset which is only (k ϵ^-1)). In addition, coresets can also be applied to clustering algorithms in sublinear settings such as streaming <cit.>, distributed computing <cit.>, and dynamic algorithms <cit.>. Contributions We propose the first quantum algorithm for constructing coresets for k-median that runs in Õ(√(nk)) time,[In this paper, we use Õ to omit poly-logarithmic terms in O.] which breaks the fundamental linear barrier of classical algorithms. There exists an quantum algorithm that given ε > 0 and an n-point data set D ⊂ℝ^d, returns an ε-coreset of size Õ( kd(n)/ε^2) for k-median over D with success probability at least 2/3, with query complexity Õ(√(nk)d^3/2/ε) and additional (kdlog n/ε) processing time. Our coreset construction also yields coresets of a similar size for the related k-median clustering problem (and more generally, (k, z)-clustering, see kzc), using the same order of query and processing time (see coreset). The size bound of our coreset stated in intro_ub may not be optimal, but since it already has size (kϵ^-1d), one can trivially apply the state-of-the-art classical coreset construction algorithm on top of our coreset to obtain improved bounds. For instance, we can obtain coresets for k-means of size O(kϵ^-4) using <cit.>, and an alternative size bound of O(k^1.5ϵ^-2) using <cit.>. These require the same order of query and processing time as in intro_ub. In addition, by a similar argument, our coreset readily implies efficient approximation algorithms for clustering problems. In particular, one constructs an ϵ-coreset S as in intro_ub and applies an existing α-approximate algorithm with input S, then it yields an O((1 + ϵ) α)-approximation to the original problem. The query complexity of the entire process remains the same as in intro_ub, and it only incurs additional T((k ϵ^-1log n)) processing time, provided that the approximation algorithm runs in T(n) time for an n-point dataset. This particularly implies a quantum PTAS (for fixed d) for k-median and k-means using Õ(√(nk)d^3/2/ε) queries, and (k) · f(d, ϵ) processing time for some function f of d and ϵ, by applying <cit.>. Our quantum algorithms (and lower bounds) for clustering are summarized in main. Techniques The general idea of our algorithm is to quantize and combine two existing algorithms, the approximate algorithm by <cit.> and a recent seminal coreset construction algorithm by <cit.>. In a high level, we start with computing a bicriteria solution which uses slightly more than k points to achieve O(1)-approximation to . This step is based on <cit.>. Then given this solution, we partition the dataset D into groups, perform a sampling procedure in each group, and re-weight the sampled points to form the coreset, following the idea in <cit.>. For the bicriteria approximation, we provide a quantum implementation for the algorithm by <cit.> in bicriteria to obtain a solution that has O(k log n) points with cost being O(1) multiple of . A key step in this algorithm is to query for the nearest neighbor of each data point x ∈ D in a given point set with size O(k). This step is straightforward in the classical setting with cost O(nk) by calculating the exact nearest neighbor and store the results. However, to achieve the Õ(√(nk)) complexity in the quantum setting, we need improvements on nearest neighbor search. We make use of a well-known approximate nearest neighbor search technique called locality sensitive hashing (LSH), and the version that we use gives 2(1+ε)-approximate nearest neighbor using N^(1/ε) preprocessing time and Õ(d)·(1/ε) query time for N points in ℝ^d <cit.>. In coreset_details, we also use our quantum implementation of LSH (qANN) to construct a unitary that encodes the clusters induced by the approximate solution, i.e., maps each x ∈ D to a corresponding center. After we obtain a bicriteria approximation A in the previous step, we adapt the algorithm in <cit.> to build the coreset, where the dataset D is partitioned into Õ(z^2/ε^2) groups with respect to A. In this partition procedure, a key step is to calculate (C_i,A) = ∑_x∈ C_i^z(x,A) for each cluster C_i induced by A. While this seems simple in the classical setting where one directly computes the cost of each data point and summing up over each cluster in O(nk) time, this task is nontrivial in quantum since we aim for sublinear complexity. To design a sublinear quantum algorithm that approximately computes the cost of all clusters simultaneously, we propose a new subroutine called multidimensional quantum approximate summation (mqsum). Specifically, given an oracle O:|i⟩|0⟩|0⟩→|i⟩|τ(i)⟩|f(i)⟩, where τ[n]→[m] is a partition and f[n]→ℝ_≥ 0 is a bounded function, mqsum shows that using in total Õ(√(nm/ε)) queries to O_τ one can obtain ε-estimation of ∑_τ(i) = jf(i) for each j ∈ [m]. This algorithm may be of independent interest. To complement our algorithm results, we also prove quantum lower bounds for approximate k-means clustering. We consider three settings in which an ε-coreset, an ε-optimal set of centers, or an ε-estimate of the optimal clustering cost are outputted, and we prove Ω(√(nk)ε^-1/2),Ω(√(nk)ε^-1/6) and Ω(√(nk)+√(n)ε^-1/2) lower bounds, respectively. These quantum lower bounds confirm that our quantum algorithms for clustering problems are near-optimal in n and k, up to a logarithmic factor. In general, we start with proving the quantum lower bounds when k=1 by reducing from the approximate quantum counting problem <cit.>. We then obtain the general bounds with √(k) factors by applying composition theorems (composition_theorem) in a refined manner. We speculate that the gap between different settings might be intrinsic, which is also discussed in a recent paper <cit.>. Related work In general, quantum algorithms for machine learning are of general interest <cit.>. We compare our results to existing literature in quantum machine learning as follows. <cit.> conducted an early study on quantum algorithms for clustering, including divisive clustering, k-median clustering, and neighborhood graph construction. Their k-median algorithm has complexity O(n^3/2/√(k)), which is at least n and slower than our quantum algorithm. <cit.> gave a quantum algorithm for cluster assignment and cluster finding with complexity (log nd). However, their quantum algorithm requires the input data to be sparse with efficient access to nonzero elements, i.e., each of x_1,…,x_n has (log d) nonzero elements and we can access these coordinates in (log d) time. In addition, their algorithm outputs quantum states instead of classical vectors. More caveats are listed in <cit.>. The most relevant result is <cit.>, which gave an quantum algorithm named q-means for k-means clustering. Q-means has complexity Õ(k^2dη^2.5ϵ^-3+k^2.5η^2ϵ^-3) per iteration for well-clusterable datasets, where η is a scaling factor for input data such that 1≤x_i_2^2≤η for all i∈[n]. For general datasets the complexity is larger and depends on condition number parameters. Q-means can be extended to spectral clustering <cit.>. As a comparison, our quantum algorithm does not require the well-clusterable assumption (nor condition numbers related to this) and can be regarded as a direct speedup of common classical algorithms for k-means. In addition, our quantum algorithm only has (logη) dependence, and the dependence on k and 1/ϵ is also better. There are also heuristic quantum machine learning approaches for clustering <cit.> and other problems in data analysis <cit.>. These results do not have theoretical guarantees at the moment, and we look forward to their further developments on heuristic performances and provable guarantees. In addition, <cit.> proposed a quantum-inspired classical algorithm for 1-mean clustering with sampling access to input data. Open questions Our work leaves several natural open questions for future investigation: * Can we give fast quantum coreset construction algorithms for other related clustering problems with complexity Õ(√(nk))? Potential problems include fair clustering <cit.>, capacitated clustering <cit.>, k-center clustering <cit.>, etc. * Can we give fast quantum coreset construction algorithms for clustering problems in more general metric spaces? Note that <cit.> gives the result for various metric space, such as doubling metrics, graphs, and general discrete metric spaces, while still achieving Õ(nk) time in the classical setting. However, to achieve this similar coreset size bound using time O(√(nk)) in the quantum setting seems nontrivial; for instance, one cannot make use of the LSH technique that we use to speed up the approximate nearest neighbor search. § PRELIMINARIES §.§ Notations We give the notations and definitions used in the following. In this paper, we focus on the (k, z)-clustering problem in an Euclidean space: Given a data set D ⊂ℝ^d, z ≥ 1 and integer k ≥ 1, the (k, z)-clustering problem is to find a set C ⊂ℝ^d with size k, minimizing the cost function (D,C) := ∑_x ∈ D((x,C))^z. Here (x,y) := ||x-y||_2, (x,C):=min_c ∈ C(x,c). For z= 1 it is also known as the k-median problem, and for z = 2 it is also known as the k-means problem. Any set C ⊂ℝ^d with size k can be seen as a solution. Given a solution C, a point c∈ C is a center. We can map each data point x ∈ D to a certain c ∈ C, and the set {x∈ D: x c} is a cluster. Let be the cost of the optimal solution. We assume the distance is rescaled so that the minimum intra-point distance is 1, and we assume the diameter of the point set is (n). Hence, = (n). An important related concept is the coreset: Given a data set D ⊂ℝ^d, z ≥ 1 and integer k ≥ 1, a weighted set S with weight function w S →ℝ_+ is called an ϵ-coreset if ∀ C ⊂ℝ^d, |C| ≤ k, (S,C) ∈ (1 ±ϵ) ·(D,C) where (S,C) = ∑_s∈ Sw(s)·^z(s,C). As a special case, we call S unweighted if w(s) = 1 ∀ s∈ S. In this paper, we refer to an ε-estimation for a number p by p̃ if |p̃-p| ≤ε p. We use [n] for {1,…,n}. §.§ Basics of Quantum Computing The basic unit of a classical computer is a bit, and in quantum computing it is a qubit. Mathematically, a system of m qubits forms an M-dimensional Hilbert space for M = 2^m. Any quantum state |ϕ⟩ in this space can be written as |ϕ⟩ = ∑_i = 0^M-1α_i|i⟩, where ∑_i=0^M-1|α_i|^2=1. Here {|0⟩,…,|M-1⟩} forms an orthonormal basis in the Hilbert space called as the computational basis, and α_i ∈ℂ is called as the amplitude of |i⟩. Intuitively, the quantum state |i⟩ can be regarded as a classical state i, and the quantum state |ϕ⟩ in quantum_state is a superposition of classical states. The operations in quantum computing are unitaries matrices following the principles of linear algebra. Specifically, a unitary acting on an M-dimensional Hilbert space can be formulated as follows: U_f|i⟩|0⟩→|i⟩|f(i)⟩, ∀ i ∈{0,…,M-1}. Note that due to linearity, U_f works not only for the basis vectors {|i⟩}_i = 0^M-1, but also for any quantum state in this Hilbert space. For example, by applying U_f to |ϕ⟩ we obtain the following quantum state |ϕ⟩ = ∑_i = 0^M-1α_i|i⟩ ∑_i = 0^M -1α_i |f(i)⟩. This allows us to perform calculations “in parallel" and achieve the quantum speedup. Quantum access to the input data is also unitary and can be encoded as a quantum oracle. We state the definitions of the probability oracle and the binary oracle as follows: Let p [M]→ℝ_≥ 0 be a probability distribution. We say O_p is a probability oracle for p if O_p|0⟩→∑_j ∈ [M]√(p(j))|j⟩|ϕ_j⟩, where |ϕ_j⟩ are arbitrary ℓ_2-normalized vectors. Let D = {x_1,…,x_n} be a subset of ℝ^d. We say O_D is a binary oracle for D if O_D|i⟩|0⟩→|i⟩|x_i⟩, ∀ i ∈ [n]. The definition of the binary oracle also fits for any vector w = (w_1,…,w_N) ∈ℝ^N. Binary oracle is a common input model in quantum algorithms and we also call the binary oracle of D (or w) as the quantum query to D (or w). Besides, for two point sets S⊂ D ⊂ℝ^d, we say O_S is the membership query to S if O_S|x⟩|0⟩→|x⟩|I(x∈ S)⟩, ∀ x ∈ D where I(x∈ S) is the indicator for whether x ∈ S. In a quantum algorithm, we can also write information to a quantum-readable classical-writable classical memory (QRAM) and make it encoded as an oracle <cit.>. We refer query complexity as the number of queries to the input oracle and the QRAM. Time complexity is referred as the total processing time, including all the use of queries, quantum gates, and classical operations. §.§ Quantum Speedup Here, we introduce basic problems which can be sped-up by quantum computing. Those tools are rudimentary to our quantum algorithms as well as others in machine learning. Quantum Sampling In our quantum algorithms, we the following quantum sampling algorithm: There is a quantum algorithm such that: given two integers 1 ≤ m ≤ n, a real δ > 0, and a non-zero vector w ∈ℝ_≥ 0^n, with a probability at least 1-δ, the algorithm outputs a sample set S of size m such that each element i ∈ [n] is sampled with probability proportional to w_i using O(√(nm)log(1/δ)) quantum queries to w in expectation. Quantum Counting and Search In quantum computing, counting the number of points satisfying a specific property can be solved with quadratic speedup: There is a quantum algorithm such that given a real δ>0 and two sets S ⊂ D ⊂ℝ^d, |S| = m, |D| = n, it outputs an ε-estimation m̃ for m with probability at least 1-δ using O(ε^-1√(n/m)log(1/δ)) queries to D and membership queries to S. Furthermore, quantum speedup can also be achieved with outputting all such points, known as repeated Grover search: There is a quantum algorithm such that given a real δ>0 and two sets S ⊂ D ⊂ℝ^d, |S| = m, |D| = n, it finds S with probability at least 1-δ using Õ(√(nm)log(1/δ)) queries to D and membership queries to S. Quantum Sum Estimation Beyond counting and search, quadratic quantum speedup can also be achieved for estimating the sum of a set of numbers: Consider D = {x_1,…,x_n}⊂ℝ^n_≥ 0 and denote x = ∑_i = 1^N x_i as the sum of all the elements in D. There is a quantum algorithm such that given δ > 0, it outputs x̃ as an ε-estimation for x with probability at least 1 -δ, using O(√(n)log(1/δ)/ε) queries to D. § CORESET CONSTRUCTION This section presents a quantum algorithm for coreset construction in Õ(√(nk)) time. This algorithm combines and quantizes two existing classical algorithms, the bicriteria approximation algorithm of <cit.> and the coreset construction algorithm (based on an approximate solution) of <cit.>. This paper focuses on the (k, z)-clustering problem (kzc) over a size-n data set D = {x_1,…,x_n}⊂ℝ^d, and assumes the access to oracle O_D|i⟩|0⟩→|i⟩|x_i⟩ ∀ i ∈ [n]. The main result for coreset construction is as follows. There exists a quantum algorithm such that given data set D ⊂ℝ^d, positive real ϵ < 1/2^O(z), z ≥ 1, and integer k ≥ 1, it returns an ϵ-coreset for (k, z)-clustering over D of size Õ(2^O(z)kd(n)max(ε^-2,ε^-z)) with success probability at least 2/3 using: * Õ(2^O(z)√(nkd)max(ε^-1,ε^-z/2)) queries to O_D, * Õ(2^O(z)√(nk)d^3/2max(ε^-1,ε^-z/2)) queries to QRAM, * (kdlog n/ε^z) additional processing time. When d ≥Ω(log n / ϵ^2), one can apply the Johnson-Lindenstrauss transform <cit.> as a preprocessing step to obtain the following alternative bounds. * Õ(2^O(z)k(n)max(ε^-4,ε^-z-2)) coreset size, * Õ(2^O(z)√(nk)max(ε^-2,ε^-z/2-1)) queries to O_D, * Õ(2^O(z)√(nk)dmax(ε^-4,ε^-z/2-3)) queries to QRAM, * (klog n/ε^z) + O(dlog n/ε^2) additional processing time. These bounds have tight asymptotic dependence in d. The quantum algorithm contains two parts. First, bicriteria presents an algorithm to compute a bicriteria approximate solution A, which is an approximate solution with size slightly larger than k. Then, based on this A, coreset_details presents an algorithm for coreset construction. Combining the two algorithms directly yields coreset. For clustering problem it is a basic subroutine to find the nearest neighbor of each data point x ∈ D in a given set A ⊂ℝ^d since the optimization objective is the cost function (D,A) = ∑_x ∈ D^z(x,A) for any A ⊂ℝ^d. It always holds that |A| = k(n). This can be easily implemented in the classical setting, since the nearest neighbor of all the x ∈ D can be found with Õ(nk) time and all the information can be stored with O(n) space. However, this approach cannot be easily adapted to yield the Õ(nk) complexity in the quantum setting. In particular, the step of exactly computing the nearest center can require Ω(k) time. Hence, this paper uses a mapping that maps each point x ∈ D to an approximately nearest neighbor in A instead. This paper takes the advantage of an existing classical result, which is based on a widely adopted technique, Locality Sensitive Hashing (LSH). There exists an algorithm such that given two parameters δ' > 0, ε∈ (0,1/2), for any set A ⊂ℝ^d, |A| = m, it constructs a data structure using m^O(log (1/ε)/ε^2)log(1/δ') space and preprocessing time, such that for any query x ∈ℝ^d, with probability at least 1-δ' it answers a ∈ A which satisfies (x,a) ≤ 2(1+ε)(x,A) using ε^-2d(m/δ') query time. For a quantum version, there exists an algorithm that performs the preprocessing classical and stores the data structure in QRAM <cit.>, and then answers queries in a quantum manner based on the stored information. This yields a quantum algorithm with the same query time since any classical operation can be simulated by constant quantum operations. Let ε = c/2-1. Setting δ' = δ/n and using the union bound yields the following lemma. There exists an algorithm such that given two parameters δ > 0, c_τ∈ [5/2,3), for any A ⊂ℝ^d, |A| = m, it constructs an oracle O_τ|i⟩|0⟩→|i⟩|τ(i)⟩, ∀ i ∈ [n] using (mlog(n/δ)) classical preprocessing time and QRAM space. With success probability at least 1-δ, τ [n] → [m] is a mapping such that (x_i,a_τ(i)) ≤ c_τ(x_i,A) ∀ i ∈ [n]. Each query to O_τ uses d(mn/δ) queries to QRAM. By the same method, one can construct the oracle to a mapping that maps any i ∈ [n] to the corresponding center a_τ(i)∈ A instead of the index τ(i), with the same complexity. §.§ Bicriteria Approximation For a (k, z)-clustering problem, its bicriteria approximate solution is defined as follows: Assume that is the optimal cost for the (k, z)-clustering problem. An (α, β)-bicriteria approximate solution is a point set A ⊂ℝ^d such that |A| ≤α k, (D,A) ≤β. This section presents a quantum algorithm that finds a bicriteria solution with Õ(d√(nk)) query complexity, as stated in bicriteria below. This algorithm is a quantization of a classical algorithm by <cit.>. bicriteria outputs an (O(log^2 n), 2^O(z))-bicriteria approximate solution A with probability at least 5/6, using Õ(√(nk)) calls to O_D and its inverse, Õ(d√(nk)) queries to a QRAM, and (klog n) additional processing time. τ_t+1 D → A_t+1 in bicriteria Line 7 is a mapping such that for some constant c_τ (x,τ_t+1(x)) ≤ c_τ(x,A_t+1). It holds that |A_t+1| = O(k n) for any t. Using qANN, for any c_τ∈ [5/2,3), an oracle O_τ_t+1|x⟩|0⟩→|x⟩|τ(x)⟩ can be constructed using (klog(n)) classical preprocessing time and QRAM space, and each query to O_τ_t+1 uses d(kn) queries to QRAM and constant query to O_D. bicriteria is a quantum implementation of Algorithm D of <cit.>, which constructs a set of size O(klog^2n/ε) that contains a factor 2+ε approximation to k-median problem for ε∈ (0,1/2) with probability at least 1/2. bicriteria has small difference from the classical algorithm but it does not influence the correctness; a detailed proof is given in proof_biapprox. In the classical setting, to identify the set D_t one can list all the points in it or in the dataset D make those points marked. In the quantum setting, to identify D_t is to construct the unitary U_D_t|x⟩|0⟩→|x⟩|I(x ∈ D_t)⟩, ∀ x ∈ D, where I(x ∈ D_t) is the indicator for whether x ∈ D_t. This unitary can be constructed iteratively: U_D_t+1 |x⟩|0⟩|0⟩|0⟩ U_D_t, O_τ_t+1|x⟩|I(x ∈ D_t)⟩|τ_t+1(x)⟩|0⟩ ↦|x⟩|I(x∈ D_t)⟩|τ_t+1(x)⟩|I(x∈ D_t+1)⟩ O_τ_t+1^-1,O_D_t^-1|x⟩|0⟩|0⟩|I(x∈ D_t+1)⟩. For Line 5-6, bicriteria applies qsampling with unitary U_D_t and m = 13k⌈log n⌉ +1, which uses Õ(√(nk)) calls for O_D_t, O_D, and their inverses. This algorithm uses Õ(√(n)) for Line 8 and Õ(nk) for Line 11 queries to O_D_t, O_D, and their inverses. All the steps above are repeated for no more than O(log n) times, and it can be concluded that in total the algorithm uses Õ(d√(nk)) queries for a QRAM, Õ(√(nk)) queries for O_D and its inverse, and additional (klog n) processing time. We further note that the failure probability of our quantum algorithm gives at most a poly-logarithmic factor. In bicriteria, each subroutine in use suffers only a log(1/δ) factor to reach a success probability at least 1-δ and each of them is applied no more than (nk) times, so setting the failure probability as δ = O(1/(nk)) for each subroutine and the union bound ensures that all the applications to quantum subroutines success with high probability. This cause only a (nk) factor and it is absorbed by the Õ notation. §.§ Coreset Construction We present a quantum algorithm for constructing a coreset based on a bicriteria approximate solution. The construction is a quantum implementation of <cit.>. coreset_details shows a sketch of the construction. coreset_details requires an access to an (α, β)-bicriteria approximation A, which means an oracle O_A|i⟩|0⟩→|i⟩|a_i⟩ ∀ i ∈ [m] for a set A = {a_1,…,a_m}⊂ℝ^d, m ≤α k, (D,A) ≤β. Based on A, using qANN, one can construct an oracle O_τ|s⟩|0⟩→|s⟩|i⟩, ∀ s ∈ [n] where (x_s,a_i) ≤ c_τ(x_s,A) ∀ s ∈ [n] for some constant c_τ. This oracle encodes a mapping τ [n] → [m] which maps each x_s ∈ D to an approximately nearest neighbor a_i ∈ A as its center. The map τ together with A can be seen as a special solution for clustering. Let _τ(D',A):= ∑_x ∈ D'^z(x,a_τ(x)) be the cost of this solution and let C_i := {x∈ D|τ(x) = i} as the i-th cluster induced by τ and A for any i ∈ [m]. Let t = Õ( 2^O(z)· m· (d+log(n))·max(ε^-2,ε^-z)) in coreset_details. For a positive real ε < 1/(4c_τ^z), coreset_details outputs an O(c_τ^zβε)-coreset of size Õ( 2^O(z)mdlog(n)max(ε^-2,ε^-z)) with probability at least 5/6, using Õ(2^O(z)c_τ√(nmd)max(ε^-1,ε^-z/2)) queries to O_τ, O_D, O_A, their inverses, and QRAM. Besides it uses (mdlog n/ε^z) additional classical processing time. The details of coreset_details are described as follows. This algorithm consists of two phases. During the first phase the algorithm partitions the dataset D into groups. This consists of two steps. The first step of the first phase is to partition each cluster into rings, with each ring containing the points with the same distance from the center up to factor 2. For each C_i, let R_i,j := {x ∈ C_i| 2^jΔ_C_i≤_τ(x,A) ≤ 2^j+1Δ_C_i}. Let R_i,I := ∪_j ≤ -2zlog(z/ε)R_i,j be the inner ring and R_i,O := ∪_j > 2z log (z/ε) be the outer ring. Besides, let R_I := ∪_i = 1^m R_i,I, R_O := ∪_i = 1^m R_i,O, and R_j := ∪_i = 1^m R_i,j ∀ j, -2zlog(z/ε) < j ≤ 2z log (z/ε). The ring unitary U_R is defined as U_R|s⟩|0⟩|0⟩→|s⟩|i⟩|j⟩ ∀ s ∈ [n] where x_s ∈ R_i,j for each s ∈ [n]. For j ≤ -2zlog(z/ε), U_R uses a same special notation for such j and in this paper it is denoted as j = I. The special notation can be any preselected value out of [-2zlog(z/ε), 2z log (z/ε)]. And it is the same for j = O. U_R is a unitary which answers the corresponding ring R_i,j for each query x_s ∈ D. The second step is to gather the rings into groups such that the rings with equal cost up to factor 2 are gathered together and prepared to be handled together in the second phase. For each j ∈{⌈ -2zlog(z/ε)⌉, …, ⌊ 2zlog(z/ε) ⌋ +1}∪{I,O}, let G_j,b := ∪_i ∈ I_j,b R_i,j, where I_j,b is the largest set such that for any i ∈ I_j,b, _τ(R_i,j,A) ∈ (ε/4z)^z·_τ(R_j,A)/m· [2^b,2^b+1]. And let G_j,min:= ∪_b ≤ 0G_j,b be the union of the cheapest groups, and G_j,max:= ∪_b ≥ zlog (4z/ε)G_j,b be the union of the most expensive ones. The same notation as j = I and j = O is used for b = min and b = max. The group unitary U_G is defined as U_G|s⟩|0⟩|0⟩→|s⟩|j⟩|b⟩ ∀ s ∈ [n], where x_s ∈ G_j,b for any s ∈ [n]. U_G answers the corresponding group G_j,b for each query x_s ∈ D. In the second phase the dataset seen as the union of three different kinds of points and these three parts are handle separately. The first kind contains the union of inner rings R_I and the cheapest groups G_j,min ∀ j; The second kind is all the well-structured groups G_j,b with -2zlog(z/ε) < j ≤ 2z log (z/ε) and b = 1,…,max; And the third kind is all the outer groups G_O,b with b = 1,…,max. In quantum computing, it is costly to compute the exact sum such as _τ(C_i,A) and _τ(R_i,j,A). Hence, coreset_details uses ε-estimations instead of corresponding exact values, but for convenience they are simply written as they are exact. It turns out that O(ε)-estimations are enough for constructing an O(ε)-coreset. The proof is shown in proof_coreset_details. We note that in Line 3 (and similarly, Line 5 and 7), computing Δ_C_i for all the m clusters in Õ(√(nm)) time is feasible in quantum computing. We can compute {_τ(C_i,A)}_i = 1^m and {|C_i|}_i = 1^m separately, and then calculate the division in a classical method, so we only state for the calculation of all the _τ(C_i,A). We can construct the following unitary U by one query to O_τ and its inverse. U|s⟩|0⟩|0⟩→|s⟩|τ(s)⟩|^z(x_s,A)⟩ ∀ s ∈ [n] Note that _τ(C_i,A) = ∑_τ(s) = i^z(x_s,A). According to mqsum, calculating these values requests only Õ(√(nm)) time. More details about mqsum is shown in mqcounting. Let m = O(klog n), β = 2^O(z), ε = ε'/2^O(z), and c_τ = 5/2 in coreset_details. Note that O_A is obtained by storing A in QRAM, and one query to O_τ uses d(mn) queries to QRAM by qANN. Combining bicriteria and coreset_details directly yields coreset. § MULTIDIMENSIONAL APPROXIMATE SUMMATION A crucial subroutine of coreset_details is to compute the summation of the _τ(x,A) over all the points x in each part for a given partition (Line 3, 5, and 7). This gives rise to such a problem: Given two integers 1 ≤ m ≤ n, a real parameter ε > 0, a partition τ [n] → [m], and a function f [n] →ℝ_≥ 0. The multidimensional approximate summation problem is to find ε-estimation for each s_j := ∑_τ(i) = jf(i), j ∈ [m]. This paper proposes multidimensional quantum approximate summation to solve this problem. We believe this technique can have wide applications in designing quantum algorithms for machine learning and other relevant problems. Assume that there exists access to an oracle O_τ|i⟩|0⟩|0⟩→|i⟩|τ(i)⟩|f(i)⟩ ∀ i ∈ [n] and assume that f has an upper bound M. For ε∈ (0,1/3), δ > 0, there exists a quantum algorithm that solves the multidimensional approximate summation problem with probability at least 1-δ, using Õ(√(nm/ε)log(1/δ)log M) queries to O_τ, Õ((√(nm/ε)+m/ε)log(n/δ)log M ) gate complexity, and additional O(mlog M) classical processing time. f(i) is a binary number of length ⌈log M⌉. By first computing the summation of each digit and then summing up the results together, the multidimensional approximate summation problem can be reduced to the following problem: Given two integers 1 ≤ m ≤ n, a real parameter ε > 0, and a partition τ [n] → [m]. For each j ∈ [m], denote D_j := {i ∈ [n]: τ(i)=j} as the j-th part and n_j := |D_j| for the size. The multidimensional counting problem is to find ε-estimation ñ_j for each n_j, j ∈ [m]. For this problem, this section proposes multidimensional quantum counting to solve it. Assume that there exists access to an oracle O_τ|i⟩|0⟩→|i⟩|τ(i)⟩ ∀ i ∈ [n]. For ε∈ (0,1/3), δ > 0, mqcounting solves the multidimensional counting problem with probability at least 1-δ, using Õ(√(nm/ε)log(1/δ)) queries to O_τ and additional Õ((√(nm/ε)+m/ε)log(n/δ)) gate complexity. The query complexity is optimal up to a logarithm factor. Denote p_j := n_j/n. Note that by one call for O_τ one can construct the unitary O_p|0⟩→∑_j = 1^m √(p_j)|j⟩(1/n_j∑_i ∈ D_j|i⟩). Consider p = (p_1,…,p_m) as an m-dimensional probability distribution, the above unitary can be seen as a probability oracle to p. The following method is used to estimate the distribution: There is a quantum algorithm which has the following properties: given precision ε∈ (0,1/3), error probability δ > 0, a quantum probability oracle O_p on q qubits for an m-dimensional probability distribution p, a number set S ⊂ [m], and a constant p_mt≥∑_i ∈ Sp_i as the maximal total probability on S, it outputs p̃∈ℝ^m such that |p̃_i - p_i| ≤ε ∀ i ∈ S with probability ≥ 1-δ using O(εlog(m/δ)√(p_mtε)) calls to O_p and membership queries to S. The gate complexity is Õ((q (m+p_mt/ε) +√(p_mt)/ε)log(1/δ)) using a QRAM. A naive method is to set p_mt = 1 and apply amplitudeestimation directly. However, it requests an ε very small to ensure the size estimation of the small parts sufficiently precise, which leads to gratuitous overprecise estimation for the large parts and results in the waste of time. mqcounting performs a trick to obtain the estimations hierarchically: in each iteration it sets a certain precision (Line 6), and saves only the estimation values large enough to ensure the accuracy and leaves the small parts to be estimated more precisely next time (Line 8-10). The proof of mqcounting and mqsum is deferred to proof_mqcounting. Note that our mqcounting for multidimensional quantum computing is optimal up to a logarithmic factor due to lower-mqc-counting-main. § LOWER BOUND To complement our quantum algorithms, we also prove the following quantum lower bounds. First, we have: Every quantum algorithm that solves the multidimensional counting problem (Definition <ref>) w.p. at least 2/3 uses at least Ω(√(nk)ε^-1/2) queries to O_τ. The proof of lower-mqc-counting-main is deferred to proof-lower-mqc-counting-main. For the clustering problems, we establish the following lower bounds under different settings (proofs deferred to proof-lower-clustering-main). Assume that ε is sufficiently small. Consider the Euclidean k-means/median problem on data set D = {x_1,…,x_n}⊂ℝ^d. Assume a quantum oracle O_x |i, b⟩ := |i, b ⊕ x_i⟩. Then, every quantum algorithm outputs the followings with probability 2/3 must have quantum query complexity lower bounds for the following problems: * An ε-coreset: Ω(√(nk)ε^-1/2) for k-means and k-median (k-means-coreset-lower); * An ε-estimation to the value of the objective function: Ω(√(nk)+√(n)ε^-1/2) for k-means and k-median (k-means-cost-lower); * A center set C such that (C) ≤ (1 + ε)(C^*) where C^* is the optimal solution: Ω(√(nk)ε^-1/6) for k-means; Ω(√(nk)ε^-1/3) for k-median (k-means-lower). § ACKNOWLEDGEMENTS This paper is partially supported by a national key R&D program of China No. 2021YFA1000900 and a startup fund from Peking University. #1.#1#1 55 urlstyle [Aaronson(2015)]aaronson2015read Aaronson, S. Read the fine print. Nature Physics, 110 (4):0 291, 2015. [Agarwal & Procopiuc(2002)Agarwal and Procopiuc]agarwal2002exact Agarwal, P. K. and Procopiuc, C. M. Exact and approximation algorithms for clustering. Algorithmica, 330 (2):0 201–226, 2002. [Aïmeur et al.(2007)Aïmeur, Brassard, and Gambs]aimeur2007quantum Aïmeur, E., Brassard, G., and Gambs, S. Quantum clustering algorithms. In International Conference on Machine Learning, pp. 1–8, 2007. [Apers & de Wolf(2020)Apers and de Wolf]apers2020quantum Apers, S. and de Wolf, R. Quantum speedup for graph sparsification, cut approximation and Laplacian solving. In Proceedings of the 61st Annual Symposium on Foundations of Computer Science, pp. 637–648. IEEE, 2020. 1911.07306. [Balcan et al.(2013)Balcan, Ehrlich, and Liang]BalcanEL13 Balcan, M., Ehrlich, S., and Liang, Y. Distributed k-means and k-median clustering on general communication topologies. In Advances in Neural Information Processing Systems, pp. 1995–2003, 2013. 1306.060. [Beals et al.(2001)Beals, Buhrman, Cleve, Mosca, and de Wolf]BealsBCMW01 Beals, R., Buhrman, H., Cleve, R., Mosca, M., and de Wolf, R. Quantum lower bounds by polynomials. Journal of the ACM, 480 (4):0 778–797, 2001. quant-ph/9802049. [Biamonte et al.(2017)Biamonte, Wittek, Pancotti, Rebentrost, Wiebe, and Lloyd]biamonte2017quantum Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., and Lloyd, S. Quantum machine learning. Nature, 5490 (7671):0 195, 2017. 1611.09347. [Brassard et al.(2002)Brassard, Høyer, Mosca, and Tapp]brassard1998quantum Brassard, G., Høyer, P., Mosca, M., and Tapp, A. Quantum amplitude amplification and estimation. Contemporary Mathematics, 305:0 53–74, 2002. quant-ph/0005055. [Braverman et al.(2017)Braverman, Frahling, Lang, Sohler, and Yang]BF17 Braverman, V., Frahling, G., Lang, H., Sohler, C., and Yang, L. F. Clustering high dimensional dynamic data streams. In International Conference on Machine Learning, pp. 576–585. PMLR, 2017. 1706.03887. [Braverman et al.(2021)Braverman, Jiang, Krauthgamer, and Wu]BravermanJKW21 Braverman, V., Jiang, S. H., Krauthgamer, R., and Wu, X. Coresets for clustering in excluded-minor graphs and beyond. In Proceedings of the 32nd ACM-SIAM Symposium on Discrete Algorithms, pp. 2679–2696. SIAM, 2021. 2004.07718. [Braverman et al.(2022)Braverman, Cohen-Addad, Jiang, Krauthgamer, Schwiegelshohn, Toftrup, and Wu]braverman2022power Braverman, V., Cohen-Addad, V., Jiang, H.-C. S., Krauthgamer, R., Schwiegelshohn, C., Toftrup, M. B., and Wu, X. The power of uniform sampling for coresets. In Proceedings of the 63rd Annual Symposium on Foundations of Computer Science, pp. 462–473. IEEE, 2022. 2209.01901. [Charikar & Waingarten(2022)Charikar and Waingarten]CharikarW22 Charikar, M. and Waingarten, E. Polylogarithmic sketches for clustering. In Proceedings of the 49th International Colloquium on Automata, Languages, and Programming, volume 229 of LIPIcs, pp. 38:1–38:20. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2022. 2204.12358. [Chia et al.(2022)Chia, Gilyén, Li, Lin, Tang, and Wang]chia2022sampling Chia, N.-H., Gilyén, A. P., Li, T., Lin, H.-H., Tang, E., and Wang, C. Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. Journal of the ACM, 690 (5):0 1–72, 2022. 1910.06151. [Chierichetti et al.(2017)Chierichetti, Kumar, Lattanzi, and Vassilvitskii]Chierichetti0LV17 Chierichetti, F., Kumar, R., Lattanzi, S., and Vassilvitskii, S. Fair clustering through fairlets. In Advances in Neural Information Processing Systems, pp. 5029–5037, 2017. 1802.05733. [Cohen-Addad & Li(2019)Cohen-Addad and Li]cohenaddad2019capacitated Cohen-Addad, V. and Li, J. On the fixed-parameter tractability of capacitated clustering. In 46th International Colloquium on Automata, Languages, and Programming, volume 132 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 41:1–41:14. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2019. 2208.14129. [Cohen-Addad et al.(2021)Cohen-Addad, Feldmann, and Saulpic]DBLP:journals/jacm/Cohen-AddadFS21 Cohen-Addad, V., Feldmann, A. E., and Saulpic, D. Near-linear time approximation schemes for clustering in doubling metrics. Journal of the ACM, 680 (6):0 44:1–44:34, 2021. 1812.08664. [Cohen-Addad et al.(2021)Cohen-Addad, Saulpic, and Schwiegelshohn]cohen2021new Cohen-Addad, V., Saulpic, D., and Schwiegelshohn, C. A new coreset framework for clustering. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 169–182, 2021. 2104.06133. [Cohen-Addad et al.(2022)Cohen-Addad, Larsen, Saulpic, and Schwiegelshohn]Cohen-AddadLSS22 Cohen-Addad, V., Larsen, K. G., Saulpic, D., and Schwiegelshohn, C. Towards optimal lower bounds for k-median and k-means coresets. In Leonardi, S. and Gupta, A. (eds.), Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1038–1051. ACM, 2022. 2202.12793. [Dunjko & Briegel(2018)Dunjko and Briegel]dunjko2018machine Dunjko, V. and Briegel, H. J. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Reports on Progress in Physics, 810 (7):0 074001, 2018. 1709.02779. [Farhi & Neven(2018)Farhi and Neven]farhi2018classification Farhi, E. and Neven, H. Classification with quantum neural networks on near term processors. arXiv preprint, 2018. 1802.06002. [Feldman & Langberg(2011)Feldman and Langberg]feldman2011unified Feldman, D. and Langberg, M. A unified framework for approximating and clustering data. In Proceedings of the 43rd annual ACM symposium on Theory of computing, pp. 569–578, 2011. 1106.1379. [Giovannetti et al.(2008)Giovannetti, Lloyd, and Maccone]giovannetti2008quantum Giovannetti, V., Lloyd, S., and Maccone, L. Quantum random access memory. Physical Review Letters, 1000 (16):0 160501, 2008. 0708.1879. [Grover(1996)]grover1996fast Grover, L. K. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, pp. 212–219. ACM, 1996. quant-ph/9605043. [Gupta et al.(2003)Gupta, Krauthgamer, and Lee]gupta2003bounded Gupta, A., Krauthgamer, R., and Lee, J. R. Bounded geometries, fractals, and low-distortion embeddings. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, pp. 534–543. IEEE, 2003. [Hamoudi(2022)]hamoudi2022preparing Hamoudi, Y. Preparing many copies of a quantum state in the black-box model. Physical Review A, 1050 (6):0 062440, 2022. 2207.11014. [Har-Peled & Mazumdar(2004)Har-Peled and Mazumdar]Har-PeledM04 Har-Peled, S. and Mazumdar, S. On coresets for k-means and k-median clustering. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing, pp. 291–300. ACM, 2004. 1810.12826. [Havlíček et al.(2019)Havlíček, Córcoles, Temme, Harrow, Kandala, Chow, and Gambetta]havlivcek2019supervised Havlíček, V., Córcoles, A. D., Temme, K., Harrow, A. W., Kandala, A., Chow, J. M., and Gambetta, J. M. Supervised learning with quantum-enhanced feature spaces. Nature, 5670 (7747):0 209–212, 2019. 1804.11326. [Henzinger & Kale(2020)Henzinger and Kale]HenzingerK20 Henzinger, M. and Kale, S. Fully-dynamic coresets. In Proceedings of the 28th Annual European Symposium on Algorithms, volume 173 of LIPIcs, pp. 57:1–57:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. 2004.14891. [Høyer et al.(2007)Høyer, Lee, and Spalek]HoyerLS07 Høyer, P., Lee, T., and Spalek, R. Negative weights make adversaries stronger. In Johnson, D. S. and Feige, U. (eds.), Proceedings of the 39th Annual ACM Symposium on Theory of Computing, pp. 526–535. ACM, 2007. quant-ph/0611054. [Huang & Vishnoi(2020)Huang and Vishnoi]HuangV20 Huang, L. and Vishnoi, N. K. Coresets for clustering in Euclidean spaces: importance sampling is nearly optimal. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pp. 1416–1429. ACM, 2020. 2004.06263. [Indyk & Motwani(1998)Indyk and Motwani]indyk1998approximate Indyk, P. and Motwani, R. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pp. 604–613, 1998. [Johnson & Lindenstrauss(1984)Johnson and Lindenstrauss]JL84 Johnson, W. B. and Lindenstrauss, J. Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability (New Haven, Conn., 1982), pp. 189–206. Amer. Math. Soc., 1984. [Kapoor et al.(2016)Kapoor, Wiebe, and Svore]kapoor2016quantum Kapoor, A., Wiebe, N., and Svore, K. Quantum perceptron models. In Advances in Neural Information Processing Systems, pp. 3999–4007, 2016. 1602.04799. [Kerenidis & Landman(2021)Kerenidis and Landman]kerenidis2021quantum Kerenidis, I. and Landman, J. Quantum spectral clustering. Physical Review A, 1030 (4):0 042415, 2021. 2007.00280. [Kerenidis & Luongo(2018)Kerenidis and Luongo]KL18 Kerenidis, I. and Luongo, A. Quantum classification of the MNIST dataset via slow feature analysis. arXiv preprint, 2018. 1805.08837. [Kerenidis et al.(2019)Kerenidis, Landman, Luongo, and Prakash]kerenidis2019qmeans Kerenidis, I., Landman, J., Luongo, A., and Prakash, A. q-means: A quantum algorithm for unsupervised machine learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. 1812.03584. [Kimmel(2013)]Kimmel13 Kimmel, S. Quantum adversary (upper) bound. Chicago Journal of Theoretical Computer Science, 2013:0 4, 2013. URL <http://cjtcs.cs.uchicago.edu/articles/2013/4/contents.html>. 1101.0797. [Lee et al.(2011)Lee, Mittal, Reichardt, Spalek, and Szegedy]LeeMRSS11 Lee, T., Mittal, R., Reichardt, B. W., Spalek, R., and Szegedy, M. Quantum query complexity of state conversion. In Ostrovsky, R. (ed.), Proceedings of the 52nd Annual Symposium on Foundations of Computer Science, pp. 344–353. IEEE Computer Society, 2011. 1011.3020. [Li et al.(2019)Li, Chakrabarti, and Wu]li2019sublinear Li, T., Chakrabarti, S., and Wu, X. Sublinear quantum algorithms for training linear and kernel-based classifiers. In International Conference on Machine Learning, pp. 3815–3824. PMLR, 2019. 1904.02276. [Li et al.(2021)Li, Wang, Chakrabarti, and Wu]li2021sublinear Li, T., Wang, C., Chakrabarti, S., and Wu, X. Sublinear classical and quantum algorithms for general matrix games. Proceedings of the 35th AAAI Conference on Artificial Intelligence, 350 (10):0 8465–8473, 2021. 2012.06519. [Lloyd et al.(2013)Lloyd, Mohseni, and Rebentrost]lloyd2013quantum Lloyd, S., Mohseni, M., and Rebentrost, P. Quantum algorithms for supervised and unsupervised machine learning. arXiv preprint, 2013. 1307.0411. [Megiddo & Supowit(1984)Megiddo and Supowit]DBLP:journals/siamcomp/MegiddoS84 Megiddo, N. and Supowit, K. J. On the complexity of some common geometric location problems. SIAM Journal on Computing, 130 (1):0 182–196, 1984. [Mettu & Plaxton(2004)Mettu and Plaxton]MettuP04 Mettu, R. R. and Plaxton, C. G. Optimal time bounds for approximate clustering. Mach. Learn., 560 (1-3):0 35–60, 2004. 1301.0587. [Nayak & Wu(1999)Nayak and Wu]NayakW99 Nayak, A. and Wu, F. The quantum query complexity of approximating the median and related statistics. In Vitter, J. S., Larmore, L. L., and Leighton, F. T. (eds.), Proceedings of the 31st Annual ACM Symposium on Theory of Computing, May 1-4, 1999, Atlanta, Georgia, USA, pp. 384–393. ACM, 1999. quant-ph/9804066. [Otterbach et al.(2017)Otterbach, Manenti, Alidoust, Bestwick, Block, Bloom, Caldwell, Didier, Schuyler Fried, Hong, Karalekas, Osborn, Papageorge, Peterson, Prawiroatmodjo, Rubin, Ryan, Scarabelli, Scheer, Sete, Sivarajah, Smith, Staley, Tezak, Zeng, Hudson, Johnson, Reagor, da Silva, and Rigetti]rigetti2017unsupervised Otterbach, J. S., Manenti, R., Alidoust, N., Bestwick, A., Block, M., Bloom, B., Caldwell, S., Didier, N., Schuyler Fried, E., Hong, S., Karalekas, P., Osborn, C. B., Papageorge, A., Peterson, E. C., Prawiroatmodjo, G., Rubin, N., Ryan, C. A., Scarabelli, D., Scheer, M., Sete, E. A., Sivarajah, P., Smith, R. S., Staley, A., Tezak, N., Zeng, W. J., Hudson, A., Johnson, B. R., Reagor, M., da Silva, M. P., and Rigetti, C. Unsupervised machine learning on a hybrid quantum computer. arXiv preprint, 2017. 1712.05771. [Poggiali et al.(2022)Poggiali, Berti, Bernasconi, Del Corso, and Guidotti]poggiali2022quantum Poggiali, A., Berti, A., Bernasconi, A., Del Corso, G., and Guidotti, R. Quantum clustering with k-means: a hybrid approach. arXiv preprint, 2022. 2212.06691. [Rebentrost et al.(2014)Rebentrost, Mohseni, and Lloyd]rebentrost2014QSVM Rebentrost, P., Mohseni, M., and Lloyd, S. Quantum support vector machine for big data classification. Physical Review Letters, 1130 (13):0 130503, 2014. 1307.0471. [Reichardt(2014)]Reichardt14 Reichardt, B. W. Span programs are equivalent to quantum query algorithms. SIAM Journal on Computing, 430 (3):0 1206–1219, 2014. [Schuld & Petruccione(2018)Schuld and Petruccione]schuld2018supervised Schuld, M. and Petruccione, F. Supervised learning with quantum computers, volume 17. Springer, 2018. [Schuld et al.(2017)Schuld, Fingerhuth, and Petruccione]schuld2017implementing Schuld, M., Fingerhuth, M., and Petruccione, F. Implementing a distance-based classifier with a quantum interference circuit. Europhysics Letters, 1190 (6):0 60002, 2017. 1703.10793. [Schwiegelshohn et al.(2022)Schwiegelshohn, Saulpic, Larsen, Cohen-addad, and Sheikh-Omar]SSLCS22 Schwiegelshohn, C., Saulpic, D., Larsen, K. G., Cohen-addad, V. P., and Sheikh-Omar, O. A. Improved coresets for Euclidean k-means. In Advances in Neural Information Processing Systems, 2022. 2211.08184. [Sohler & Woodruff(2018)Sohler and Woodruff]DBLP:conf/focs/SohlerW18 Sohler, C. and Woodruff, D. P. Strong coresets for k-median and subspace approximation: Goodbye dimension. In Proceedings of the 59th Annual Symposium on Foundations of Computer Science, pp. 802–813. IEEE Computer Society, 2018. 1809.02961. [Thorup(2005)]thorup2001quick Thorup, M. Quick k-median, k-center, and facility location for sparse graphs. SIAM Journal on Computing, 340 (2):0 405–432, 2005. [van Apeldoorn(2021)]van2021quantum van Apeldoorn, J. Quantum probability oracles & multidimensional amplitude estimation. In 16th Conference on the Theory of Quantum Computation, Communication and Cryptography. Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2021. [Wiebe et al.(2015)Wiebe, Kapoor, and Svore]wiebe2015quantum Wiebe, N., Kapoor, A., and Svore, K. M. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Quantum Information & Computation, 150 (3-4):0 316–356, 2015. 1401.2142. icml2023 § FURTHER PROOF DETAILS FOR BICRITERIA APPROXIMATION This section gives a rigorous proof of the correctness of the bicriteria approximate algorithm (bicriteria), that the output of bicriteria (the set A) is an (O(log^2 n), 2^O(z))-bicriteria approximate solution with probability at least 5/6. This proof follows <cit.>. First, the loop (Line 4-10) stops for t < 3⌈log n⌉ with high probability. In each iteration, assume all the points x ∈ D_t are sorted (x,τ_t(x)) from small to large. The sample s_t is drawn from D_t uniformly at random (Line 6) and all the points x preceding s_t are deleted from D_t (Line 7), so with probability at least 1/2 it holds that |D_t+1| ≤ |D_t|/2. If there's still r̃_t > 39k⌈log n ⌉ after 3⌈log n ⌉ iterations, it holds that |D_t| ≥ 2r̃_t/3 > 26k⌈log n ⌉≥ 1. Hence the event |D_t+1| ≤ |D_t|/2 happens in no more than ⌈log n ⌉ iterations, which is only 2/3 of the expectation. The probability that such an event happens is at most exp(-(1.5 log n)(1/3)^2/2) = exp(-(log n)/12) < n^-0.12. In the following, it is supposed that the loop (Line 4-10) stops for t < 3⌈log n⌉. In this situation, it holds that |A| ≤ 13k⌈log n⌉· 3⌈log n⌉ + 2·39 ⌈log n⌉ = O(klog^2 n). For every x ∈ D_t, let c_x be the corresponding center in the optimal solution. x is called “happy point" and this is denoted as (x) if there exists c ∈ A_t+1 such that (c,c_x) ≤(x,c_x). Otherwise x is called “angry point" and this event is denoted as (x). For a certain x ∈ D, suppose that there are q points in D_t corresponding to c_x in the optimal solution and as close to c_x as x. Since in A_t+1 there are 13k⌈log n⌉ points sampled from D_t uniformly at random, the probability that x remains to be angry is no more that the probability that all the q points haven't be selected, that is, [(x)] ≤ (1-q/|D_t|)^13k⌈log n⌉≤exp(-q/|D_t|13k⌈log n⌉). The expectation of the number of the angry points in D_t corresponding to c_x is at most ∑_q = 1^+∞exp(-q/|D_t|13k⌈log n⌉)≤∫_x = 0^+∞exp(-x/|D_t|13k⌈log n⌉)dx ≤|D_t|/13k⌈log n⌉. Since there are k centers in the optimal solution, the fraction of angry points in |D_t| is 1/(13⌈log n⌉). For any happy point x ∈ D_t, there exists c ∈ A_t+1 such that (c,c_x) ≤(x,c_x). Hence, it holds that (x,A) ≤(x,A_t+1) ≤(x,c) ≤(x,c_x) + (c_x,c) ≤ 2(x,c_x). For the angry points, sort all the points x ∈ D_t from small to large by the key (x,τ_t+1(x)). Let each angry point grabs the first ungrabbed happy point and assume that s_t is an ungrabbed happy point. For any unhappy point x ∈ D_t, there is a happy point y ∈ D_t preceding s_t in the sequence grabbed by x, since otherwise s_t is unhappy or grabbed and there comes a contradiction. Let G_t+1 be G_t+1 := {x ∈ D_t: (x,τ_t+1(x)) ≤(s_t,τ_t+1(s_t))}. It can be concluded that for any unhappy point x ∈ G_t+1, there exists a unique happy point y ∈ G_t+1 such that (x,τ_t+1(x)) ≤(y,τ_t+1(y)). Hence, ∑_x ∈ G_t+1, (x)^z(x,τ_t+1(x)) ≤∑_x ∈ G_t+1, (x) ^z(x,τ_t+1(x)). For all the points in G_t+1, it holds that (G_t+1,A) = ∑_x∈ G_t+1^z(x,A) ≤∑_x ∈ G_t+1^z(x,A_t+1) = ∑_x ∈ G_t+1, (x) ^z(x,A_t+1) + ∑_x ∈ G_t+1, (x) ^z(x,A_t+1) ≤∑_x ∈ G_t+1, (x)^z(x,τ_t+1(x)) + ∑_x ∈ G_t+1, (x) ^z(x,A_t+1) ≤∑_x ∈ G_t+1, (x) ^z(x,τ_t+1(x)) + ∑_x ∈ G_t+1, (x) ^z(x,A_t+1) ≤ 2c_τ^z ∑_x ∈ G_t+1, (x) ^z(x,A_t+1(x)) ≤ O(2^zc_τ^z) ∑_x ∈ G_t+1, (x) ^z(x,c_x) ≤ O(2^zc_τ^z) ∑_x ∈ G_t+1^z(x,c_x). The probability that s_t is ungrabbed happy point is the fraction of ungrabbed happy points in D_t, which is 2/(13⌈log n⌉). The probability that all the s_t in the 3⌈log n⌉ iterations are ungrabbed and happy is at least 1-2/13⌈log n⌉· 3⌈log n⌉ = 7/13. Due to the definition of G_t and D_t, ∪_tG_t contains all the points deleted during the loop (Line 4-10). Besides, Since all the remaining points are added into A, they do not contribute to the cost. With a probability 7/13 - n^-0.12 > 1/2, it holds that (D,A) = ∑_x ∈ D^z(x,A) = ∑_t∑_G_t^z(x,A) + ∑_x ∉∪_t G_t^z(x,A) ≤ O(2^zc_τ^z)∑_t∑_G_td(x,c_x) + 0 ≤ O(2^zc_τ^z). In Line 12, bicriteria repeats the processing in Line 3-11 for three times and union all the set A to boost the success probability to 5/6. As a consequence, with probability at least 5/6, the output A is an (O(klog^2n),O(2^zc_τ^z))-bicriteria approximate solution. The proof of our complexity claim has been given in bicriteria. § FURTHER PROOF DETAILS FOR CORESET CONSTRUCTION BASED ON BIAPPROXIMATE SOLUTION This section gives a detailed proof of coreset_details. We restate this lemma as below. Let t = Õ( 2^O(z)· m· (d+log(n))·max(ε^-2,ε^-z)) in coreset_details. For a positive real ε < 1/(4c_τ^z), coreset_details outputs an O(c_τ^zβε)-coreset of size Õ( 2^O(z)mdlog(n)max(ε^-2,ε^-z)) with probability at least 5/6, using Õ(2^O(z)c_τ√(nmd)max(ε^-1,ε^-z/2)) queries to O_τ, O_D, O_A, their inverses, and QRAM. Besides it uses (mdlog n/ε^z) additional classical processing time. This section first provides the detailed quantum implementation and the analysis of complexity in implementation, and then gives a proof that the output of coreset_details (set Ω) is an O(c_τ^zβε)-coreset with probability at least 5/6 in correctness. §.§ Quantum Implementation and Complexity This section shows the quantum implementation details with the complexity. For Line 3, the algorithm estimates |C_i| and _τ(C_i,A) first. Constructing the oracle U |s⟩|0⟩|0⟩|0⟩|0⟩O_τ|s⟩|i⟩|0⟩|0⟩|0⟩ O_D,O_A|s⟩|i⟩|x_s⟩|a_i⟩|0⟩ ↦|s⟩|i⟩|x_s⟩|a_i⟩|^z(x_s,a_i)⟩ O_A^-1,O_D^-1|s⟩|i⟩|0⟩|0⟩|^z(x_s,a_i)⟩ and applying mqsum yields the needed values _τ(C_i,A) ∀ i ∈ [m]. Since ^z(x_s,a_i) ≤_τ(D,A) ≤ c_τ^z, the calculation uses no more than Õ(zlog(c_τ)√(nm)/ε) queries to U and additional time, under a fair assumption that = (n). The same technique works for |C_i|. Then the algorithm computes Δ_C_i in a classical manner. The implementation of Line 5, Line 7, and Line 8 is similar. These calculation uses in total no more than Õ(zlog(c_τ)√(nm)/ε) queries to U_R, U_G, O_τ, O_D, and O_A. Besides it uses (mzlog(1/ε)) classical processing time. The construction of the ring unitary U_R in Line 4 is U_R |s⟩|0⟩|0⟩|0⟩|0⟩|0⟩ O_τ|s⟩|i⟩|0⟩|0⟩|0⟩|0⟩ O_Δ,O_D,O_A|s⟩|i⟩|Δ_C_i⟩|x_s⟩|a_i⟩|0⟩ ↦|s⟩|i⟩|Δ_C_i⟩|x_s⟩|a_i⟩|j⟩ O_A^-1,O_D^-1,O_Δ^-1|s⟩|i⟩|0⟩|0⟩|0⟩|j⟩ where j = ⌊log(^z(x_s,a_i)/Δ_C_i)⌋, and O_Δ|i⟩|0⟩→|i⟩|Δ_C_i⟩ ∀ i ∈ [m] is constructed by storing Δ_C_i in QRAM in Line 3. One query to U_R needs constant queries to O_D, O_A, O_τ, and QRAM. The same technique works for the construction of U_G and the complexity is also the same up to a constant factor. For Line 9, the algorithm first construct the below unitary U for each well-sturctured G. U |s⟩|0⟩|0⟩|0⟩|0⟩|0⟩|0⟩ U_G, O_τ O_Δ ↦|s⟩|j⟩|b⟩|i⟩|Δ_C_i⟩|I(x_s ∈ G)⟩|p_s⟩ ↦ O_Δ O_τ^-1,U_G^-1|s⟩|0⟩|0⟩|0⟩|0⟩|0⟩|p_s⟩ where I(x_s ∈ G) is the indicator for whether x_s ∈ G and p_i = I(x_s∈ G)Δ_C_i is proportional to [x_s]. Then one application to qsampling yields the sample Ω using O(√(nt)) queries to the above unitary U. Reweighting can be completed in a classical manner since U is of small size. For Line 10 the algorithm uses the same technique. The sampling process in total needs Õ(√(nt)) queries to U_G, O_A, O_τ, and QRAM. It uses additional (mlog n) + O(t)· d(mn) for computing the weight classically. Let t = Õ( 2^O(z)· m· (d+log(n))·max(ε^-2,ε^-z)) and sum up all the time cost. coreset_details uses Õ(2^O(z)c_τ√(nmd)max(ε^-1,ε^-z/2)) queries to O_τ, O_D, O_A and QRAM, and (mdlog n/ε^z) additional classical processing time. Similar to bicriteria, each subroutine used in coreset_details suffers only a log(1/δ) factor to reach success probability at least 1-δ and each subroutine is applied no more than (nmz) times, so it is enough to set the failure probability as δ = O(1/(nmz)) for each subroutine. This cause only a (nmz) factor on time consume and it is adsorbed by the Õ notation. §.§ Correctness This section gives a rigorous proof that set Ω, the output of coreset_details, is an O(c_τ^zβε)-coreset with probability at least 5/6. This proof follows the idea of <cit.>. The dataset D can be seen as a partition of the following three kinds of points: * the union of the inner rings R_I, the cheapest groups G_j,min with j ∈ [zlog(4ε/z),zlog(4z/ε)] and G_O,min * the well-structured groups G_j,b with j ∈ [zlog(ε/4z),zlog(4z/ε)] and b = 1,…,max * the outer rings G_O,b with b = 1,…,max. coreset_details deals with this three kinds of points separately and so does the following proof. The following proof shows that, let ε≤ 1/(4c_τ^z) and t = Õ( 2^O(z)· m· (d+log(n))·max(ε^-2,ε^-z)) in coreset_details, this algorithm has the following three properties, which are stated formally and proved later. * first_kind Let B := R_I∪ G_j,min be the set of the first kind of points. For any S ∈ (ℝ^d)^m it holds that |(B,S) - (A,S)| ≤ 8ε((D,S) + _τ(D,A)). * well_group It holds with probability at least 1/12 that for any well-structured group G = G_j,b and the corresponding sample Ω = Ω_j,b, and for any S ∈ (ℝ^d)^m, |(G,S) - (Ω,S)| = O(c_τ^zε)((G,S) + (G,A)). * outer_rings It holds with probability at least 1/12 that, for any outer group G = G_O,b and the corresponding sample Ω_O,b, and for any S ∈ (ℝ^d)^m, |(G,S) - (Ω,S)| ≤2c_τ^zε/zlog(z/ε) ((D,S) + (D,A)). Note that there are only O(zlog(z/ε)) outer groups G_O,b. Combining the three properties directly yields the proof for correctness. |(D,S) - (Ω,S)| ≤ O(c_τ^zε)((D,S) + (D,A)) ≤ O(c_τ^zβε)(D,S). Therefore, the output of coreset_details Ω = A∪Ω_j,b∪Ω_O,b is an O(c_τ^zβε)-coreset. The two lemmas introduced as follows are important tools for the proof. Let a, b, and c be three arbitrary sets of points ℝ^d. For any z ∈ℤ_+ and any ε > 0, it holds that ^z(a,b) ≤ (1+ε)^z-1^z(a,c) + (1+ε/ε)^z-1^z(b,c) |^z(a,S) - ^z(b,S)| ≤ε d^z(a,S) + (2z+ε/ε)^z-1^z(a,b). There is a set ℂ of size n· (z/ε)^O(d) such that, for any solution S ∈ (ℝ^d)^m there exists S̃∈ℂ^m which ensures that for any point x ∈ D with either (x,S) ≤ (8z/ε)^z_τ(x,A) or (x,S̃) ≤ (8z/ε)^z_τ(x,A), it holds that |(x,S) - (x,S̃)| ≤ε((x,S)+_τ(x,A) ). Such a set ℂ is called an A-approximate centroid set for (m, z)-clustering on data set D. For any x ∈ℝ^d and r ≥ 0, let B(x,r) := {y ∈ℝ^d|(x,y) ≤ r} be the ball around x with radius r. Note that Euclidean space ℝ^d has doubling dimension O(d), which means in this metric space any ball of radius 2r can be covered by 2^O(d) balls of radius r. For any V ⊂ℝ^d, a γ-net of V is a set of points X ⊂ V such that for any v∈ V there exists x ∈ Xsuch that (x,v) ≤γ and for any x,y ∈ X it holds that (x,y) > γ. In ℝ^d, a point set V ⊂ℝ^d with diameter D has a γ-net with size 2^O(d log (D/γ)) <cit.>. Let data points x_1,…,x_n be in order with non-descreasing value of (x,a_τ(x)). Let N_i be an ε·(x,a_τ(x))/(4z)-net of B(x_i, 10z(x_i,a_τ(x_i))/ε) ∖∪_j < i B(x_j, 10z(x_j,a_τ(x_j))/ε). Let s_f ∈ℝ^d be a point such that s_f ∉ B(x_i, 10z(x_i,a_τ(x_i))/ε) ∀ i ∈ [n]. Let N := ∪_x_i ∈ DN_i ∪{s_f}. The size of N is bound by n· (z/ε)^O(d): |N| ≤ n · 2^O(dlog (z/ε)^2) = n·(z/ε)^O(d). N is an A-approximated centroid set. For any solution S ∈ (ℝ^d)^m, let S̃∈ N^m be constructed by the following method. For each point s ∈ S, let i be the smallest index such that s ∈ B(x_i, 10z(x_i,a_τ(x_i))/ε). The corresponding N_i is non-empty because otherwise there exists x_j such that j < i and s ∈ B(x_j,10z(x_j,a_τ(x_j))/ε), and thus i is not the smallest index. Let s̃ be the closest point to s in N_i. If such an index i does not exist, let s̃ = s_f. Let S̃ be the set of all the s̃. S̃ has the property defined in centroid. Let x ∈ D satisfies (x,S) ≤ (10z/ε)^z_τ(x,A). Let s be the nearest neighbor of x in S and consider the corresponding index i and s̃. It holds that (x,a_τ(x)) ≥(x_i,a_τ(x_i)) since s ∈ B(x, 10z(x,a_τ(x))/ε). By the definition of s̃ it holds that (s,s̃) ≤ε(x_i,a_τ(x_i))/(4z) ≤ (ε/4z)_τ(x,A). As a consequence, (x,S̃) ≤(x,s̃) ≤ (1+ε)(p,s) + (1+z/ε)^z-1(s,s̃) ≤ (1+ε)(x,S) + ε_τ(x,A). On the other hand, let x ∈ D satisfies (x,S̃) ≤ (10z/ε)^z_τ(x,A). Let s̃ be the nearest neighbor of x in S̃ and consider the corresponding s and index i. If the index of x is smaller than i it can be implied that s̃∉ N_i because of the definition of N_i and s̃∈ B(x, 10z(x,a_τ(x))/ε). As a consequence, (x,a_τ(x)) ≥(x_i,a_τ(x_i)). It holds that (x,S) ≤(x,s) ≤ (1+ε)(x,s̃) + (1+2z/ε)^z-1(s,s̃) ≤ (1+ε)(x,S̃) + ε_τ(x,A). For any x ∈ D with (x,S) ≤ (8z/ε)^z _τ(x,A), it holds that (x,S̃) ≤ (1+ε)(x,S) + ε_τ(x,A) ≤ (10z/ε)^z_τ(x,A). Thus (x,S) ≤ (1+ε)(x,S̃) + ε_τ(x,A). Hence |(x,S) - (x,S̃)| ≤ε((x,S) + _τ(x,A) ). The same inequality holds for any x ∈ D with (x.S̃) ≤ (8z/ε)^z_τ(x,A). The first kind of points Let B := R_I∪ G_j,min∪ G_min^O be the set of the first kind of points. There holds the following lemma: For any solution S, |S| ≤ m and ε < 1/2 it holds that |(B,S) - (A,S)| ≤ 8ε((D,S) + _τ(D,A)). We use the following two lemmas to prove first_kind: For any solution S, |S| ≤ m, any i ∈ [m], and ε < 1/2, |(R_I(C_i),S) - |R_I(C_i)|·(a_i,S)| ≤ε(R_I(C_i),S) +2 ε_τ(C_i,A). For any solution S, |S| ≤ m and any cheapest group G, |(G,S) - ∑_i = 1^m |C_i ∩ G| (a_i,S) | ≤ε(R_j,S) + ε_τ(R_j,A) if G = G_j,min for some j ≠ O. And if G = G_min^O there is |(G,S) - ∑_i = 1^m |C_i ∩ G| (a_i,S) | ≤ε(D,S) + ε_τ(D,A). Fix i. Using triangle, it holds that |(x,S) - (a_i,S)| = |^z(x,S) - ^z(a_i,S)| ≤ε^z(x,S) + (1+ 2z/ε)^z-1^z(a_i,x). For any x ∈ R_I(C_i) and ε < 1/2, there is (1+2z/ε)^z-1^z(a_i,x) ≤ (1+2z/ε)^z-1_τ(x,A) ≤ (3z/ε)^z-1(ε/4z)^zΔ_C_i≤ε_τ(C_i,A)/(1-ε)|C_i|≤ 2ε_τ(C_i,A)/|R_I(C_i)| Combining the two inequalities above and summing the result over all the points in R_I yields |(R_I(C_i),S) - |R_I(C_i)|·(a_i,S)| ≤∑_x∈ R_I(C_i) |(x,S) - (a_i,S)| ≤ε(R_I(C_i),S) +2 ε_τ(C_i,A). Using triangle, it holds that |(x,S) - (a_τ(x),S)| = |^z(x,S) - ^z(a_τ(x),S)| ≤ε^z(x,S) + (1+ 2z/ε)^z-1^z(a_τ(x),x). The sum over G tells |(G,S) - ∑_i = 1^m |C_i ∩ G| (a_i,S) | = |∑_x ∈ G(x,S) - ∑_x ∈ G(a_τ(x),S)| ≤∑_x ∈ G(ε^z(x,S) + (1+ 2z/ε)^z-1^z(a_τ(x),x)) ≤ε(G,S) + (3z/ε)^z-1_τ(G,A). If G = G_j,min for some j, it holds that _τ(G,A) ≤ (ε/4z)^z·_τ(R_j,A). Else G = G_min^O and _τ(G,A) ≤ (ε/4z)^z·_τ(R_O,A) ≤ (ε/4z)^z·_τ(D,A). In both cases the lemma holds. Combining inner_ring and cheapest_well_group straightforwardly gives first_kind. As is talked about in coreset_details, due to the particularity of quantum computing, it is costly to compute the exact value |R_i,I|+|C_i ∩ (∪_j≠ I G_j,min)|. What the algorithm uses is ε-estimations r̃_̃ĩ such that |r̃_i - (|R_I(C_i)|+|C_i∩ (∪_j∉{I,O} G_j,min)|+|C_i∩ G_min^O|) ≤ε (|R_I(C_i)|+|C_i∩ (∪_j∉{I,O} G_j,min)|+|C_i∩ G_min^O|) for any i ∈ [m]. |(B,S) - (A,S)| = |(B,S) - ∑_i = 1^m r̃_i(a_i,S)| = |(1±ε) (B,S) ∓ε(B,S) - (1±ε)∑_i = 1^m (|R_I(C_i)| + |C_i ∩ (∪_j G_j,min )| + |C_i ∩ G_min^O|)(a_i,S) | ≤ (1+ε)(∑_i = 1^m|((R_I(C_i),S) - |R_I(C_i)|(a_i,S)|) + ∑_G (|(C∩ G,S) -∑_i = 1^m |C_i∩ G|(a_i,S)| ) ) +ε(B,S) ≤ (1+ε)ε(∑_i = 1^m ((R_I(C_i),S)+2_τ(C_i,A)) + ∑_j((R_j,S) + _τ(R_j,A))) + (D,S) + _τ(D,A) ) + ε(B,S) ≤ (1+ε)ε(3(D,S)+4_τ(D,A))+(D,S) ≤ 8ε((D,S) + _τ(D,A)). Well-structured groups For well-structured groups, the following lemma holds: Let t = Õ( 2^O(z)· m· (d+log(n))·max(ε^-2,ε^-z)) in coreset_details Line 9. For ε < 1/(4c_τ^z), it holds with probability at least 1/12 that for any well-structured group G = G_j,b and the corresponding sample Ω = Ω_j,b, and for any S ∈ (ℝ^d)^m, |(G,S) - (Ω,S)| = O(c_τ^zε)((G,S) + (G,A)). Fix a well-structured group G and for convenience, in the following for any set C ⊂ℝ^d we write G∩ C simply as C. Due to the definition of G, for every cluster C_i, the following properties hold: * ∀ x,y ∈ C_i, _τ(x,A) ≤ 2_τ(y,A), * _τ(G,A)/(2m) ≤_τ(C_i,A), * ∀ x ∈ C_i, _τ(C_i,A)/(2|C_i|) ≤_τ(x,A) ≤ (2_τ(C_i,A))/|C_i|. Recall that Ω is an i.i.d sample of size t and in each round a point x ∈ C_i ∩ G is sampled with probability [x] = _τ(C_i,A)/|C_i|_τ(G,A) and for any x∈Ω, w(x) = |C_i|_τ(G,A)/|Ω|_τ(C_i,A). To be precisely, the values such as |C_i| and _τ(C_i,A) are ε-estimations in practice, instead of the exact values shown above. The method to deal with such problems is the same as in the proof of first_kind. For convenience we do not repeat similar proof and use the exact values directly. It can be seen that given a sample large enough, |C_i| can be well approximated for every i ∈ [m] (event_estimation). The proof of event_estimation is given later. Define event ℰ to be for any i ∈ [m], ∑_x ∈ C_i∩Ω|C_i|_τ(G,A)/|Ω|_τ(C_i,A) = (1±ε)|C_i|. With probability at least 1-2m·exp(-(ε^2 t)/(6m)), event ℰ holds. For any solution S, let I_l,S := {x ∈ G| 2^l _τ(x,A) ≤(x,S) ≤ 2^l+1_τ(x,A) }, and let these {I_l,S} be divided into three parts: tiny ranges, with l ≤log (ε/2); interesting ranges, with log(ε/2) ≤ l ≤ zlog(4z/ε); and huge ranges, with l≥ zlog(4z/ε). For the three types of I_l,S, there exist the following lemmas, respectively: Let I_,S := ∪_l ≤log(ε/2)I_l,S. For any solution S, it holds that max(( I_,S,S),( I_,S∩Ω,S ) ) ≤ε_τ(G,A). Condition on event ℰ, it holds that |(C_i,S) - (C_i∩Ω,S)|≤ O(ε)(C_i,S) for any solution S, and any cluster C_i such that there exists a huge range I_l,S with I_l,S∩ C_i ≠∅. Let L_S := {C_i|∀ x ∈ C_i, (x,S) ≤ (4z/ε)^z(x,A)}. Let ℂ be an A-approximate centroid set of size n·(z/ε)^O(d) as defined in centroid. With probability 1 - exp(mlog(n) + O(mdlog(z/ε)) - 2^O(zlog z)·min(ε^2,ε^z)/log^2 (1/ε)· t ) and together with event ℰ, it holds that for any solution S ∈ℂ^m |(L_S,S) - (L_S∩Ω,S)| ≤ε(_τ(G,A) + (G,S)). Using huge and interesting, well_group can be proved. Let G be an arbitrary well-structured group. Let S be an arbitrary solution and let S̃∈ℂ^k approximate S. Denote H_S := {x ∈ G|∃ i, x ∈ C_i ∃ l > zlog (8z/ε), C_i ∩ I_l,S≠∅} H_S̃ : = {x ∈ G|∃ i, x ∈ C_i ∃ l > zlog (4z/ε), C_i ∩ I_l,S̃≠∅}∖ H_S. And denote L_S̃ as in interesting. It holds that H_S, H_S̃, and L_S̃ form a partition of G. On the one hand, H_S ∪ H_S̃∪ L_S̃ = G. On the other hand, H_S ∩ H_tildeS = ∅, H_S̃∩ L_S̃ = ∅, and L_S̃∩ H_S = ∅ since ∀ x ∈ L_S̃ (x,S̃) ≤ (4z/ε)^z _τ(x,A), thus (x,S) ≤ (1+ε)(x,S̃) + ε_τ(x,A) ≤ (8z/ε)^z _τ(x,A). By this partition and the property of S̃, it holds that |(G,S) - (Ω,S)| = |∑_x ∈ H_S(x,S) - ∑_x ∈ H_S ∩Ωw(x)(x,S) | + |∑_x ∈ G∖ H_S(x,S) - ∑_x ∈ (G∖ H_S)∩Ωw(x)(x,S)| ≤ |∑_x ∈ H_S(x,S) - ∑_x ∈ H_S ∩Ωw(x)(x,S) | + |∑_x ∈ G∖ H_S(x,S̃) - ∑_x ∈ (G∖ H_S)∩Ωw(x)(x,S̃)| + ε((G,S) + _τ(G,A) + (Ω,S) + _τ(Ω,A)) ≤ |∑_x ∈ H_S(x,S) - ∑_x ∈ H_S ∩Ωw(x)(x,S) | + ε((G,S) + _τ(G,A) + (Ω,S) + _τ(Ω,A)) + |∑_x ∈ L_S̃(x,S) - ∑_x ∈ L_S̃∩Ωw(x)(x,S)| + |∑_x ∈ H_S̃(x,S) - ∑_x ∈ H_S̃∩Ωw(x)(x,S)| ≤ O(ε)((G,S) + _τ(G,A) + (Ω,S) + _τ(Ω,A) ) ≤ O(c_τ^zε)((G,S) + (G,A) + (Ω,S) + (Ω,A) ). The last inequality uses huge and interesting. Assume that ε < 1/(4c_τ^z). Let S = A, it holds that (Ω,A) ≤(G,A) + |(G,A) - (Ω,A)| ≤ O(1) (G,A). Similarly, (Ω,S) ≤(G,S) + |(G,S) - (Ω,S)| ≤ O(1)((G,S) + (G,A) ). Hence it can be concluded that |(G,S) - (Ω,S)| = O(c_τ^zε)((G,S) + (G,A)) Using the union bound over event ℰ and the probability of interesting for all the well-structured groups G, the probability is 1 - z^2log^2(z/ε)(exp(mlog(n) + O(mdlog(z/ε)) - 2^O(-zlog z)·min(ε^2,ε^z)/log^2 (1/ε)· t ) - 2m·exp(-(ε^2 t)/(6m))). The probability can be bound by 1/12 by setting the value of t as t = Õ( 2^O(z)· m· (d+log(n))·max(ε^-2,ε^-z)). The proofs ofevent_estimation, tiny, huge, and interesting are shown as below, respectively. event_estimation is used in the proof of huge and interesting, and tiny is used in the proof of interesting. Fix i ∈ [m]. Define P_i(x) as the indicator of point x ∈Ω being drawn from C_i, i.e., P_i(x) = 1 if x ∈ C_i∩Ω, and otherwise P_i = 0. The expectation of P_i(x) has the following property: (P_i(x)) = ∑_x ∈ C_i |Ω|[x] = ∑_x ∈ C_i|Ω|_τ(C_i,A)/|C_i|_τ(G,A)≥|Ω|/2m. By Chernoff bounds, it holds that [|∑_x ∈ΩP_i(x) - (P_i(x))| ≥ε(P_i(x)) ] ≤ 2e^-ε^2(P_i(x))/3≤ 2e^-(ε^2|Ω|)/(6m). The union bound over all the clusters derives that, with probability at least 1-2m·exp(-(ε^2 t)/(6m)), for any cluster C_i there is |C_i∩Ω| = (1±ε)∑_x ∈ C_i|Ω|_τ(C_i,A)/|C_i|_τ(G,A) which implies ∑_x ∈ C_i∩Ω|C_i|_τ(G,A)/|Ω|_τ(C_i,A) = (1±ε)|C_i|. (I_,S,S) = ∑_x ∈ I_,S(x,S) ≤ |I_,S|ε/2_τ(x,A) ≤ε_τ(G,A) (I_,S∩Ω,S) = ∑_x ∈ I_,S∩Ωw(x)(x,S) = ∑_x ∈ I_,S∩Ω|C_i|_τ(G,A)/t_τ(C_i,A)(x,S) ≤ |I_,S∩Ω| |C_i|_τ(G,A)/|Ω|_τ(C_i,A)·ε/2·2_τ(C_i,A)/|C_i| ≤ε_τ(G,A). For any x ∈ C_i∩Ω, (x,S) is bound. Let y ∈ I_l,S∩ C_i with I_l,S being a huge range. For any x ∈ C_i, there is (x,y) ≤((x,A)+(y,A) )^z ≤ 3^z_τ(y,A) ≤ 3^z2^l-zlog(4z/ε)_τ(y,A)≤(3ε/4z)^z(y,S) . Using triangle, there is (y,S) ≤(1+ε/2z)^z-1(x,S) + (1+2z/ε)^z-1(x,y) ≤ (1+ε)(x,S) + ε(y,S) which implies (x,S) ≥ (1-2ε)(y,S) and (y,S) ≤ (1+3ε)(x,S) if ε < 1/3. Similarly (x,S) ≤ (1+2ε)(y,S) and (y,S) ≥ (1-3ε)(x,S). (C_i∩Ω,S) = ∑_x ∈ C_i∩Ω|C_i|_τ(G,A)/|Ω|_τ(C_i,A)(x,S) = (1±2ε)∑_x ∈ C_i∩Ω|C_i|_τ(G,A)/|Ω|_τ(C_i,A)(y,S) = (1±2ε)(1±ε)|C_i|(y,S) = (1±2ε)(1±ε)∑_x ∈ C_i(y,S) = (1±2ε)(1±ε)(1±3ε)(C_i,S) = (1± O(ε))(C_i,S). The third equation holds because of event ℰ. Let x_i,S := min_x ∈ C_i(x,S) and w_x,S = ((x,S)-(x_i,S,S))/_τ(x_i,S,A). Let E_l,S := ∑_C_i∈ L_S∑_x ∈ C_i∩ I_l,S∩Ωw(x)_τ(x_i,S,A)w_x,S and F_l,S := ∑_C_i∈ L_S∑_x ∈ C_i∩ I_l,S∩Ωw(x)(x_i,S,S). w_x,S is bound. For fixed i, l and S consider arbitrary x ∈ C_i∩ I_l,S. By the definition of x_i,S it is straightforward to see (x,S) ≥(x_i,S,S), and thus w_x,S≥ 0. Besides, because the property of well-structured group there are (x_i,S,S) ≤(x,S) ≤ 2^l+1_τ(x,A) ≤ 2^l+2_τ(x_i,S,A) and (x,x_i,S) ≤ 2^z-1((x,A)+(x_i,S,A)) ≤ 3·2^z-1_τ(x_i,S,A). Using triangle, for any α≤ 1, (x,S) ≤ (1+α/z)^z-1(x_i,S,S) + (1+z/α)^z-1(x,x_i,S) which after rearranging implies (x,S) - (x_i,S,S) ≤ 2α(x_i,S,S) + (2z/α)^z-1(x,x_i,S) ≤ 2^z(2αmax(1,2^l+1)+(2z/α)^z-1)_τ(x_i,S,A) Let α = 2^-l/z (ignoring constants that depend on z), the inequality yields that (x,S) - (x_i,S,S) ≤ 2^O(zlog z)2^l(1-1/z)_τ(x_i,S,A). Therefore, w_x,S∈ [0, 2^O(zlog z)2^l(1-1/z)]. E_l,S can be expressed differently: E_l,S = ∑_C_i ∈ L_S∑_x ∈ C_i ∩ I_l,S∩Ω w(x)_τ(x_i,S,A) w_x,S = ∑_C_i ∈ L_S∑_x ∈ C_i ∩ I_l,S∩Ω w(x)((x,S) - (x_i,S,S)) = ∑_x ∈ I_l,S∩ L_S∩Ω w(x)(x,S) - F_l,S. The expectation of E_l,S is as follows: [E_l,S] = ∑_x ∈ I_l,S∩ L_S |Ω|[x]w(x)(x,S) - [F_l,S] = ∑_x ∈ I_l,S∩ L_S|Ω|_τ(C_i,A)/|C_i|_τ(G,A)|C_i|_τ(G,A)/|Ω|_τ(C_i,A)(x,S) - [F_l,S] = (I_l,S∩ L_S,S) - [F_l,S]. Intuitively, by Bernstein's inequality the random variable E_l,S is concentrated around its expectation. Let Ω_i be the point sampled from the i-th round of importance sampling (Line 9 in coreset_details), and let X_i = w(Ω_i)_τ(x_τ(Ω_i),S,A)w_Ω_i,S when Ω_i ∈ I_l,S∩Ω and X_i = 0 otherwise. It holds that E_l,S = ∑_i = 1^t X_i. X_i and its variance are bounded. [X_i] ≤[X_i^2] = ∑_x ∈ I_l,S∩ L_S[x](w(x)_τ(x_τ(x),S,A)w_x,S)^2 ≤∑_x ∈ I_l,S∩ L_S_τ(C_i,A)/|C_i|_τ(G,A)(|C_i|_τ(G,A)/|Ω|_τ(C_i,A)_τ(x,A)2^l(1-1/z)2^O(zlog z))^2 ≤∑_x ∈ I_l,S∩ L_S 2^2l(1-1/z)2^O(zlog z)|C_i|_τ(G,A)/|Ω|^2_τ(C_i,A)_τ^2(x,A) ≤∑_x ∈ I_l,S∩ L_S 2^2l(1-1/z)2^O(zlog z)_τ(G,A)/|Ω|^2_τ(x,A) which implies [X_i] ≤{ 2^O(zlog z)_τ(G,A)_τ(G,A)/|Ω|^2 , z = 1 2^O(zlog z)2^l(1-2/z)_τ(G,A)(I_l,S,S)/|Ω|^2 , z ≥ 2 . Besides, X_i has upper bound. X_i ≤|C_i|_τ(G,A)/|Ω|_τ(C_i,A)_τ(x,A)2^l(1-1/z)2^O(zlog z) ≤ 2^l(1-2/z)2^O(zlog z)_τ(G,A)/|Ω|. By Bernstein's inequality, [|E_l,S - [E_l,S]| ≤ε/zlog z/ε·(_τ(G,A)+(I_l,S,S) )] ≤exp(-min(ε^2,ε^z)t/2^O(zlog z)log^2(1/ε)). Denote F_S := ∑_l ≤ zlog(4z/ε)F_l,S. Condition on event ℰ, the value of F_S and its expectation are as follows: [F_S] = ∑_l ≤ z log(4z/ε)∑_C_i ∈ L_S∑_x ∈ C_i∩ I_l,S |Ω|[x]w(x)(x_i,S,S) = ∑_C_i ∈ L_S |C_i|(x_i,S,S); F_S = ∑_C_i ∈ L_S∑_x ∈ C_i∩Ω w(x)(x_i,S,S) = ∑_C_i ∈ L_S(x_i,S,S)∑_x ∈ C_i∩Ω|C_i|_τ(G,A)/|Ω|_τ(C_i,A) = (1±ε)∑_C_i∈ L_S|C_i|(x_i,S,S). Hence F_S = (1±ε)[F_S], and [F_S] ≤(L_S,S) ≤(G,S). Taking an union bound over the concentration for all possible S ∈ℂ^k and all l such that log(ε/2) ≤ l ≤ z log (4z/ε), it holds with probability 1-exp(klog (|ℂ|) - 2^O(zlog z)·min(ε^2,ε^z)· t ·log^-2(1/ε) ) that, for every S ∈ℂ^k and log(ε/2) ≤ l≤ zlog (4z/ε), |E_l,S - [E_l,S] ≤ε/zlog (z/ε)(_τ(G,A) + (I_l,S,S) )|. Conditioning on the above event together with event ℰ, It holds that |(L_S,S) - (L_S∩Ω,S)| = |∑_x ∈ L_S(x,S) - ∑_x ∈ L_S∩Ωw(x)(x,S)| ≤ |∑_x ∈ L_S(x,S) - [F_S] + F_S - ∑_x ∈ L_S∩Ωw(x)(x,S)| + |[F_S] - F_S| ≤∑_l < log (ε/2) |∑_x ∈ I_l,S∩ L_S(x,S) - [F_l,S] + F_l,S - ∑_x ∈ I_l,S∩ L_S∩Ωw(x)(x,S)| + ∑_l = log(ε/2)^zlog (4z/ε)|∑_x ∈ I_l,S∩ L_S(x,S) - [F_l,S] + F_l,S - ∑_x ∈ I_l,S∩ L_S∩Ωw(x)(x,S)| + |[F_S]-F_S|. The tiny ranges can be bound as follows since F_l,S≤∑_x ∈ I_l,S∩Ω w(x)(x,S) and [F_l,S] ≤∑_x ∈ I_l,S(x,S): ∑_l < log (ε/2) |∑_x ∈ I_l,S∩ L_S(x,S) - [F_l,S] + F_l,S - ∑_x ∈ I_l,S∩ L_S∩Ωw(x)(x,S)| ≤ ∑_l < log(ε/2)(∑_x ∈ I_l,S∩ L_S(x,S) + [F_l,S] + F_l,S + ∑_x ∈ I_l,S∩ L_S∩Ωw(x)(x,S)) ≤ 2(∑_x ∈ I_l,S(x,S) + ∑_I_l,S∩Ωw(x)(x,S) ) ≤ 4ε_τ(G,A). Plugging this result into the previous inequality, it holds that |(L_S,S) - (L_S∩Ω,S)| ≤ 4ε_τ(G,A) + ∑_l = log (ε/2)^zlog (4z/ε)|E_l,S - [E_l,S]| + |[F_S] - F_S| ≤ 4ε_τ(G,A) + (zlog (4z/ε)-log(ε/2))·ε/zlog(z/ε)(_τ(G,A) + (L_S,S)) + ε(G,S) ≤ O(ε)(_τ(G,A) + (G,S)). Outer groups For outer groups, the following lemma holds: Let t = Õ(2^O(z)· m· (d+log n)·1/ε^2) in coreset_details Line 10. It holds with probability at least 1/12 that, for any group of outer rings G = G_b^O and the corresponding sample Ω_b^O, and for any S ∈ (ℝ^d)^m, |(G,S) - (Ω,S)| ≤ 2c_τ^zε/zlog(z/ε) ((D,S) + (D,A)). Fix an arbitrary S. Partition the points in G into two parts and denote G_,S := {x ∈ G|(x,S) ≤ 4^z_τ(x,A)} G_,S := {x ∈ G|(x,S) > 4^z_τ(x,A)}. Bernstein's inequality works for the close part. Let Ω_i be the i-th sampled point and let X_i = { _τ(G,A)/|Ω|_τ(Ω_i,A)·(Ω_i,S), Ω_i ∈ G_,S 0, Ω_i ∉ G_,S. Let E_,S:= ∑_i = 1^t X_i. The variance of X_i has the property [X_i] ≤[X_i^2] = ∑_x ∈ G_,S(_τ(G,A)/|Ω|_τ(x,A)(x,S) )^2 _τ(x,A)/_τ(G,A) =_τ(G,A)/|Ω|^2∑_x ∈ G_,S(x,S)/_τ(x,A)(x,S) ≤4^z/|Ω|^2_τ(G,S)(G,S). X_i has a upper bound X_i ≤max_x ∈ G_,S( _τ(G,A)/|Ω|_τ(x,A)(x,S)) ≤4^z/|Ω|_τ(G,A). Bernstein's inequality yields that [|E_,S - [E_,S]|≤ε/zlog (z/ε)(_τ(D,A) + (D,S))] ≤exp( -2^O(z)· (ε/zlog(z/ε))^2 · t) Similar technique about A-approximate centroid set yields that for any S and any G, |(G_,S,S) - (G_,S∩Ω,S)| ≤c_τ^zε/zlog(z/ε)((D,A)+(D,S)) with probability at least 1 - exp( mlog n + O(md)log (z^2/εlog(z/ε)) - 2^-O(z)ε^2t). The proof for the far part is as follows. Denote event ℰ_ to be: for any cluster C, ∑_x ∈ C∩ G∩Ω w(x)_τ(x,A) = (1±ε)_τ(C∩ G,A). Event ℰ_far happens with probability at least 1 - mexp(ε^2t/m). Let E_C = ∑_i = 1^t X_i, where X_i = { w(x)_τ(Ω_i,A), Ω_i ∈ C∩ G 0, Ω_i ∉ C∩ G . with Ω_i being the i-th sampled point. Calculation shows that [X_i] ≤ E[X_i^2] ≤ 2m^2_τ(C∩ G,A)/t^2 and X_i ≤ 2m_τ(C∩ G,A)/t, and the Bernstein's inequality implies the success probability. Fix a cluster C_i such that C_i ∩ G_,S≠∅ and let a_i be the center. Let x_i be a point such that x_i ∈ C_i∩ G_,S, which implies (x_i,S) ≥ 4(x_i,a_i). Let C_ : = {x ∈ C_i|_τ(x,A)≤ (z/ε)^z(_τ(C_i,A)/|C_i|) }. Due to Markov's inequality, |C_|≥ (1-ε/z)|C_i|. Note that G is an outer group and for any x ∈ G it holds that _τ(x,A) ≥ (z/ε)^2z· (_τ(C_i,A)/|C_i|). By the definition of G_,S and triangle it can be derived that (a_i,S) ≥ ((x_i,S) - (x_i,a_i))^z ≥ 3^z_τ(x_i,A) ≥ 3^z(z/ε)^2z_τ(C_i,A)/|C_i| (a_i,S) ≤ (1+ε)(x,S) + (1+2z/ε)^z-1(x,a_i) ≤ (1+ε)(x,S) + (3z/ε)^z-1(z/ε)^z_τ(C_i,A)/|C_i| ≤ (1+ε)(x,S) + ε_τ(a_i,S) where x is an arbitrary point in C_. Combining the lower bound of the size of C_, this implies (C_i,S) ≥(C_,S) |C_|·1-ε/1+ε·(a_i,S)≥ 3^z(z/ε)^2z-1_τ(C_i,A). Due to Markov's inequality the size of G∩ C_i is bound by (ε/z)^2|C_i| since G is an outer group. Combining with the above inequalities, it holds that (G_,S∩ C_i,S) = ∑_x ∈ G_,S∩ C_i(x,S) ≤∑_x ∈ G_,S∩ C_i(1+ε)(a_i,S) + (1+2z/ε)^z-1_τ(G_,S∩ C_i,A) ≤(1+ε/1-ε)^z(ε/z)^2(C_i,S) + (3z/ε)^z-1(1/3)^z(ε/z)^2z-1(G_,S∩ C_i,S). By simplifying the inequality and summing over all the clusters C_i, it is implied that (G_,S,S) ≤ε/zlog(z/ε)(D,S). Condition on ℰ_, calculation shows that (G_,S∩Ω, S) ≤ε/zlog(z/ε)(D,S). The combination of the results about the close part and the far part tells that |(G,S) - (G∩Ω,S)| ≤ |(G_,S,S)- (G_,S∩Ω,S)|+|(G_,S,S)| + |(G_,S∩Ω,S)| ≤ 2c_τ^zε/zlog(z/ε)((D,S)+(D,A)). To make the above inequality holds with probability at least 1/12, it is sufficient to set t = Õ(2^O(z)· m· (d+log n)·1/ε^2). § PROOF OF MULTIDIMENSIONAL QUANTUM COUNTING This section provides a detailed proof of mqcounting and mqsum. mqcounting is restated as follows: Given two integers 1 ≤ m ≤ n, two parameters ε∈ (0,1/3), and a partition τ [n] → [m]. For each j ∈ [m], denote D_j := {i ∈ [n]τ(i)=j} as the j-th part and n_j := |D_j| for the size. Assume that we have an oracle O_τ|i⟩|0⟩→|i⟩|τ(j)⟩ ∀ i ∈ [n]. For ε∈ (0,1/3), δ > 0, mqcounting outputs ñ_j such that |ñ_j-n_j| ≤ε n_j for every j∈ [m] with probability at least 1-δ, using Õ(√(nm/ε)log(1/δ)) queries to O_τ, Õ((√(nm/ε)+m/ε)log(n/δ)log M ) gate complexity, and additional O(mlog M) classical processing time. The query complexity is optimal up to a logarithm factor. We establish the correctness and complexity bound separately. Correctness The “maximal total probability" p_mt and precision 2ñε/3nm in Line 6 satisfies amplitudeestimation. Denote the exact cardinality of Q in Line 13 as n', |ñ - n'| ≤1/2 n'. For the maximal total probability, we have ∑_j ∈ Sp_j = ∑_j ∈ Sn_j/n = n'/n≤2ñ/n = p_mt. And for the precision, 2ñε/3nm≤n'/nmε≤ε < 1/3. For each j ∈ M, n_j has been estimated. For those n_j estimated after the while loop stops, we find all the members belonging to the corresponding subset and count classically in Line 16, which gives an exact cardinality. For those n_j estimated in the while loop, |p̃_j - n_j/n| = |p̃_̃j̃ - p_j| ≤2ñε/3nm. Since p̃_̃j̃≥ñ/nm, |p̃_j - n_j/n| ≤2/3εp̃_̃j̃, thus |ñ_j - n_j| ≤2/3εñ_j. Since ε∈ (0,1/3), |ñ_j - n_j| ≤ε n_j as required. Complexity In each iteration, the estimation process for {p_j}_j∈ P in Line 6 uses O(√(p_mt)/2ñε/3nm) = O(m√(n)/ε√(ñ)) = O(√(nm)/ε) applications of U_p and membership queries for P, and Õ(m+p_mt/2ñε/3nm) = Õ(m/ε) gate complexity, since ñ < m/ε (Line 15) at this time. The estimation for the cardinality of Q needs Õ(√(n/m)/ε) membership queries to P according to counting. The classical process in Line 7-11 and Line 16 needs at most O(m) time. Since we can make all elements in P sorted in O(mlog(m)) time to keep an O(log(m)) query complexity for the membership query to P, the total query complexity per iteration is Õ(√(nm/ε)), with additional Õ(m/ε) processing time. The loop (Line 5-15) has at most O(log n) iterations. Denote ñ_0 = n and ñ obtained in Line 13 at the t-th iteration as ñ_t. At the (t+1)-th iteration, for any j ∈ P there is p̃_̃j̃≤ñ_t/9nm, thus p_j ≤p̃_j + 2ñ_tε/3nm≤ñ_t/3nm. We can calculate that ñ_t+1≤3/2n'_t+1 = 3n/2∑_j ∈ Pp_j ≤3nm/2max_j ∈ Pp_j ≤ñ_t/2. As a result, after O(log n) iterations there must be ñ≤ m/ε. Finding all the items remaining in Q (Line 16) requires Õ(√(nm)/ε) queries to O_τ and membership queries to P since |P| ≤ m/ε here. The overall query complexity is Õ(√(nm)/ε). There are at most O(log n) iterations and each iteration fails with probability at most O(δ/log n). Therefore, the success probability is at least 1-δ. mqsum is restated as follows: Given two integers 1 ≤ m ≤ n, a real parameter ε > 0, a partition τ [n] → [m], and a function f [n] →ℝ_≥ 0. Assume that there exists access to an oracle O_τ|i⟩|0⟩|0⟩→|i⟩|τ(i)⟩|f(i)⟩ ∀ i ∈ [n] and assume that f has an upper bound M. For ε∈ (0,1/3), δ > 0, there exists a quantum algorithm that finds ε-estimation for each s_j := ∑_τ(i = j)f(i), j ∈ [m] with probability at least 1-δ, using Õ(√(nm/ε)log(1/δ)log M) queries to O_τ and additional Õ((√(nm/ε)+m/ε)log(n/δ)log M ) gate complexity. Write f(i) as a binary number f̅_̅0̅(̅i̅)̅f̅_̅1̅(̅i̅)̅…̅ ̅f̅_̅l̅(̅i̅)̅, l = ⌈log M ⌉. For each t = 0:l, let O_t be O_t |i⟩|0⟩→|i⟩|τ(i)I(f_t(i) = 1)⟩, where I(f_t(i) = 1) is the indicator for whether f_t(i) = 1. O_t can be constructed by constant queries to O_τ and its inverse: |i⟩|0⟩|0⟩|0⟩O_τ|i⟩|τ(i)⟩|f(i)⟩|0⟩↦|i⟩|τ(i)⟩|f(i)⟩|τ(i)I(f_t(i))⟩O_τ^-1|i⟩|0⟩|0⟩|τ(i)I(f_t(i) = 1)⟩. Applying mqcounting_appendix with oracle O_t and δ' = δ/l, mqcounting outputs s̃_j^t for j = 1:m, s̃_j^t is an ε-estimation for s_j^t = ∑_τ(i) =j f_t(i) using Õ(√(nm/ε)log(l/δ)) calls for O_t and additional Õ((√(nm/ε)+m/ε)log(nl/δ)) gate complexity. Let s̃_j = ∑_t = 0^l 2^t s̃_j^t. |s̃_j - s_j| ≤∑_t = 0^l 2^t|s̃_j^t - s_j^t| ≤ε∑_t = 0^l 2^t s_j^t ≤ε s_j. Therefore, s̃_j is an ε-estimation of s_j, ∀ j ∈ [m]. The total complexity is Õ(√(nm/ε)log(1/δ)log M) queries to O_τ, Õ((√(nm/ε)+m/ε)log(n/δ)log M ) gate complexity, and additional O(mlog M) classical processing time. § PROOFS OF QUANTUM LOWER BOUNDS §.§ Auxiliary Lemmas In the proofs of our quantum lower bounds, we use the following tools. For the alphabet set Σ,Γ, functions f: D_1 → K and g : 𝒟_2 →Γ with 𝒟_1 ⊆Γ^n,𝒟_2 ⊆Σ^m, let f ∙ g = f(g^n). The bounded-error quantum query complexity Q satisfies Q(f ∙ g) = Θ(Q(f) · Q(g)). For the alphabet set Σ, functions g : 𝒟→{0,1} with 𝒟⊆Σ^m, the bounded-error quantum query complexity Q satisfies Q(g^n) = Θ(nQ(g)). Plug f = id in composition_theorem. Then, we only need to prove that Q(f) = Ω(n). This can be seen, e.g., by reducing to PARITY, which is defined and proved to have Q(PARITY) = n/2 in <cit.>. We define the following problems: * Decisional Quantum Counting: f_n',l,l' S →{0,1} where S ⊂{0,1}^n',l l' is a partial Boolean function defined as f_n',l,l'(x_0,x_1,…,x_n'-1)={[ 0 if |X| = l; 1 if |X| = l'; not defined otherwise ]. where |X| = ∑_i=0^n'-1 x_i. The notation f_n',l,l'^(k) S^k →{0,1}^k represents the repeated direct product of f, i.e. f_n',l,l'^(k)(x^(1),x^(2),…,x^(k)) = (f_n',l,l'(x^(1)),f_n',l,l'(x^(2)),…,f_n',l,l'(x^(k))). * Approximate Bits Finding: _k⊆{0,1}^k ×{0,1}^k is a relation problem where for input bits x_1,…,x_k, output bits c_1,…,c_k are correct iff the hamming distance between x and c is less than ε_0 · k for some absolute constant 0<ε_0<1 to be determined in the proof of k-means-lower. * The operator ∘ composites a relation and a function in the natural way, resulting in a relation problem on S^k ×{0,1}^k. Any bounded-error quantum algorithm that computes f_n',l,l', given the input as an oracle, must make Ω(√(n' / Δ) + √(n'(n' - m) / Δ)), where Δ := |l - l'| and m ∈{l,l'} s.t. |m - n'/2| is maximized. §.§ Proof of lower-mqc-counting-main Now, we give the proof of lower-mqc-counting-main, which is restated below: Every quantum algorithm that solves the multidimensional counting problem (Definition <ref>) w.p. at least 2/3 uses at least Ω(√(nk)ε^-1/2) queries to O_τ. Let T = ⌊ 0.1 ε^-1⌋. Assume M is even. We reduce from the problem f_n/m,T,T+1^m/2. By lower-quantum-counting and dirsum_theorem, Q(f_n/m,T,T+1^m) = Ω(√(nm)ε^-1/2). The reduction applies by defining τ_i = 2a + x_i + 1 for i = am / 2 + b where 1 ≤ b ≤ m/2 , calling the quantum multidimensional counter to get n_1,n_2,…,n_m, and outputting n_2 - T, n_4 - T, …, n_m - T. §.§ Proof of lower-clustering-main Now, we prove lower-clustering-main, which is restated below: Assume that ε is sufficiently small. Consider the Euclidean k-means/median problem on data set D = {x_1,…,x_n}⊂ℝ^d. Assume a quantum oracle O_x |i, b⟩ := |i, b ⊕ x_i⟩. Then, every quantum algorithm outputs the followings with probability 2/3 must have quantum query complexity lower bounds for the following problems: * An ε-coreset: Ω(√(nk)ε^-1/2) for k-means and k-median (k-means-coreset-lower); * An ε-estimation to the value of the objective function: Ω(√(nk)+√(n)ε^-1/2) for k-means and k-median (k-means-cost-lower); * A center set C such that (C) ≤ (1 + ε)(C^*) where C^* is the optimal solution: Ω(√(nk)ε^-1/6) for k-means; Ω(√(nk)ε^-1/3) for k-median (k-means-lower). We prove these different settings separately as follows. §.§.§ Coreset Output Assume that ε is sufficiently small. Consider the Euclidean (k,z)-clustering problem on data set D = {x_1,…,x_n}⊂ℝ^d. An oracle O_x |i, b⟩ := |i, b ⊕ x_i⟩ is accessible. Then, every quantum algorithm that outputs an ε-coreset w.p. at least 2/3 uses at least Ω(√(nk)ε^-1/2) queries to O_x. We reduce from the multidimensional counting problem. The instance is 1-dimensional. Let B=n^100. The reduction is simply defining x_i = τ_i · B. After getting an ε-coreset from the (m,z)-clustering solver, we are able to query an ε-estimate of _z(D, C_i) where C_i = {B, 2B, …, i · B + 1, mB}, which equals to |D_i|, completing the proof. §.§.§ Objective Function Estimation Assume that ε is sufficiently small. Consider the Euclidean (k,z)-clustering problem on data set D = {x_1,…,x_n}⊂ℝ^d. An oracle O_x |i, b⟩ := |i, b ⊕ x_i⟩ is accessible. Then, every quantum algorithm that outputs a real number Ã∈ (1 ±ε) min_C^*(C^*) w.p. at least 2/3 uses at least Ω(√(nk)+√(n)ε^-1/2) queries to O_x. Our proof has two parts: Ω(√(nk)) and Ω(√(n)ε^-1/2). First, we prove that the complexity is Ω(√(nk)). We can assume that k = o(n). We reduce from the problem f_n, k, k + 1, which has lower bound Ω(√(nk)) by lower-quantum-counting. The instance is one-dimensional. We set x'_i = 0 if x_i = 0 and x'_i = i if x_i=1. Then, we estimate the (k,z)-clustering objective function. In the 0-case, the objective function value must be 0 (we have k centers for k points s.t. x_i=1); in the 1-case, the objective function value is greater than 0. Thus, an ε-approximation is able to distinguish two cases, completing this part. Second, we prove that the complexity is Ω(√(n)ε^-1/2), even for k=1. Let T = ⌊ 0.1 ε^-1⌋. We may assume ε^-1 = o(n). We reduce from the f_n,T,T+1. The instance is one-dimensional. The reduction is simply setting x'_i = x_i. Let a = ∑_i=1^n x_i. We need to distinguish two cases: a = T and a = T + 1. The objective function is f(x) = a|x|^z + (n - a)|1 - x|^z. Assume z > 1. By calculus, we can see min_x f(x) = a(n-a)/((n-a)^1/(z-1) + a^1/(z-1))^z-1. Thus, to prove that an ε-estimation can distinguish two cases, we must prove that, T(n-T)/((n-T)^1/(z-1) + T^1/(z-1))^z-1 < (1 - ε)(T+1)(n-T-1)/((n-T-1)^1/(z-1) + (T+1)^1/(z-1))^z-1. Because T = o(n), RHS/LHS∼ (1-ε) T+1/T > 1 for sufficiently small ε. As for z = 1, f(x) = a so the above argument is also valid. §.§.§ Center Set Output Assume that ε is sufficiently small. Consider the Euclidean (k,z)-clustering problem on data set D = {x_1,…,x_n}⊂ℝ^d. An oracle O_x |i, b⟩ := |i, b ⊕ x_i⟩ is accessible where the second register saves the binary representation of a real number and can have any polynomial number of qubits. Then, every quantum algorithm that outputs the optimal centers C = {c_1^,…,c_k} such that (C) ≤ (1 + ε) min_C^*(C^*) w.p. at least 2/3 uses at least Ω(min(n, √(nk)ε^-1/3z)) queries to O_x. For convenience, we assume that n is a multiple of k and focus on the z = 2 case (the proof for the z 2 case is similar). We only need to prove when 1/ε^-1/3 = o(k). Let T = ⌊ (16ε_0ε)^-1/3⌋-1 where ε_0 is to be defined. We will reduce an instance of the 2k-means problem from the problem 𝒫 = _k∘ f_n/k,T,T+1^(k). Intuitively, solving the problem 𝒫 need to solve k independent cases of a quantum counting problem (distinguishing l 1s from l' 1s), but only need to be correct on a constant fraction of instances. We prove that the quantum algorithm must cost k times the queries of the k=1 case, which can be lower bounded by lower-quantum-counting. Now we describe the reduction. The hard instance is consist of k unit balls far apart and each unit ball has T or T+1 unit vectors and n-T or n-T+1 origins on it. Let 𝒜 be an optimal algorithm for the 2k-means problem. The input of the problem 𝒫 is x ∈ S^k where S = {x ∈{0,1}^n/k : |X| = T or T+1}. Let x = x_1x_2… x_n. We map each x_i to a point in ℝ^d for d = 100log k / ε^2. Let B = n^100 and i = ak + b for 0 ≤ a < k,1 ≤ b ≤ n/k. Define v_i = (i · B, 0, …, 0). If x_i = 0, we map it to the point v_2a; if x_i = 1, we draw t_i {2, 3, …, d} uniformly and randomly. Then we set the point as v_2a+1 + e_t_i where {e_1,e_2,…,e_d} is the standard basis of ℝ^d. Note that by the union bound and the Birthday Paradox, {t_i} are distinct in each group of size n/k w.h.p. We condition on this from now on. Since the reduction is classical, simple, and local, we can implement the oracle for the 2k-means problem directly. Finally, we call 𝒜 on the above instance and output ((t( c_2 - v_1 _2)), …, (t( c_2i - v_2i-1_2)), …, (t( c_2n - v_2n-1_2))), where: * t(x) = 1/1/√(T) - 1/√(T+1)[1/√(T)-min(max(x, 1/√(T+1)), 1/√(T))] is a normalizing mapping; * (x) = {[ 0 x ≤1/2; 1 x > 1/2 ].. By easy adjustments (we can assume that there is a point in each of 2k balls centering at v_i with radius B/5 because otherwise the cost is larger than B/10, which is very large; then, we can move each center to its nearest point on the unit ball centered at v_i), we can assume without loss of generality that the solutions outputted by the 2k-means solver have the following properties: * c_2a-1 = v_2a-2 for 1 ≤ a ≤ k; * c_2a, v_2a-1_2 ≤ 1. Then, the cost of the clustering can be seen as the sum of costs of k independent 1-mean problems and each of the problem has size T or T+1. It is well known that, in the 1-mean problem of size n, (c) = (c^*) + n c - c^* ^2 where c^* = x_1+x_2+⋯+x_n/n is the optimal center. We decompose c_2a = v_2a-1 + g_a. Let T_a = {e_t_i : x_i = 1, (a-1)k < i ≤ ak }. (Recall that |T_a| = T or T+1.) Define g_a^* = 1/|T_a|∑_e ∈ T_a e. Then, the clustering is ε-optimal if and only if ∑_a=1^k |T_a| ‖ g_a - g_a^* ‖^2 < ε·∑_a=1^k 1/2|T_a|∑_x y ∈ T_a(x,y)^2 ⟹ ∑_a=1^k |T_a| ( g_a - g_a^*)^2 < ε·∑_a=1^k (|T_a| - 1) ⟹ T ·∑_a=1^k ( g_a - 1/√(|T_a|))^2 < ε· k · (T - 1) ⟹ ∑_a=1^k ( g_a - 1/√(|T_a|))^2 < ε· k. Note that t(1/√(|T_a|)) gives f_n/k,T,T+1 in the i-th block of size n/k, justifying the inner function of the problem 𝒫. ⟹ (1/√(T) - 1/√(T+1))^2∑_a=1^k (t( g_a ) - t(1/√(|T_a|)))^2 < ε· k ⟹ 1/4T · (T + 1)^2∑_a=1^k (t( g_a ) - t(1/√(|T_a|)))^2 < ε_0/16(T+1)^3· k ⟹ ∑_a=1^k (t( g_a ) - t(1/√(|T_a|)))^2 < ε_0/4· k. Intuitively, the k-means solver needs to solve k independent cases of f_n/k,T,T+1 where the right answers are t(|T_a|^-1/2). t( g_a ) ∈ [0,1] are a fractional guess in [0,1]^k of {0,1}^k. For convenience, we round the output. A simple lemma is required to bound the error of rounding: Given x_1, x_2, …, x_k ∈ [0,1] and y_1, y_2, …, y_k ∈{0,1}, we have that ∑_i=1^k 1_(x_i) y_i≤ 4∑_i=1^k (x_i - y_i)^2 . One has: ∑_i=1^k (x_i - y_i)^2 = ∑_(x_i) y_i (x_i - y_i)^2 + ∑_(x_i) = y_i (x_i - y_i)^2 ≥ 1/4∑_t(x_i) y_i 1. Back to the proof of k-means-lower. Plugging x_i = t( g_a ) and y_i = t(1/√(|T_a|)) in the above lemma, we have that the hamming distance between the output of our solver for 𝒫 and f_n/k,T,T+1^(k) is less than ε_0 k, so it is indeed a solver for the relation problem 𝒫. Now it is sufficiently to prove the lower bound for the problem 𝒫. Consider another problem defined simply as 𝒫' = f_n/k,T,T+1^(k). Applying dirsum_theorem, we have that Q(𝒫') = k Q(f_n/k,T,T+1). By lower-quantum-counting, Q(f_n/k,T,T+1) = Θ(√(n/k T)). Hence, Q(𝒫') = Θ(√(nk)ϵ^-1/6). Thus, there exists a constant C_0 > 0 such that every quantum algorithm that solves 𝒫' w.p. at least 2/3 uses at least C_0 √(nk)ε^-1/6 queries. Let 𝒜 query the oracle for t times. We construct an algorithm 𝒜'(x) for 𝒫' from 𝒜: 𝒜' calls c ←𝒜(x) and then uses the Grover search (getall) to find the set S = {i ∈ [k] : f_n/k,T,T+1^(k)(x)_i c_i } and then flips the bits of c in S and output c. By the definition of 𝒫', |S| ≤ε_0 · k. And f_n/k,T,T+1^(k)(x)_i can be computed by π/2√(nT/k) + n/k ≤ 2√(nT/k) queries by the Grover search too. Thus, finding S costs 2(π/2√(ε_0 · k · k) + ε_0 · k)√(nT/k)≤ 100 ε_0^5/6√(nk )ε^-1/6 queries. We then have t + 100 ε_0^5/6√(nk )ε^-1/6≥ C_0 √(nk)ε^-1/6. Now set ε_0 = (C_0/1000)^6/5 and solve the inequality, we get t ≥ 0.9 C_0 √(nk)ε^-1/6, completing the proof.
http://arxiv.org/abs/2306.10792v1
20230619091104
NAR-Former V2: Rethinking Transformer for Universal Neural Network Representation Learning
[ "Yun Yi", "Haokui Zhang", "Rong Xiao", "Nannan Wang", "Xiaoyu Wang" ]
cs.LG
[ "cs.LG", "cs.CV" ]
Variability of echo state network prediction horizon for partially observed dynamical systems Amit Apte Received 29 October 2022 / Accepted 26 February 2023 ============================================================================================= As more deep learning models are being applied in real-world applications, there is a growing need for modeling and learning the representations of neural networks themselves. An efficient representation can be used to predict target attributes of networks without the need for actual training and deployment procedures, facilitating efficient network deployment and design. Recently, inspired by the success of Transformer, some Transformer-based representation learning frameworks have been proposed and achieved promising performance in handling cell-structured models. However, graph neural network (GNN) based approaches still dominate the field of learning representation for the entire network. In this paper, we revisit Transformer and compare it with GNN to analyse their different architecture characteristics. We then propose a modified Transformer-based universal neural network representation learning model NAR-Former V2. It can learn efficient representations from both cell-structured networks and entire networks. Specifically, we first take the network as a graph and design a straightforward tokenizer to encode the network into a sequence. Then, we incorporate the inductive representation learning capability of GNN into Transformer, enabling Transformer to generalize better when encountering unseen architecture. Additionally, we introduce a series of simple yet effective modifications to enhance the ability of the Transformer in learning representation from graph structures. Our proposed method surpasses the GNN-based method NNLP by a significant margin in latency estimation on the NNLQP dataset. Furthermore, regarding accuracy prediction on the NASBench101 and NASBench201 datasets, our method achieves highly comparable performance to other state-of-the-art methods. § INTRODUCTION With the maturity of deep learning technology, an increasing number of deep network models of various sizes and structures are being proposed and implemented in academic research and industrial applications. In this process, the rapid deployment of networks and the design of new networks that meet task requirements are significant. To address this issue, researchers propose using machine learning models to solve the deployment and design problems of the models themselves. One popular strategy is encoding the input neural network and utilizing the resulting neural network representation to predict a specific target attribute directly without actually executing the evaluation program. In recent years, we have witnessed success in accelerating model deployment and design processes with the help of neural network representations <cit.>. Taking the advantages of latency predictors <cit.>, significant time cost and expertise efforts can be saved by not having to carry out the time-consuming process of compilation, deployment, inference, and latency evaluation when engineers choose networks for application. Through the use of accuracy predictors <cit.>, researchers can avoid the resource-intensive process of network training and instead perform a forward inference process to evaluate the accuracy of a multitude of networks. This measure dramatically reduces the time cost associated with network design. Although the vanilla Transformer is designed for natural language processing, Transformer architecture has found widespread adoption across diverse fields owing to its strengths in global modeling and parallelizable computation <cit.>. Very recently, several researchers have attempted to learn appropriate representations for neural networks via Transformer <cit.>. These methods have indeed achieved leading performance on relevant tasks. Nevertheless, they are mainly designed for encoding the architecture of cells (basic micro units of repeatable neural networks) in cell-structured networks. As shown in the latency prediction experiment in NAR-Former <cit.>, poor generalization performance occurs when the depth of the input architecture reaches hundreds of layers. In the development process of neural network representation learning, Graph neural network (GNN) <cit.> is also a promising technique for learning neural network representations <cit.>. They model the input neural architecture as a directed acyclic graph (DAG) and operate on the graph-structured data, which comprises the node information matrix and adjacency matrix. Recently, the NNLP <cit.> introduced a dedicated latency prediction model based on GNNs, which is capable of encoding the complete neural network having hundreds of layers and achieving a cutting-edge advance. In fact, both cell-structured architectures and complete neural networks are widely used in various applications. Cell-structured models offer good scalability, allowing for easy scaling by adding or removing cells. This adaptability makes them suitable for addressing problems of different complexities and data sizes, while also facilitating incremental model development and deployment.omplete neural networks provide better flexibility in connectivity and can achieve higher accuracy in certain cases. Furthermore, in some cases, such as in latency estimation, encoding the complete network is necessary. To handle various network architectures in different tasks, both GNN-based and Transformer-based models are necessary. However, this issue of utilizing multiple architectures can introduce constraints that may not be conducive to practical applications. For instance, when a designed network requires specific attributes, having similar model structures and high-accuracy predictions for different attributes can reduce code redundancy and improve work efficiency. In this paper, we build upon the research conducted in NAR-Former <cit.> and present a novel framework called NAR-Former V2 for universal neural network representation learning. Our framework can handle cell-structured networks and learning representations for the entire network. To accomplish this, we incorporate graph-specific properties into the vanilla Transformer and introduce a graph-aided attention-based Transformer block. This approach combines the strengths of both Transformer and graph neural networks (GNNs). Extensive experiments are conducted to evaluate our proposed framework. Results show that:(1) our method can be applied to predict different attributes, can outperform the state-of-the-art method in latency prediction on the NNLQP dataset <cit.>, and can achieve promising results in accuracy sorting prediction on the NAS-Bench-101 and NAS-Bench-201 datasets <cit.>; (2) our method has good scalability, which is capable of encoding network having only a few operations or complete neural networks that have hundreds of operations. § RELATED WORK §.§ Representation and attribute prediction of neural networks Neural network representation learning is the base for evaluating the attributes of different networks via machine learning models. Early methods <cit.> construct representation models for learning neural network representation based on LSTM and MLP. Peephole <cit.> inputs the embedding of each layer to LSTM to predict accuracy, which neglects the topological structure and is limited to handling only sequential architectures. Later, in order to better capture the structural information of the network, an accuracy predictor <cit.> uses a binary path encoding with a length equal to the number of possible paths from input to output given in terms of operations, where the element at the corresponding position of the path presenting in the input network is set to 1. When the neural network is regarded as a directed acyclic graph, the adjacency matrix describes the connection between nodes, so it is naturally used to encode the topological structure of the neural network. NAS-Bench-101 <cit.> proposed to encode the given neural network as a concatenated vector of a flat adjacency matrix and a list of node labels. Many other methods <cit.> realize accuracy and latency prediction by directly inputting the original two-dimensional adjacency matrix together with the node information matrix to GNN, which can realize the explicit encoding of the input network topology. Recently, other methods have focused on enhancing the original GNN or introducing transformers to obtain more meaningful neural network representations <cit.>. §.§ Transformer Transformer <cit.> is a self-attention-based neural network architecture that has revolutionized natural language processing <cit.> and has been adopted in many other fields <cit.>. Transformer has recently been successfully introduced into neural network representation learning <cit.>. TNASP <cit.> inputs the sum of operation type embedding matrix and Laplacian matrix into standard Transformer. NAR-Former <cit.>, on the other hand, encodes each operation and connection information of this operation into token and inputs all tokens into a proposed multi-stage fusion transformer. Excellent attribute prediction results have been achieved on cell-based dataset by using these methods. However, the strong long-range modeling ability of the self-attention mechanism may also result in subtle local variation affecting the representation of all tokens. Due to the potential impact of this feature on the generalization ability, although NAR-Former <cit.> has made attempts to encode complete neural networks, the results are still unsatisfactory. §.§ Graph neural network GNNs are designed to handle graph-structured data, which is a fundamental representation for many real-world problems such as social network analysis and recommendation systems <cit.>. Given that neural networks can be viewed as graphs, GNN-based models have emerged as a prominent and widely adopted approach for neural network representation learning <cit.>. GNNs show generalization ability through a simple mechanism of aggregating information from neighbors. For instance, the recently proposed GNN-based model <cit.> can obtain representations of neural networks with hundreds of layers and achieves new state-of-the-art results in latency prediction, even if the input network structure has not been seen during training. Nevertheless, the simple structural characteristics of GNNs, which contribute to its strong generalization ability, also lead to the need for further improvement in the performance of methods based on original GNN in cellular structure and complete neural network representation learning. Therefore, it is a promising approach for neural network representation learning to combine the Transformer and GNN to leverage the strengths of both models. § METHOD §.§ Motivation As mentioned in the <ref>, Transformer-based models have demonstrated remarkable performance in encoding and learning representations of neural networks when the input is in the form of cells. However, when dealing with complete deep neural networks (DNNs) consisting of hundreds of layers, and the depth of the input data is unknown during training, they may sometimes exhibit poorer performance compared to GNN-based methods. Additionally, as highlighted in <cit.>, real-world applications often show significant differences in the topologies and depths between training and test samples. Consequently, representation learning models must possess strong generalization abilities for handling unseen data. In this regard, GNN-based models appear to achieve better performance. This observation has prompted us to reconsider the two types of inputs, namely cells and complete DNNs, as well as the two representation learning models, the Transformer and GNN. Through a detailed comparative analysis of the structures of the Transformer and GNN, we speculate that the insufficient generalization capability of Transformer-based methods may be attributed to its structure and computation characteristics. As we know, the self-attention structure in transformers is a crucial design that allows for the effective extraction of global features in a data-driven manner. However, this structure becomes a double-edged sword when learning network representations. For input neural networks with depths of hundreds of layers, the Transformer's impressive capability to capture global information can sometimes lead to excessive sensitivity. This stems from the fact that the Transformer models interactions between all tokens using its self-attention mechanism, treating the entire sequence as a fully connected graph. This dense attention mechanism can give rise to a particular issue: even a subtle variation, such as reducing the kernel size in layer "i" from 5×5 to 3×3, can affect the representation of all other layers, ultimately leading to significant differences in the final representation. As a result of this issue, the trained model may be biased toward fitting the training data. Consequently, when the model is employed for inferring architectures outside the training data distribution, it yields inferior results and demonstrates poorer generalization performance. The corresponding experiments are presented in <ref>. [1]https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SAGEConv.html §.§ Transformer grafted with GNN Fig.1 shows the vanilla transformer block, GNN block, and our proposed graph-aided attention Transformer block. As shown in Fig.1 (a), the vanilla Transformer block has two major parts: Ĥ^l = SelfAttn(LN(H^l-1)) + H^l-1, H^l = FFN(LN(Ĥ^l)) + Ĥ^l, where H^l is the feature for the layer l. Ĥ^l is an intermediate result. SelfAttn, FFN, and LN refer to self-attention, feed-forward network, and layer normalization, respectively. GNN block just has one major part, where the representation is updated following: Ĥ^l = GraphAggre(H^l-1, A) + W^l_rH^l-1, H^l = L_2(Ĥ^l), where GraphAggre(H^l-1, A) = W_a^l(Norm(A)H^l-1). A ∈ℝ^N × N is the adjacency matrix, and L_2 denotes the l2-normalization function. W with different superscripts and subscripts represents different learnable transformation matrices. Comparing formulas (3) and (4) with formulas (1) and (2), we can observe two major differences between the Transformer block and GNN block: * The Transformer utilizes self-attention to fuse information from a global perspective, while the GNN uses graph aggregation to fuse neighbor information based on the adjacency matrix. * The Transformer block includes an additional FFN (Feed-Forward Network) component, which enhances information interaction between channels. The advantage of self-attention lies in its data-driven structure, allowing for flexible adjustment of information fusion weights based on the input data. On the other hand, the advantage of GNN is that graph aggregation focuses on the topological structure. These two advantages are not contradictory to each other. Consequently, we have naturally come up with an idea to combine the strengths of self-attention and graph aggregation. This approach inherits the flexibility of self-attention while benefiting from the good generalization capability of graph aggregation. To implement this idea, we consider the neural network encoded as a graph, with the operations or layers in the network treated as nodes. Assuming the graph has N nodes, the transformer layer we have designed for universal neural network representation learning (<ref> (c)) is calculated as follows: H^l = TAEnhance(H^l-1, D), Ĥ^l = L_2(GraphAttn(H^l, A) + W_r^lH^l), H^l = GFFN(LN(Ĥ^l)) + Ĥ^l, where D ∈ℝ^N × 1 is a vector that records the number of nodes directly connected to each node. The Graph-aided Attention (GraphAttn) module is responsible for performing attention calculations using the properties of the graph structure to adjust global self-attention. The Type-aware Enhancement module (TAEnhance) is utilized to further enhance the representation. We introduce Grouped Feed Forward Network (GFFN) by introducing group linear transformation into the original FFN. In the following sections, we will provide a detailed introduction to each component. Graph-aided attention In the proposed graph-aided attention, we employ the adjacency matrix to govern the attention calculation range. Moreover, the adjacency matrix characterizes the inter-layer connection relationships within the neural network, enabling the model to acquire topology knowledge. Hence, we define this module as the Graph-aided Attention module: X^l = Sigmoid(W_q^lH^l + b_q^l), S^l = (X^lX^lT/√(d)) ⊙ A, Z^l = W_a^l(Norm(S^l)H^l) + b_a^l. The Norm(·) means that for each node, the attention weights between it and other nodes are transformed to (0, 1) by dividing it by the sum. The character b with different superscripts and subscripts represents different learnable biases. The d refers to feature dimension of X^l. Note that simply using the adjacency matrix to control the attention map in self-attention is insufficient. To make this approach work, we have discarded the original softmax operation in self-attention and replaced it with a linear attention mechanism. This is because the softmax operation tends to focus excessively on the current node while neglecting neighboring nodes. Consequently, we have inserted a sigmoid activation function before the linear attention to ensure that all values in the attention map are positive. For further comparisons with the original self-attention, please refer to the supplementary. Type-Aware enhancement module The connection between a layer and other layers is related to the type of that layer. Therefore, the number of connected layers in each layer can be used to assist the model in learning the type of layer. By fully utilizing the internal characteristics of this graph-structured data, it is beneficial to improve the learned representations. The enhanced representation is obtained by: TAEnhance(H^l-1, D) = Sigmoid(W_d^lD + b_d^l) ⊙ H^l-1. §.§ Universal representation learning for neural network In this subsection, we will present the construction of a comprehensive framework for neural network encoding and representation based on the proposed enhanced Transformer block. We will also explain how this framework can be utilized to predict network attributes. The overall system, as depicted in <ref>, is composed of three consecutive stages: neural network encoding, representation learning, and attribute prediction. Neural network encoding We have taken inspiration from the tokenizer used in NAR-Former <cit.> and made certain modifications to encode the input neural network. For a given network consisting of N layers or operations, whether it is a cell architecture or a complete DNN, we represent it as a sequence feature comprising vectors corresponding to each layer: T = (t_1, t_2, ⋯, t_N) ∈ℝ^N × C. Each vector encapsulates both the operation and position information: t_i = (t_i^op, t_i^pos) ∈ℝ^C. Following the encoding scheme proposed in NAR-Former <cit.>, we use the position encoding formula <cit.> to transform the single real-valued numbers (e.g. operation labels and node position indices) of relevant information into a higher-dimensional space. We denote this mapping scheme as f_PE(·). For the node position encoding t_i^pos, since our improved transformer can obtain the topology information of the network with the help of the adjacency matrix, we only encode the self-position of the node with f_PE(·). For the operation encoding t_i^op, there are slight differences in the specific encoding content for input networks of different scales. If the input is in the form of a cell architecture <cit.>, there are usually no more than ten different options for each architecture operation. In this case, we directly assign category labels to all possible operations, and then use the function f_PE(·) to encode the label of the operation to obtain t_i^op. However, when a complete DNN is used as input <cit.>, more abundant operational information can be extracted and encoded. In this case, we first use one-hot vectors, which ensure the same distance between different categories, to encode the type of operation (e.g. convolution, batch normalization, ReLU, concatenation). Then use the function f_PE(·) to encode the properties (e.g. kernel size, number of groups) of the operation, which is then concatenated with the one-hot type vector as t_i^op. Representation learning The model for learning neural network representations H^K is constructed by stacking multiple instances of our proposed enhanced Transformer blocks. These improvements, specifically tailored for neural network data, allow the model to learn representations that are more meaningful and exhibit enhanced generalization capabilities. Attributes predicting Taking the representation H^K as input, the target attribute can be predicted by using the predicting head: ŷ = -logsigmoid(FC(ReLU(FC(ReLU(FC( H^K))))). Currently, among the various attributes of the network, accuracy and latency are the two main types of predicted objects. Because they have extremely high acquisition costs, and are the primary manifestation of network performance and efficiency. For latency prediction, due to the strong correlation between batch size, memory access, parameter quantity, and FLOPs with network latency, the encoding corresponding to these characteristics and the representation H^K are input into the predicting head together. To train the whole latency prediction model, the mean square error (MSE) function is adopted to measure the difference between predicted results and the ground truths. For accuracy prediction, in addition to MSE loss function, architecture consistency loss (AC_loss) and sequence ranking related loss (SR_loss) proposed by NAR-Former <cit.> are also used. Following NAR-Former, we employed a hierarchical fusion strategy in accuracy prediction experiments. We use a simplified approach, which computes the weighted sum of the outputs of each transformer layer with adaptive weights. r10cm Latency prediction on NNLQP <cit.>. Training and test sets have the same distribution. 0.85 3*Test Model 2cMAPE↓ 2cAcc(10%)↑   NNLP <cit.> Ours NNLP <cit.> Ours   avg / best avg / best avg / best avg / best All 3.47% / 3.44% 3.07% / 3.00% 95.25% / 95.50% 96.41% / 96.30% AlexNet 6.37% / 6.21% 6.18% / 5.97% 81.75% / 84.50% 81.90% / 84.00% EfficientNet 3.04% / 2.82% 2.34% / 2.22% 98.00% / 97.00% 98.50% / 100.0% GoogleNet 4.18% / 4.12% 3.63% / 3.46% 93.70% / 93.50% 95.95% / 95.50% MnasNet 2.60% / 2.46% 1.80% / 1.70% 97.70% / 98.50% 99.70% / 100.0% MobileNetV2 2.47% / 2.37% 1.83% / 1.72% 99.30% / 99.50% 99.90% / 100.0% MobileNetV3 3.50% / 3.43% 3.12% / 2.98% 95.35% / 96.00% 96.75% / 98.00% NasBench201 1.46% / 1.31% 1.82% / 1.18% 100.0% / 100.0% 100.0% / 100.0% SqueezeNet 4.03% / 3.97% 3.54% / 3.34% 93.25% / 93.00% 95.95% / 96.50% VGG 3.73% / 3.63% 3.51% / 3.29% 95.25% / 96.50% 95.85% / 96.00% ResNet 3.34% / 3.25% 3.11% / 2.89% 98.40% / 98.50% 98.55% / 99.00% § EXPERIMENTS In this section, we conduct experiments on NNLQP <cit.>, NAS-Bench-101 <cit.>, and NAS-Bench-201 <cit.> to evaluate the performance of our NAR-Former V2. A series of ablation experiments were performed to corroborate the effectiveness of our design details. More details about implementation and analysis will be provided in the supplementary materials. §.§ Implementation details Model details For latency experiments, the number of GraphAttn-based Transformer blocks is set to 2, which is the same as the baseline <cit.>. As for accuracy predicting, we fix the number of Transformer blocks to 6 to align with the standard Transformer used in the baseline <cit.>. Training details All experiments were trained using the Adam optimizer. We used a linear learning rate decay strategy with a warm-up, in which the learning rate uniformly increased to 0.001 during the first 10% of the training steps and then gradually decayed to 0. The batch size was fixed at 16. Our models are trained on a machine with a GeForce RTX 3090 GPU. To avoid randomness, each model was trained for 12 times and the two experiments with the best and worst indicators were discarded. §.§ Latency prediction We conduct latency prediction on the recently released NNLQP dataset <cit.>, which comprises 20000 complete deep learning networks and their corresponding latencies on the target hardware. This dataset has 10 different types of networks (referring to the first column of <ref>), with 2000 networks per type. Following NNLP <cit.>, we use Mean Absolute Percentage Error (MAPE) and Error Bound Accuracy (Acc(δ)) to measure the deviations between latency predictions and ground truths. The lower the MAPE, the higher the prediction accuracy, while the opposite is true for Acc(δ). Here, we considered two different scenarios. In the first scenario, the training and testing sets are from the same distribution. We constructed the training set with the first 1800 samples from each of the ten network types, and the remaining 2000 networks were used as the testing set. The detailed results are shown in <ref>. When testing with all test samples, the average MAPE of our method is 0.4% lower than that of NNLP <cit.>, and the average Acc(10%) is 1.16% higher than that of NNLP. When tested on various types of network data separately, except for the NASBench201 family, our method consistently outperforms NNLP. This indicates that our improved transformer, which utilizes the structural characteristics of the graph, has learned more reasonable representations than the original GNN. The second scenario has more practical application significance, that is, the network type needed to be inferred is not seen during the training progress. There are ten sets of experiments in this part, with each set taking one type of network as the test set, while all samples from the other nine types of networks are used as the training set. As shown in <ref>, it can be seen that using only FLOPs and memory access information to predict latency is not enough. Suffering from the gap between the accumulation of kernel delays and the actual latency, kernel-based methods (TPU<cit.> and nn-Meter<cit.>) perform worse than the GNN-based model NNLP that directly encodes and predicts the entire network. Benefiting from considering the entire input network and grafting GNN into the transformer, our method achieves the best MAPE and Acc(10%) on the average indicators of 10 experimental groups. Compared with the second-best method NNLP, the average Acc(10%) of our method has an marked increase of 8.08%. r7cm Accuracy prediction on NAS-Bench-101 <cit.>. “SE” denotes the self-evolution strategy proposed by TNASP <cit.>. 0.8 5*Backbone 52cmMethod 3cTraining Samples     0.1% 0.1% 1%     (424) (424) (4236) 3-5     3cTest Samples     100 all all CNN ReNAS <cit.> 0.634 0.657 0.816 2*LSTM NAO <cit.> 0.704 0.666 0.775 NAO+SE 0.732 0.680 0.787 3*GNN NP <cit.> 0.710 0.679 0.769 NP + SE 0.713 0.684 0.773 CTNAS <cit.> 0.751 - - 3*Transformer TNASP <cit.> 0.752 0.705 0.820 TNASP + SE 0.754 0.722 0.820 NAR-Former <cit.> 0.801 0.765 0.871 NAR-Former V2 0.802 0.773 0.861 §.§ Accuracy prediction §.§.§ Experiments on NAS-Bench-101 NAS-Bench-101 <cit.> provides 423624 different cell architectures and the accuracies of the complete neural network constructed based on each cell on different datasets. Following <cit.>, 0.1% and 1% of the whole data is used as training set and another 200 samples are used for validation. We use Kendall's Tau <cit.> to evaluate the correlation between the predicted sequence and the real sequence, and a higher value indicates a better results. The Kendall's Tau is calculated on the whole dataset or 100 testing samples. We report the average results of our predictor in 10 repeated experiments. Results are shown in <ref>. When only 424 samples were available for training, our method achieves the highest Kendall's Tau. We achieves 0.773 when tested using the whole testing set, which is 0.8% and 8.9% higher than the transformer-based model <cit.> and GNN-based model <cit.>, respectively. This proves that the modifications we made to the transformer based on inspiration from GNN are effective. r7cm Accuracy prediction on NAS-Bench-201 <cit.>. “SE” denotes the self-evolution strategy proposed by TNASP <cit.>. =1em 0.8 3*Backbone 32cmModel 2cTraining Samples     (781) (1563)     5% 10% 2*LSTM NAO <cit.> 0.522 0.526 NAO + SE 0.529 0.528 2*GNN NP <cit.> 0.634 0.646 NP + SE 0.652 0.649 3*Transformer TNASP <cit.> 0.689 0.724 TNASP + SE 0.690 0.726 NAR-Former <cit.> 0.849 0.901 NAR-Former V2 0.874 0.888 §.§.§ Experiments on NAS-Bench-201 NAS-Bench-201 <cit.> is another cell-based dataset, which contains 15625 cell-accuracy pairs. Following <cit.>, 5% and 10% of the whole data is used as training set and another 200 samples are used for validation. We use Kendall's Tau <cit.> computed on the whole dataset as the evaluation metric in this part. Average results of our predictor of 10 runs are reported. Results are shown in <ref>. The conclusion of this experiment is similar to <ref>. When compared with the second-best method, a substantial improvement (2.5%) of Kendall's Tau can be seen in the setting of training with 781 samples. Compared to NAR-Former, NAR-Former V2 achieves comparable accuracy prediction performance with fewer parameters. In latency prediction experiments, NNLP outperforms NAR-Former by a significant margin, and NAR-Former V2 exhibits a clear advantage over NNLP (a direct comparison experiment is provided in the supplementary material). In summary, by incorporating the strengths of GNN, the universal representation learning framework NAR-Former V2 is significantly enhanced. NAR-Former V2 addresses the shortcomings of NAR-Former, which was overly sensitive when handling complete network structures, while still retaining the outstanding performance of NAR-Former when handling cell-structured networks. §.§ Ablation studies r7cm The influence of using different attentions. Test on EfficientNet. =1em 0.9 Attention MAPE↓ ACC(10%)↑ Global 16.88% 36.32% Local 13.20% 44.01% In this section, we conducted a series of ablation experiments on the NNLQP dataset to investigate the impact of various modifications. The results from Rows (2) and (3) in Table <ref> indicate that for type encoding without numerical relationships, using one-hot vectors with equidistant properties across different categories is more suitable. Comparing Row (3) in Table <ref> with Row (4), we observe that introducing GNN characteristics into the Transformer improves the model's ability to learn effective representations and achieve more accurate predictions compared to using the original GNN. When replacing the FFN with the GFFN module with eight groups (Row (5)), the number of model parameters reduces to approximately one eighth of that in Row (4), without a significant decrease in prediction accuracy. Compared to Row (5), Row (6) demonstrates an increase of 0.35% in ACC(10%) and 0.95% in ACC(5%). This confirms the role of the type-aware enhancement module in further refining and enhancing the rationality of the representations. To verify our hypothesis regarding the generalization ability of the network and the effectiveness of the proposed graph-aided attention, we conducted comparative experiments in scenarios where the training and testing data have different distributions. The results of these experiments are presented in Table <ref>. In order to perform the experiment on global attention, we excluded the step of multiplying the adjacency matrix A in Equation <ref>, and instead replaced S^l with X^lX^lT/√(d). Results in Table <ref> demonstrate that incorporating the adjacency matrix to restrict the scope of attention calculation is indeed beneficial for latency prediction on unseen data. The model utilizing graph-aided attention exhibited a significant improvement of 7.68% in ACC(10%) compared to the model using global attention. §.§.§ Ablation studies of Accuracy Prediction § CONCLUSION In this paper, we combine the strengths of Transformer and GNN to develop a universal neural network representation learning model. This model is capable of effectively processing models of varying scales, ranging from several layers to hundreds of layers. Our proposed model addresses the limitations of previous Transformer-based methods, which exhibited excessive sensitivity when dealing with complete network structures. However, it still maintains exceptional performance when handling cell-structured networks. In future work, we will focus on optimizing the design of the representation learning framework and applying it to a broader range of practical applications. Such as using the proposed model to search for the best mixed precision model inference strategies. IEEEtran
http://arxiv.org/abs/2306.07908v1
20230613170142
Best-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision
[ "Fernando Diaz" ]
cs.IR
[ "cs.IR" ]
0000-0003-2345-1288 Google Montréal QC Canada [email protected] Across a variety of ranking tasks, researchers use reciprocal rank to measure the effectiveness for users interested in exactly one relevant item. Despite its widespread use, evidence suggests that reciprocal rank is brittle when discriminating between systems. This brittleness, in turn, is compounded in modern evaluation settings where current, high-precision systems may be difficult to distinguish. We address the lack of sensitivity of reciprocal rank by introducing and connecting it to the concept of best-case retrieval, an evaluation method focusing on assessing the quality of a ranking for the most satisfied possible user across possible recall requirements. This perspective allows us to generalize reciprocal rank and define a new preference-based evaluation we call lexicographic precision or lexiprecision. By mathematical construction, we ensure that lexiprecision preserves differences detected by reciprocal rank, while empirically improving sensitivity and robustness across a broad set of retrieval and recommendation tasks. Best-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision Fernando Diaz July 31, 2023 ========================================================================================================= § INTRODUCTION Evaluating ranking systems for users seeking exactly one relevant item has a long history in information retrieval. As early as 1968, <cit.> proposed Type 1 expected search length or _1, defined as the rank position of the highest ranked relevant item. In the context of TREC-5, <cit.> proposed using the reciprocal of _1 in order to emphasize rank changes at the top of the ranked list and modeling the impatience of a searcher as they need to scan for a single item. Over the years, reciprocal rank (and less so _1) has established itself as a core metric for retrieval <cit.> and recommendation <cit.>, adopted in situations where there is actually only one relevant item as well as in situations where there are multiple relevant items. Given two rankings, reciprocal rank and _1 always agree in terms of which ranking is better. Because of this, we refer to them collectively as the recall level 1 or metrics. Despite the widespread use of reciprocal rank, recent evidence suggests that it may brittle when it comes to discriminating between ranking systems <cit.>. In particular, the low number of unique values of reciprocal rank means that, especially when evaluating multiple highly-performing systems, we are likely to observe tied performance. <cit.> demonstrate that these conditions exist in many modern deep learning benchmarks. We address these issues by theoretically interpreting as a population-level metric we refer to as best-case retrieval evaluation. This allows us to propose a generalization of the ordering based on social choice theory <cit.> and preference-based evaluation <cit.>. This evaluation method, lexicographic precision or lexiprecision, preserves any strict ordering between rankings based on while also providing a theoretically-justified ordering when is tied. We compare lexiprecision and orderings using Hasse diagrams in Figure <ref>. On the left, we show the partial order of all possible positions of five relevant items in a corpus of size . Since reciprocal rank and _1 only consider the position of the first relevant item, we only have different relevance levels. While this may not be an issue in general (since is usually large), the number of rankings within each level can be very large and multiple highly effective systems can result in numerous ties. In contrast, lexiprecision has one relevance level for each unique arrangement of relevant items. That is, the number of relevance levels scales with the number relevant items and, by design, two rankings are tied only if they place relevant items in exactly the same positions. In this paper, we contribute to the theoretical understanding of evaluation through a detailed study of metrics, best-case retrieval evaluation, and lexiprecision. In Section <ref>, we motivate our work by showing that has fundamental theoretical limits, especially in situations where there are multiple relevant items. In Section <ref>, we demonstrate that can be interpreted as best-case retrieval evaluation, allowing us to to address its limitations by using methods from social choice theory and generalizing it as lexiprecision. In Section <ref>, we then conduct extensive empirical analysis to show that lexiprecision is strongly correlated with metrics while substantially improving its discriminative power.[In lieu of an isolated `Related Work' section, we have included discussion of relevant literature when necessary. This helps make connections explicit to our work. ] § MOTIVATION Our work is based on the observation that ceiling effects are inherent in evaluation. Assume a standard ranking problem where, given a query with associated relevant items, a system orders all documents in the collection in decreasing order of predicted relevance. The set of all possible rankings of is referred to as the symmetric group over elements and is represented as . For a given ranking ∈, let _i be the position of the ith highest-ranked relevant item. We can then define reciprocal rank as _1()=1/_1. When no relevant document is retrieved (e.g. if no relevant items are in the system's top k retrieval), we set _1=0. For two rankings, we define _1(,)=_1()-_1(). For the remainder of this section, we will use reciprocal rank for clarity although the analysis applies to _1 as well. Although we can easily see that there are different values for _1(), we are interested in the distribution of ties amongst system rankings for these values as predicted by theoretical properties of reciprocal rank. Specifically, we want to compute, for a given position of the first relevant item _1 and a random second ranking, the probability that we will observe a tie. For any _1, there are -_1-1 tied arrangements of positions of relevant items amongst all of the possible arrangements from a second system. If we sample an arrangement of relevant items uniformly at random, then the probability of a tie with is Pr(_1=_1|_1)=-_1-1/. We plot this probability in Figure <ref>. We can observe that, when we have few relevant items (i.e. small ), we have a relatively small and uniform probability of ties across all values of _1. However, as we increase the number of relevant items, the distribution begins to skew toward a higher probability of a tie as _1 is smaller. This means that, if we have a ranking where the first relevant item is close to the top, even if the second ranking is drawn uniformly at random, we will be more likely to find a tie than if the first relevant item were lower in the ranking. While our analysis indicates a lack of sensitivity of reciprocal rank for drawn uniformly at random as increases, we are also interested in the probability of ties when is drawn from rankings produced by real systems. We collected runs associated with multiple public benchmarks (see Section <ref> for details) and computed the the empirical distribution of ties conditioned _1 (Figure <ref>). Because of the highly skewed distribution, we plot the logarithmic transform of the probability of a rank position. As we can see, across both older and newer benchmarks, the probability of a tie for rankings when the top-ranked relevant item is at position 1 is substantially larger than if we assume is drawn uniformly at random. The 2021 TREC Deep Learning track data in particular demonstrates higher skew than others, confirming observations previously made about saturation at top rank positions <cit.>. Taken together, these results demonstrate a fundamental limitation of metrics (i.e., reciprocal rank and _1) for evaluation. As retrieval and other scenarios where reciprocal rank is used begin to attract highly performant systems, we need to extend our evaluation approaches to address these issues. § LEXICOGRAPHIC PRECISION evaluation emphasizes precision by considering the position of the top-ranked relevant item and ignoring the positions of other relevant items. However, only ever looking at the position of the top-ranked relevant item results in the ceiling effects described in the previous section. Our goal is to develop an evaluation method that preserves the ordering of a pair of rankings by (i.e., agree with _1 when _1()≠_1()) and provides a justified ordering of a pair of rankings when is tied (i.e., generate a sensible order when _1()=_1()). Although metrics like expected reciprocal rank <cit.> and average precision include reciprocal rank as a component in their computation, they are not guaranteed to preserve the ordering of reciprocal rank when there is one. In this section, we will interpret metrics as best-case retrieval evaluation, allowing us to derive a preference-based evaluation method based on social choice theory. §.§ Best-Case Retrieval Evaluation When a user approaches a retrieval system, there is a great deal of uncertainty about their information need. While a request such as a text query provides information about which items in the corpus might be relevant, it says much less about the user's appetite for relevant information. As a result, there is an implicit population of possible users issuing any particular request, each of whom may have a different utility for any particular ranking. In this section, we explore two types of uncertainty and demonstrate that, from both perspectives, evaluation represents the best-case utility over that population. We first consider uncertainty over recall requirements. <cit.> presented a model for evaluating rankings based on the diverse set of recall requirements that a user might have. Given a request and its associated relevant items, users may be interested in one relevant item, a few relevant items, or the complete set of all relevant items. We can assess the quality of a ranking for any particular user recall requirement with what <cit.> refers to as the Type 2 expected search length: the number of items a user with requirement i has to scan before finding i relevant items. So, each information need has recall levels and is the evaluation measure associated with users requiring exactly one relevant item. From this perspective, we can, for a specific ranking, look at how utility is distributed amongst possible users, as represented by their recall levels. For example, we can ask how utility for users with high and low recall requirements compares; or what the average utility across these populations is. While previous work has looked at the average-case utility <cit.> and worst-case utility <cit.>, in this work we suggest that represents the best-case performance over these possible users. The proof is relatively simple. Because _i monotonically degrades in rank, the best-case utility over this representation of users is _1 (equivalently _1). The next-best-case is _2 and so forth until we reach _, which we refer to as the worst-case. So, given two rankings and , observing _1()>_1() implies that the best-case performance over possible user recall requirements is higher in compared to . Next, we consider uncertainty over psychologically relevant items. When evaluating a retrieval system, we often use relevance labels derived from human assessors or statistical models. But what if a specific user does not find the top-ranked item labeled relevant actually relevant to them? For example, a user may have already seen a specific item or they may desire an item with a specific (missing) attribute. A judged relevant item might be inappropriate for any number of reasons not expressed in the request. The concept of psychological relevance <cit.> suggests that judging any item relevant in general (as is the case in many retrieval benchmarks, including those used in TREC) is a necessary but not sufficient criteria to determine an item's psychological relevance to any particular user. From this perspective, there are 2^-1 possible non-empty sets of relevant items for a specific request, each representing psychological relevance to a possible user. Nevertheless, amongst these possible users, if they are interested in precisely one relevant item, there are unique utilities. Again, since monotonically decreases in rank, the best-case utility is _1, followed by _2 until we reach _. Both uncertainty over recall levels and over psychological relevance focus on possible populations of users. Because the utility to the user implies utility to the system designer (e.g., for objectives like retention), understanding the best-case performance is valuable in decision-making. From the perspective of social choice theory, best-case retrieval evaluation is inherently optimistic and represents risk-seeking decision-making. §.§ Lexicographic Precision The problem with evaluating for best-case retrieval (as shown in Section <ref>) is the tendency for multiple rankings to be tied, especially as * we increase the number of relevant items and * systems optimize for retrieval metrics. We can address these ceiling effects by developing a best-case preference-based evaluation that focuses on measuring differences in performance instead of absolute performance <cit.>. While metric-based evaluation models the preference between rankings by first computing some evaluation metric for each ranking, preference-based evaluation explicitly models the preference between two rankings. Prior research has demonstrated that preference-based evaluation can be much more sensitive than metric-based evaluation <cit.>, making it well-suited for addressing the ceiling effects described in Section <ref>. Under best-case preference-based retrieval, we are interested in answering the question, `under the best possible scenario, which ranking would the user prefer?' In this respect, it is a user-based evaluation method, but one based on preferences and measurement over a population of users. More formally, given an information need and two rankings and associated with two systems, metric-based evaluation uses an evaluation metric : → (e.g. reciprocal rank or average precision) to compute a preference, () > () ≻ where ≻ indicates that we prefer to . Notice that, if () = (), then we cannot infer a preference between and . We contrast this with preference-based evaluation, which directly models this relationship : ×→, (,) > 0 ≻ Our goal is to design a preference-based evaluation that preserves the best-case properties of metrics with much higher sensitivity. Consider the two position vectors and in Figure <ref> associated with the two rankings and . These two vectors are tied in the best case (i.e., _1=_1). However, we can break this tie by looking at the next-best case (i.e. _2) where, because _2<_2, we say that ≻. If we had observed a tie between the next-best case, we could compare _3, and so forth. This is known as lexicographic sorting in the social choice literature <cit.> and reflects a generalization of best-case sorting. Given two sorted vectors of utilities, here reflected by the rank position, the lexicographic maximum begins by looking at utilities in the best-off positions (i.e. _1 and _1) and iteratively inspects lower utility positions until we find an inequality. If we exhaust all relevance levels, we indicate that there is not preference between the rankings. Note that a tie can only happen if two rankings have all relevant items in exactly the same positions. Lexicographic sorting generates a total ordering over all positions of relevant items, in contrast with just inspecting _1, which compresses all arrangements onto possible values. Because of its basis in lexicographic ordering, we refer to this lexicographic precision or lexiprecision. §.§ Number of Ties Under Lexicographic Precision We can contrast the number of ties as increases in metrics with the number of ties as increases in lexiprecision. In the latter, we only observe ties when the positions of the relevant items for two rankings are the same and, therefore, we have possible `values' and the number of ties given a fixed ranking is constant. If we add k relevant items, the number of `values' increases, resulting in an increase in discriminative power. Specifically, if we add k relevant items to , then the number of possible values scales exponentially in k. +k/ = ∏_i=+1^+k+1-i/i By contrast, for metrics, this increase in the number of unique position vectors needs to be allocated to a fixed values, resulting in collisions, as suggested by the pigeonhole principle. Moreover, these collisions will tend to increasingly occur at values associated with position vectors where _1 is small (Section <ref>). §.§ Best-Case Retrieval Evaluation Revisited In Section <ref>, we described two dimensions of uncertainty in retrieval evaluation: recall level and psychological relevance. In both cases, we saw that the best-case utility was represented by . In terms of preference-based evaluation, we would like to show that, for both recall level uncertainty and psychological relevance uncertainty, the highest ranked difference in utility will be _i^*, where i^*=_j∈[1,]_i(,)≠0. This is clear for recall level uncertainty because the population of possible users exactly matches the recall levels defining i^*. However, for psychological relevance uncertainty, we have 2^-1 possible users. That said, there are only possible metric values. Moreover, the number of possible users tied at the first recall level is 2^-1; at the second recall level is 2^-2; down to the final recall level where there is a single possible user. This arrangement of ties is the same regardless of the exact positions of the relevant items. Therefore, if we observe _1=0, we will observe 2^-1 ties amongst the possible psychological relevance states where where the first relevant item is at position _1. The next highest utility is, by the monotonicity of metrics, associated with the second recall level. We can continue this procedure until we observe an inequality, which will occur exactly at the first i such that _i(,)≠0. In other words, i^*. These observations are important since they demonstrate that lexiprecision generalizes evaluation and best-case performance across two types of uncertainty. §.§ Quantifying Preferences Although lexiprecision provides a ordering over a pair of rankings, it does not quantify the magnitude of the preference (i.e. the value of (,)). Defining a magnitude allows us to measure the degree of preference, which can then be averaged over multiple requests. We can define the magnitude directly as the value of _i and, therefore, defining (,) as, (,) =_i^*(,) where i^* is defined in Section <ref>. This has the advantage of, when i^*=1, reproducing the difference in reciprocal rank. Under this definition, the magnitude of preferences for higher recall levels will tend to be smaller due to the aggressive discounting in reciprocal rank. Alternatively, we can be more conservative in our quantification and just return a constant value based on the preference, defining (,) as, (,) =(_i^*(,)) where i^* is defined as above. Although the direction of the preference agrees with , we discard its magnitude and, as a result, differences at lower ranks are equal to those at higher ranks. Prior work found that looking at unweighted preference information alone can help with preference sensitivity <cit.>. §.§ Lexicographic Precision as Modeling _1 A different way to interpret lexiprecision is as a method to estimate a high-precision preference between rankings. Assume that we have some latent preference between two rankings, (,), that we know to be `high-precision'. That is, users prefer finding some relevant items quickly than all relevant items quickly. One way to model this preference is to inspect the positions of relevant items in and . From the perspective of `very high precision', observing _1(,)>0 provides significant evidence that (,)>0. What if we do not observe a preference at the first recall level? Inspired by Katz's back-off model <cit.>, we inspect the second recall level for evidence of the value of (,). If we do not observe a preference, we can progressively back off to higher and higher recall levels. While Section <ref> demonstrated that _1(,)=0 with high probability, backing off our estimates works best if, for i>1, we expect _i(,)=0 with lower probability. Using the runs associated with several public benchmarks, we computed _i for all pairs of rankings generated by multiple systems for the same query. We show the probability of a tie for the first twenty recall levels in Figure <ref>. We can see that the number of ties at _1 are high, ranging from roughly 20 Inspecting the number of relevant items retrieved confirms this. The DL 2021 submissions had 38.74± 21.75 relevant items in their retrievals, compared to web with 53.14±47.06. Meanwhile, robust submissions had 40.51± 41.49 relevant items retrieval, suggesting much higher variance and ml-1m with 7.46±8.57 relevant items retrieved and much higher variance, leading to more more ties at higher recall levels. Given that different benchmarks observed different behaviors for ties amongst recall levels, we need to understand how many recall levels we need to visit before finding evidence for . If a benchmark needs many recall levels but observes many ties at high recall levels, then our model of may be less reliable. We computed the number of recall levels needed, i^*, for each benchmark and plotted the empirical cumulative distribution function in Figure <ref>. We find that we need fewer than ten recall levels to capture 90 Although our preceding analysis demonstrates that a backoff model of based on lexiprecision will terminate at a reasonable depth, we still need to show that there is locality amongst _i. This means that we ask, if we observe _1(,)>0, how likely is it that _2(,)>0? _3(,)>0? If there is high locality amongst _i, then information from _i+1 can help in predicting the true value of _i when it is missing or tied. Note that, if we observe _i> 0 and is large, there is absolutely no guarantee that _i+1>0 since the next ranked relevant items could, in theory, occur anywhere in the range [_i+1,] and [_i+1,]. That said, given the number of ties at recall level 1, we are interested in understanding whether information at other rank positions can provide a way to distinguish tied rankings. In Figure <ref>, we computed the Pearson correlation amongst all pairs of _i for i∈[1,8] for the Robust 2004 benchmark. The fact that correlation between _i and _i+j degrades as j increases from 1 demonstrates that there is indeed high locality. The implication justifies the use of backoff modeling of . To test this hypothesis explicitly, we fit a linear model of _1 using _2,…, _4 as independent variables. We plot the coefficients of the linear regression in the solid line in Figure <ref>. The substantially larger coefficient on _2 indicates that the majority of the predictive power can be found at recall level 2 (j=1). Higher recall levels (j>1) are associated with much smaller coefficients. The actual contributions of higher recall levels are much smaller than this suggests since, because we are operating with reciprocals, the magnitude of _i shrinks as i grows. While the colinearity in Figure <ref> might explain some of this disparity in weights, the locality of individual Pearson correlations and high predictive accuracy means, from a modeling perspective, that a backoff model is justified. We repeated this analysis for predicting _2 from _3,…,_6 and similarly for _3 and _4. Similar to our observation when modeling _1, these results suggest that the next higher recall level is the most valuable predictor when modeling _i for any specific recall level. We repeated this regression analysis for explicitly cascaded data (i.e. only modeling cases when there is a tie at positions i'<i) as well as for regressing against the sign of the preference and observed identical findings. Although we omit those plots due to space constraints, they further support a backoff model intrepretation of lexiprecision. § METHODS In previous sections, we theoretically and conceptually connected to the notion of best-case retrieval evaluation, with a few illustrative empirical results. In order to rigorously test the viability of lexiprecision, we conducted a series of empirical analyses based on publicly available benchmarking data.[Code for computing lexiprecision can be found at <https://github.com/diazf/pref_eval>.] §.§ Data We analyzed the performance of lexiprecision across a variety of retrieval and recommendation tasks. Specifically, we collected runs submitted to TREC news (Robust 2004, Core 2017 and 2018), web (Web 2009-2014), and deep learning (Deep Learning 2019-2021) tracks as well as several public recommendation tasks <cit.>. We present details of these datasets in Table <ref>. §.§ Analyses Our empirical analyses were founded on two core questions, * how empirically correlated are lexiprecision and metrics, and * how much more robust is lexiprecision than metrics. Because of its widespread adoption in the research community, we will use reciprocal rank for analyses. In order to answer the first question, we conducted experiments designed to predict the agreement between lexiprecision and metrics under different conditions. We considered two types of agreement. Agreement in ranking preference tests whether agrees with . Because lexiprecision is substantially more sensitive than metrics, we only consider situations where _1(,)≠0. Because and always agree in sign, we will only show results for one of the metrics when computing ranking agreement. Agreement in system preference tests whether ∼_(_,_) agrees in sign with ∼_(_,_). This measures whether our choice of or affects its correlation with reciprocal rank. Agreement is measured as a percentage of preferences agreed upon. In order to assess the robustness of lexiprecision, we measure the number of ties observed amongst pairs of rankings and discriminative power. We claim that a robust approach has fewer ties and higher discriminative power. For discriminative power, we adopt Sakai's approach of measuring the number of statistically significant differences between runs <cit.>, using both Tukey's honestly significant difference (HSD) test <cit.> and classic paired test to compute p-values. The paired test uses the Student's t-test for reciprocal rank and <cit.>; and the binomial test for . § RESULTS §.§ Correlation with Reciprocal Rank By construction, we know that _1(,)>0(,)>0 and, so, the correlation between the two will be high. We can further test this by comparing how well lexiprecision predicts a ground truth preference between rankings based on _1. In our first analysis, given an observed _1(,)≠0, we measure the ability of lexiprecision and reciprocal based only on -1 subsequent recall levels to predict the sign of _1(,). That is, we use _1(,) as a target value and compute _1 and using suffixes _2: and _2:. Although artificial, this analysis provides an indication of the predictive value gained through cascaded modeling (as opposed to just looking at the top-ranked relevant item). We present the results in Table <ref>. As we can see, lexiprecision consistently agrees more with the target (masked) _1 than _1 of the suffix across all datasets, indicating that the additional information in higher recall levels can be used to predict the target (masked) _1. This agrees with our preliminary analysis in Section <ref>. We can also test the relationship between reciprocal rank and lexiprecision by measuring the agreement under incomplete information. Specifically, we consider removing either labels (treating unlabeled items as non-relevant) or requests (i.e. queries or users). We then measure the agreement between preferences with incomplete data and _1 on complete data (i.e. all requests and labels). Methods that agree more with reciprocal rank on complete data are considered more correlated. We present results for ranking and system agreement when removing labels (Figure <ref>) and queries (Figure <ref>). Across all conditions, we observe that the has as high or slightly higher agreement with _1 with complete information than _1 with incomplete information. This means that can accurately predict _1 with complete information as well or better than using reciprocal rank. Moreover, we observed that shows weaker system agreement which occurs because its magnitude does not decay with rank position and, therefore, resulting averages are inconsistent with averages of position-discounted reciprocal rank values. §.§ Sensitivity In Section <ref>, we motivated our work by showing that metrics theoretically and empirically suffer from ceiling effects. The primary instrument we used to determine this was the probability of ties between rankings. In Table <ref>, we present the percentage of tied rankings from different systems for the same request. As predicted by our analysis in Section <ref>, lexiprecision has substantially fewer ties because this only happens when two rankings place relevant items in exactly the same positions. In Section <ref>, we showed that lexiprecision implicitly and exponentially increased its fidelity as the number of relevant items increased, while would quickly suffer from ties. In Figure <ref>, we show the number of tied rankings as a function of incomplete labels. This allows us to see trends with respect to . Across our three retrieval benchmark sets, we see the growth in number of ties for as increases; meanwhile, they shrink for lexiprecision. The drop in ties for recommender systems benchmarks suggests that, as described in Section <ref>, rankings contain very few relevant items and, as a result, removing labels will result in no relevant items present and increasingly tied rankings. While the number of ties indicates that might not be able to distinguish systems, for a large enough sample of requests, a metric might still be good enough to distinguish systems. A different approach to measuring the discriminative power of an evaluation method is to count the number of differences that are statistically significant <cit.>. When we compare the percentage of pairs registering a statistically significant difference (Table <ref>), both and outperform reciprocal rank, often by a very large margin. This indicates that the number of ties indeed hurts the ability of reciprocal rank to detect significant differences, while both variants of lexiprecision are much more sensitive. § DISCUSSION Our results demonstrate that our lexiprecision variants capture the properties of while substantially increasing the ability to distinguish systems under the same best-case evaluation assumptions. Practitioners and evaluators need to assess whether the assumptions behind metrics, including reciprocal rank, or lexiprecision or any other evaluation scheme are aligned with the use case. If a retrieval environment supports the assumptions behind metrics, including ties, then, by all means, they should be used to assess performance. However, in Section <ref>, we raised several reasons why uncertainty over recall requirements and psychological relevance suggest that metrics make quite strong assumptions not realized in most retrieval settings. We designed lexiprecision to operate as conservatively as possible, preserving any preference from metrics and only acting to break ties. Although metrics and lexiprecision agree perfectly when there is only one relevant item, this does not mean that all situations where we have a single judged relevant item should adopt a metric like reciprocal rank. For example, the MSMARCO dataset <cit.> includes requests and very sparse labels; the majority of requests have one judged relevant item. One might be tempted to use reciprocal rank but <cit.> demonstrate that this would obscure the multitude of unjudged relevant items (of which there are many). This hurts efficacy of best-case retrieval evaluation including reciprocal rank, as shown in Figures <ref> and <ref>. Recommendation tasks have similar issues with sparsity due in part to it being more difficult for a third party to assess the relevance of personalized content and to the difficulty in gathering explicit feedback. Labels derived from behavioral feedback in general suffer from similar sparsity <cit.>. In this respect, we echo the call from <cit.> to make labeling practices across all of these domains much more robust. Given the observation of <cit.> that better labeling can result in less informative evaluation, we need to also develop more sensitive evaluation schemes such as lexiprecision. Finally, this study has introduced a new preference-based evaluation method for metrics. As such, our focus has been on developing an understanding for comparing pairs of rankings and systems. We do not claim that lexiprecision itself is a metric and emphasize that we use it for comparing two rankings or systems. As such, although we address some concerns with reciprocal rank raised by <cit.>, we do not make claims about lexiprecision being an interval measure. That said, the total ordering shown in Figure <ref> suggests that there may be version of lexiprecision that can indeed be represented as an interval measure. § CONCLUSION Motivated by ceiling effects in evaluation with reciprocal rank, we have attempted to increase our understanding of the metric and designed a well-grounded mitigation to conducting best-case retrieval evaluation. We have shown that lexiprecision can effectively address the limitations of reciprocal rank in retrieval evaluation. Our results highlight the importance of considering the effects of tie-breaking in the evaluation process and provide a method for conducting more reliable best-case retrieval evaluation. Given the use of retrieval metrics—including reciprocal rank—outside of information retrieval contexts, we believe these contributions will be relevant to a researchers in the broader research community. ACM-Reference-Format
http://arxiv.org/abs/2306.06506v1
20230610185415
Calculating and Visualizing Counterfactual Feature Importance Values
[ "Bjorge Meulemeester", "Raphael Mazzine Barbosa De Oliveira", "David Martens" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.HC" ]
1,2]Bjorge Meulemeester[Authors contributed equally.] [email protected] 1]Raphael Mazzine Barbosa De Oliveira^* [email protected] 1]David Martens [email protected] [1]Department of Engineering Management, University of Antwerp, Prinsstraat 13, 2000 Antwerpen, BEL [2]Max-Planck-Institut für Neurobiologie des Verhaltens – Caesar, Ludwig-Erhard-Allee 2, 53175 Bonn, DE Despite the success of complex machine learning algorithms, mostly justified by an outstanding performance in prediction tasks, their inherent opaque nature still represents a challenge to their responsible application. Counterfactual explanations surged as one potential solution to explain individual decision results. However, two major drawbacks directly impact their usability: (1) the isonomic view of feature changes, in which it is not possible to observe how much each modified feature influences the prediction, and (2) the lack of graphical resources to visualize the counterfactual explanation. We introduce Counterfactual Feature (change) Importance (CFI) values as a solution: a way of assigning an importance value to each feature change in a given counterfactual explanation. To calculate these values, we propose two potential CFI methods. One is simple, fast, and has a greedy nature. The other, coined CounterShapley, provides a way to calculate Shapley values between the factual-counterfactual pair. Using these importance values, we additionally introduce three chart types to visualize the counterfactual explanations: (a) the Greedy chart, which shows a greedy sequential path for prediction score increase up to predicted class change, (b) the CounterShapley chart, depicting its respective score in a simple and one-dimensional chart, and finally (c) the Constellation chart, which shows all possible combinations of feature changes, and their impact on the model's prediction score. For each of our proposed CFI methods and visualization schemes, we show how they can provide more information on counterfactual explanations. Finally, an open-source implementation is offered, compatible with any counterfactual explanation generator algorithm. Counterfactual explanations, Counterfactual feature importance method, explainable AI, XAI, visualization § INTRODUCTION Artificial intelligence (AI) has become capable of capturing, learning, and predicting complex data structures. The performance of GPT-3 <cit.> models, for example, are so performant that a piece of software can mimic language to a level that is often indistinguishable from the human-written text. Deep learning models can learn a voice's intonation, tone, and pronunciation habits using just a couple of seconds of audio as training data <cit.>. Learning highly complex data structures, such as speech, generally requires an equally complex model to capture the nuances of the data accurately. While this increase in complexity opens up the possibility of improved performance, it comes with a price: its explainability is greatly reduced <cit.>. This lack of justification for decisions can decisively impact their application <cit.>. Current legislation, for example, requires explainability of models that impact high-stake decisions for people's lives <cit.>. Moreover, understanding why a model works can provide insights to guide improvements in itself <cit.> and possibly detect erratic patterns learned from data that can cause discrimination or unfair decisions <cit.>. Models that become inexplicable due to their inherent complex characteristics, such as a high number of parameters or complex feature associations, are referred to as “black box” models, since their inner workings are not clear enough for humans to comprehend. The field of eXplainable Artificial Intelligence (XAI) dedicates great efforts to shed light on how complex models reach decisions by adopting several methods <cit.>. The wide variety of XAI methods can generally be categorized by the scope of their explanation, model compatibility, and output format <cit.>. In terms of scope, local explanation methods aim at uncovering why a model predicts some outcome for a single data instance, while global explanations seek to provide an overall explanation of how the model works on the entire dataset. For model compatibility, we have methods that only work for a specific kind of model - since they often use particular mechanisms to generate explanations - and approaches that are independent of model type i.e. “model-agnostic". Finally, the output format of the explanations can be simple feature importance values or more sophisticated representations such as heatmaps for image explanations. Popular global explanation approaches include, but are not limited to: using intrinsic model characteristics, such as impurity decrease analysis for tree-based models <cit.>; rule extraction strategies, such as Trepan <cit.> and ALBA <cit.>, where a less complex but comprehensible surrogate model is built to mimic the black-box model, and its intrinsic explainable parts are used as a proxy to explain the black-box model; and depicting the marginal effect of features on the prediction score (partial dependence plots, PDP) <cit.>. As for local explanations, LIME <cit.> and SHAP <cit.> are popular feature importance (FI) methods, indicating the most important (subset of) features for a given instance's prediction score. LIME considers data in the vicinity of the instance to be explained and fits a locally linear relationship. The coefficients of this linear fit can be interpreted as the FI values. SHAP calculates approximate or exact Shapley values, compared to the average prediction and average feature value of the entire dataset. Finally, counterfactual explanations provides a set of changes in feature values that lead to a different model decision <cit.>. In terms of explanation output format, all listed explanation methods (except for counterfactual explanations) include ways to visualize the feature's importance graphically. LIME allows bar charts to visualize the weights of the locally linear decision boundary[The visualization resources for LIME can be seen at https://github.com/marcotcr/limehttps://github.com/marcotcr/lime]. SHAP even allows various visual methods to compare the feature Shapley values, such as waterfall plots, force plots, the popular beeswarm and violin plots, and even image heatmaps for image recognition models[SHAP's charts can be found in their original repository: https://github.com/slundberg/shaphttps://github.com/slundberg/shap]. And the random forest's mean decrease in impurity (MDI) allows a simple representation of the feature's importance by bar charts <cit.>. One naive visualization approach that is commonly used to introduce counterfactual explanations is to show the factual and counterfactual points graphically in their input space. However, this is only possible for 2 to 3 dimensions, a rare occurrence for real-life datasets. Moreover, just showing the value changes in Cartesian space does not add considerable value to the explanation, since features usually have vastly different and interdependent distributions. They cannot be directly compared. Therefore, the most practical and adopted solution to represent the output of a counterfactual explanation method is to provide a set of feature names and associated values to be changed. We can illustrate such an explanation with the example below, which considers a machine learning model that approves or rejects credit card applications: [enhanced,width=10cm,center upper,drop fuzzy shadow southwest, boxrule=0.4pt,sharp corners,colframe=yellow!80!black,colback=yellow!10] Explanation 1: Counterfactual explanation why a credit card application was rejected. 0.4pt If your age were 28 years old instead of 20, and your salary were $4,000 instead of $1,200 then you would be approved instead of rejected. The representation does not include any visualization, nor any way to assess which of the feature changes was more important than the other. This is in stark contrast to LIME or SHAP, who have easily interpretable charts that show the importance of each feature. Additionally, the unique objective of counterfactuals explanations is to look at feature changes, rather than just features, that lead to a class change. This does not allow for a simple translation of LIME or SHAP visualization. Hence, this leads to the following two questions: can we obtain importance values for each feature change in a counterfactual explanation, and can these be used to visualize the explanation? In this paper, we solve this problem as follows: we generate Counterfactual Feature change Importance (CFI) values, and propose two methods to do so[Note that we will interchangeably talk about feature changes or feature value changes, as well as their associated CFI values and importance scores. Each feature change has a value in feature space, and an associated CFI value as well.]. Figure <ref> already illustrates the goal and result of such CFI methods for the same counterfactual explanation we previously mentioned. The Greedy CFI method and chart (a) visualize the modification sequence that occurs when changing the feature with the greatest effect on the model prediction score, one at a time. Since each change adapts the feature, each feature change will have a different effect after each modification, thus making these values dependent on the order of the feature changes. The benefit of this approach is its efficiency and simplicity. In this example, we see that the prediction score for the data instance (the factual) is 0.2, which is below the 0.5 threshold, leading to a rejected decision from the model. Changing salary from 1,200 to 1,400 apparently leads to the highest increase in the predicted score, yet still leading to a rejection score. Additionally changing the age from 20 to 28 then leads to the counterfactual instance with a prediction score of 0.95 and approval decision. Note that changing the age only has an impact this high after the salary has been changed as well. The CounterShapley CFI method and chart generalizes this concept, by looking at all possible subsets of feature changes, and assessing the average marginal value of each feature change. This naturally leads to Shapley values <cit.>. Section <ref> will describe this in more detail, but the example already indicates that salary change has an importance value of 53.3%, while age is slightly less important, having an importance value of 46.7%. Finally, we will also introduce Constellation charts, that aim to visualize all these subsets of feature-value changes. Further demonstration of this concept will be provided in our empirical section. § MOTIVATION The research interest in counterfactual explanations is evident when we look at the increasing number of publications in this field <cit.>: ever since it has been introduced in 2011 <cit.>, over 300 counterfactual generation methods have been proposed <cit.>. This interest is justified by several desirable properties that counterfactual explanations have: they are simple, model-agnostic, sparse, and decision-oriented <cit.>. The simplicity is not only related to the output explanation itself, but also refers to the familiarity that humans have to use counterfactual arguments to provide explanations <cit.>. This process is well-established both in philosophy and psychology <cit.>, showing that such explanations come natural to end users. In theory, counterfactual explanations do not rely on any specific model since they consider the prediction model as a black box <cit.>, making them model-agnostic. The explanations are typically also sparse, in the sense that only a small subset of all features occurs in the explanation. Not only does this make the explanation smaller and hence more comprehensible <cit.> (compared to an explanation using all features), but it also considerably reduces the complexity of the problem of assigning importance values to each feature change in the explanation. This sparsity property thereby removes the need to limit the search or visualization to some smaller subset of features, as is done in LIME and SHAP where the number of features is user-defined. Finally, the decision-oriented nature is a key point of counterfactuals; they focus on modifications that lead to a prediction change <cit.>. This characteristic is fundamental for highly complex machine learning methods, since their nonlinearity can lead to nontrivial feature-value relationships that can potentially bring misleading assumptions. We can verify this last point in literature works <cit.> that compare counterfactual explanations with feature importance-based methods (LIME and SHAP) and show that the most important features do not always lead to a change in classification decision. Hence, counterfactual explanations can be advantageous whenever one is more interested in explaining a predicted decision, rather than a prediction. This decision-oriented nature of counterfactuals aligns well with the transparency requirements in legislations, such as the European GDPR <cit.>. Despite these numerous benefits of counterfactual explanations, one major drawback, as introduced in the previous section, is the way these explanation results are presented. This issue is not only limited to the lack of visual approaches to plot charts, but also to the inability to assign meaningful values to feature changes. One might wonder: why not simply apply an existing feature importance method, such as SHAP, to the counterfactual explanation? The reason is multifaceted, as we will describe in detail in the next sections: (1) we focus on feature-value changes (e.g. age changing from 20 to 28), not just on feature values (e.g. age equals 20); (2) the dimensionality that we have to deal with is generally much smaller, yielding unique visualisation opportunities; (3) the sum of an instance's SHAP values sum up to the difference between the average prediction score and the instance prediction, not to the difference between counterfactual and factual scores; and (4) the reference point and the features considered in SHAP are different from those considered in counterfactual explanations. So, we aim to define Counterfactual Feature change Importance (CFI) methods that enhance the informative value of counterfactual explanations by creating feature change importance values and correspondent visualization resources that are tailored to the cause. The significance of visualization is well known in the field <cit.>, and it is further reinforced in a recent study in which consumers preferred feature importance methods over counterfactual explanations <cit.>. The authors of the study conjecture that this preference is likely due to the lack of visualization for counterfactual explanations. Likewise, merely providing what feature values should be changed in order to generate a counterfactual explanation does not show any distinction in their relative importance. This additional information can provide useful information on how the model prediction score works, ultimately leading to a better comprehension of the model's decision. This would allow users and practitioners to critically evaluate whether the model behaves as expected. The graphical representation of counterfactuals traces back to its origins <cit.>, where the greedy nature of the proposed counterfactual generation algorithm (SEDC) allowed the plotting of sequential changes until the counterfactual class is achieved. In more recent studies, the visualization problem was tackled by multiple studies. For example, GAMUT <cit.> implements a user interface that shows multiple statistics and methods to assess the relationship between the instance's features and the model's prediction. Here, counterfactual visualisation is limited to a simple model output change given certain modifications, without any assignment of how important each of those changes is. Google's What-If Tool <cit.> focuses on allowing the user to explore the model's behavior by probing feature changes that are dynamically presented in tables and charts. But in terms of reporting, the counterfactual feature changes do not offer much information about their importance: they are depicted by simple tables that highlight the value changes, and two-dimensional charts with features values or different model's prediction scores. DECE <cit.> also presents a user interface that includes various statistical analyses of features, and shows multiple counterfactual explanations according to sequential modification steps. But it still does not show the importance of each feature change to prediction scoring. Lastly, VICE <cit.> represents counterfactual changes in charts together with the data distribution, and it allows customization of which features to include in the explanation. Yet it again does not disclose the influence of each feature change. Although the previous approaches have advantages by allowing users to investigate interactively how counterfactuals work, and they include multiple statistical analyses related to the target explanation and its features, they introduce an extra layer of complexity that users must learn to operate. Furthermore, they all represent counterfactual changes by only showing what features were modified in tables or spatial shifts with two-dimensional charts. This all emphasizes the current lack of a method to represent how each counterfactual explanation feature change impacts the model's prediction scoring. § COUNTERFACTUAL EXPLANATIONS The methods presented here do not rely on any specific type of counterfactual generator. A CFI method simply assumes some counterfactual explanation has been provided with corresponding feature changes, for which importance values need to be assigned, irrespective of the algorithm that was used to generate the explanation. Therefore, this section introduces a general concept of counterfactual explanation, which covers any generation algorithm. The formal definition is then used to introduce the CFI methods' theoretical reasoning. Counterfactual explanations <cit.> assume four basic components: a dataset (D); an instance to explain (x); a prediction model (ℳ); and, in the case of a classification task, a threshold (t). Consider an N × M dimension dataset, with a set of N instances D = [d^1, d^2, ..., d^N] and M features d^n = [d^n_1, d^n_2, ..., d^n_M] , n ∈ [1...N] where d^n is called a datapoint or an instance: a vector of size M. These instances are used by a prediction model ℳ to assign a score s. Scoring an instance with a model will be denoted by ℳ(d^n) = s If we consider a binary classification task where we set a decision threshold t, assigning instances to class 0 if s < t and to class 1 if s ≥ t, we can effectively categorize all instances into either class. We will only assume binary classification in this work for simplicity, but this can easily be generalized to multiclass classification with, e.g., a one-vs-one or one-vs-rest paradigm. Since CFI only assumes the existence of some counterfactual and its associated feature values, no matter the prediction task, this can also be generalized to regression tasks. Given two different datapoints a and b, we can define the difference between b and a as the set of indices where their feature values differ: δ_a,b = { i | a_i ≠ b_i } For notational ease, we define the scoring of the instance a with replaced feature values δ_a,b as being: ℳ(δ_a,b) = s Note that the definition of δ is symmetric with respect to the two associated instances. By convention, this notation will denote the scoring of a, adapted to have the values of b on the indices in δ, where a and b are the left and right instance of the ≠ sign in Equation <ref>. This becomes important as soon as we consider subsets of δ_a,b. In the case of factual and counterfactual instances (see below), this notation will always denote the factual instance, adapted to have feature values of the counterfactual on some indices. Not the other way around. Using this notation, we can formalize a counterfactual instance c, derived from a factual instance x∈D, having the following properties: * The factual and counterfactual's predicted classes are not the same: (ℳ(x) > t) ≠(ℳ(c) > t) * The difference between the factual and counterfactual can be defined by a set of indices where their features differ, i.e., the counterfactual explanation itself: δ_x,c = { i | x_i ≠ c_i } * The counterfactual evidence is irreducible: there's no subset of valid changes that changes the original class. ∄ δ'_x,c⊂δ_x,c: (ℳ( δ'_x,c) > t) = (ℳ( δ_x,c) > t) § ASSESSING FEATURE IMPORTANCE VALUES IN COUNTERFACTUAL EXPLANATIONS §.§ Greedy CFI method The greedy CFI method is a simple yet useful approach. It will iteratively select the feature change that makes the largest contribution towards the counterfactual class prediction, as formalized in Algorithm <ref>. The resulting scores have some useful properties: the sum of the feature change scores is equal to the difference between the counterfactual and factual prediction (ℳ(c)-ℳ(x)); it has a simple interpretation that can easily be presented in charts (further discussed in Section <ref>); and it is computationally efficient. Note that Algorithm <ref> is not the most efficient implementation of a greedy algorithm, in favor of readability. This method shares a large similarity with the SEDC <cit.> counterfactual generation approach since both have a similar objective. However, while SEDC is an algorithm designed to find a counterfactual explanation given a factual instance and model, the Greedy CFI method already assumes a counterfactual instance and assigns a score for each feature change. So Greedy CFI is a generic version of the initially proposed `Score Evolution' chart of Martens and Provost. §.§ CounterShapley CFI Method Given the non-linearity of the complex models, different sequences may give different importance values for the same feature change. Therefore, we introduce the CounterShapley CFI method next, which presents a way to calculate importance values independent of a specific modification path. As this relies on Shapley values, we will first provide a brief introduction to this game theoretical concept, followed by a short discussion of the previous use of Shapley in machine learning. §.§.§ Shapley Values Shapley values quantify the contribution of different variables to a final numeric result (φ). This method is derived from game theory, where each variable is considered a player p whose combinations can form coalitions V that contribute to a certain outcome according to a gain function ℋ(V) <cit.>. Therefore, we can calculate the contribution (φ_i) of a certain player p_i in a coalition formed by a set of |Z| players by considering its contribution in all possible coalitions: φ_i(ℋ) = ∑_V ⊆ Z ∖{i}|V|!(|Z|-|V|-1)!/|Z|!(ℋ(V ∪{i})-ℋ(V)) The total number of terms Equation <ref> will have to add together to calculate the Shapley value for a single player equals: N_coalitions, i = ∑^|Z|-1_i=0|Z|-1i= 2^|Z|-1-1 However, to have a complete calculation of all players' contributions, one must consider every possible subset of players, and not just the ones excluding i. This then yields the following number of iterations: N_coalitions = ∑^|Z|_i=0|Z|i = 2^|Z|-1 These Shapley values have several desirable properties <cit.>: * Symmtery: two players' contributions are the same, only if they contribute the same to every coalition that does not contain either player. * Efficiency: the sum of all players' contributions is equal to the total worth of the game or the grand coalition. * Dummy: a player with zero contribution does not affect the outcome contribution of a coalition, independent of the coalition in which it occurs. * Linearity: if one takes the sum of two gain functions, then the Shapley value of any player as scored by this sum of gain functions is the same as when one calculates the Shapley value for each gain function, and sums the results. Similarly, when scaling up the gain function by a factor of a, this will simply yield a times the Shapley value as calculated by the original gain function. §.§.§ Shapley Values in Machine Learning - SHAP The assignment of feature importance values for individual instances traces back to 2010 with Strumbelj and Kononenko work <cit.> and has gained popularity with LIME in 2016 <cit.>. Since then, multiple algorithms have iterated upon this explanation approach to improve its stability <cit.>, where the same instance could yield different importance weights. Among the strategies to circumvent this problem, Shapley value properties demonstrate their potential for solving the stability issue demonstrated by LIME. Given the model's non-linear behavior, features are allowed to contribute differently, depending on which coalition of features they are added to. Moreover, the dummy property assures that features that do not contribute to any coalition have no importance either. Also, the efficiency property allows assigning proportional importance values relative to a certain baseline. What exactly this baseline is, depends on what we consider as the model prediction of an “empty set of players". SHAP <cit.> adapts the Shapley values calculation to the machine learning context. As a main novelty, they developed a strategy of unifying an adapted Shapley value calculation and feature importance methods. The features are considered to be the players, and the scores obtained by the model are meant to describe, similarly to Shapley's gain function, how each feature contributes to the final prediction score. It has as a baseline the average prediction score. The calculation is performed through a model agnostic approaches (such as KernelSHAP), or model-specific approaches (such as TreeSHAP) to generate more reliable feature importance values. For KernelSHAP, Shapley values are estimated by performing a least squares regression using Lasso normalization. On the other hand, TreeSHAP <cit.> is data independent: it does not directly need external data since it uses the model's tree nodes information to calculate the features' marginal contributions. Despite the success of these approximations, they have limitations that substantially impact their application in explaining machine learning models. For TreeSHAP, the obvious implication is its narrow compatibility to tree-based models only. Although KernelSHAP does not have this constraint, its calculation can be computationally expensive for a large number of features and training points: it can take up to L·2^M calculations, where L is the sample number of (training) instances, and M is the number of total features. This creates a trade-off between stability (large sample) and computational efficiency (small sample). Moreover, even though SHAP yields more stable results over different runs when compared to LIME, its feature importance values can still change depending on the sampling since both the baseline score and the coalitions rely on the randomly selected data. Finally, the coalitions considered by SHAP can be unlikely or even include impossible data points. This is due to the fact that SHAP adapts feature values in order to compare instances to a baseline case. For instance, if we consider one-hot encoded (OHE) features, some SHAP coalitions can consider multiple active features from the same OHE (e.g., concurrently being married and single). This issue has broader consequences with KernelSHAP, since it uses multiple data instances that are not necessarily in the same local region as the instance of interest. For this reason, it can end up giving deceptive importance weights to features <cit.> for a local context. Finally, SHAP is unable to assign importance scores to counterfactual explanations. It is not compatible with comparing pairs of instances, nor feature changes, because it considers the average prediction as a baseline. Despite that, SHAP could still be useful given its variety in plotting resources, filling the gap that counterfactual explanations generally fail to deliver engaging visual resources. §.§.§ CounterShapley definition As discussed earlier, counterfactual explanations consist of a factual instance to be explained, and a counterfactual point with a different class. Although this type of explanation has benefits, discussed in Section <ref>, such as simplicity and sparsity, this representation alone does not precisely show each feature change's influence on this classification change. Consequently, we cannot tell which feature changes contribute more to the score change than others. This directly impacts the quality of the model's explanation since we can only describe the class change and feature modifications. The former is trivially inherent to a counterfactual explanation, while the latter does not directly reflect the impact of each individual feature modification on the model score. Augmenting the informative value of counterfactual explanations would leverage not just one but multiple explanation methods since counterfactual generation algorithms can have different objectives: SEDC <cit.>, for example, tries to improve sparsity and CADEX <cit.> minimizes euclidean distance in feature space. Other algorithms target both metrics at the same time, such as DiCE <cit.> and ALIBI <cit.>. As described in Section <ref>, we can assign importance values to the features changes in counterfactual explanations by using their impact over the prediction score. However, the Greedy CFI method considers a specific modification pattern that may not be useful when overall relative importance is desired, especially in highly non-linear models. To solve that issue, we can adopt a strategy similar to SHAP and use the Shapley value approach (introduced in Section <ref>) to calculate counterfactual explanation feature importance values. Let us consider the counterfactual evidence set δ_x,c and associated feature values x_i and c_i, such that the factual feature values have to be changed from x_i to c_i in order to go from a factual x to a counterfactual c. If we define the model ℳ as the scoring function, δ_x,c=δ and |δ| = K for notational ease, we can use Equation <ref> to define a Shapley value φ_i for the features-to-be-changed δ as: eq:shapleyφ_i(ℳ) = ∑_V ⊆δ∖{i}|V|! (K - |V| - 1)!/K!(ℳ(V ∪{i}) - ℳ(V )) where K is the number of features, and the sum extends over all sets V that are a subset of δ∖{i}. This weighted sum is simply the average marginal contribution of x_i to the class switch, as predicted by the model ℳ. In more detail, from right to left, this equation can be understood by: * Evaluating the model outcomes of some factual instance x, but adapted to have all features changes in V, as well as the modification of the feature of interest (x_i = c_i). * Evaluating the same model outcome for the same adapted instance, but without changing the feature of interest. * Considering the difference in these two model outcomes * Weighing this difference, depending on how often this particular coalition of feature changes may occur. * Considering all coalitions of all sizes that do not include a particular feature change, defined by index i. If we want to know how much some feature change contributed to the change in model prediction score, we can change these features from the factual to the counterfactual value one by one, each time evaluating the difference in the model outcome. Given the non-linearity of complex models, the order in which these features are changed may have an influence on the difference in the model outcome. For example, changing some feature value of an instance if two previous values have already been changed can yield a different outcome than changing the exact same feature if no previous changes have been made yet. For this reason, we must consider every possible order in which some feature can be changed, justifying the application of Shapley values. This is a major difference of CounterShapley compared to the Greedy approach, as the latter considers a fixed sequence of modifications. §.§.§ Difference between Shapley, SHAP and CounterShapley Similarly to SHAP, the CounterShapley approach inherits all the desirable properties of Shapley values. It is fundamental to highlight that, despite the similarity with SHAP, CounterShapley cannot be directly compared with it. They have divergent characteristics and objectives, as shown in Table <ref>, and explained in detail below. Baseline The baseline in Shapley is originally just the value of the empty coalition where no players are involved. In contrast, for KernelSHAP, the baseline consists of the average model's prediction over all available data points. For CounterShapley, the baseline is the model prediction of the factual instance that is to be explained. This is a completely different baseline, which is simpler to understand and trivial to calculate. Coalitions The information needed to calculate the importance values is also different for the three methods. Despite their similar goal of calculating the marginal contribution of different coalitions, they require different coalitions. For Shapley values, the coalitions are formed by all possible combinations of players. For KernelSHAP, the coalitions are formed by all possible feature value combinations of the sampled data points; the contribution values are then obtained by calculating the average prediction score of the instances with the same replaced features for the different points being considered. CounterShapley's coalitions are all the possible combinations of feature value differences between the factual and counterfactual instance, i.e. δ. Complexity With these considerations on what each method calculates, we can estimate their complexity. All have an exponential complexity, but we observe that CounterShapley does not necessarily consider all features (or players). When a counterfactual algorithm is optimized to minimize the number of features that are to be changed in order to create a counterfactual, this can greatly reduce the complexity of assessing the importance of these features. In Appendix <ref>, we illustrate how the calculation of CounterShapley works for one feature change, which also explicitly illustrates its complexity. Apart from the fact that KernelSHAP requires more features than CounterShapley, it also has a higher complexity because it requires more data instances. CounterShapley only needs 2, while KernelSHAP needs a decent number before it can assess the feature importances with some degree of stability. Efficiency Additionally, their efficiency property is also notably different. Shapley values have their sum equal to the value of the scoring function when all players are being considered. When we sum the KernelSHAP values we have the difference between the average prediction of the sample and the explained instance prediction score. For CounterShapley, the sum of their values is equal to the difference between the counterfactual and factual prediction scores. Goal and applicability These different baselines, coalitions, calculations, and sums highlight their distinct interpretations and applicability. CounterShapley considerably differs from SHAP since it aims to explain the feature's effect on the factual to the counterfactual change, while SHAP compares the feature of interest with the average prediction. SHAP is more suitable if one wants to understand how every single feature influences the prediction scoring, and CounterShapley is advisable for assigning importance weights to counterfactual explanations. Additionally, CounterShapley does not create the explanation, but rather augments the informative value of pre-existing counterfactual explanations. Therefore, if the speed of generating an explanation is a relevant matter, one must also include the counterfactual generation time, which can vary greatly depending on the counterfactual generator being applied <cit.>. §.§.§ Implementation of CounterShapley We continue by describing how CounterShapley values can be calculated, highlighting the methods' theoretical differences and advantages, and show how it lends itself very nicely to visualisation purposes. Algorithm <ref> presents a method of generating a map between any-sized coalitions of feature changes and their corresponding model prediction outcomes. This map can be used in conjunction with Algorithm <ref> to calculate the Shapley value of the change in some feature i. Since Algorithm <ref> has already concerned itself with mapping all possible coalitions to their corresponding model prediction outcome, Algorithm <ref> does not need to perform any further calls to the model's prediction method and can simply fetch them from the result of Algorithm <ref>. This is an important aspect of not just this specific implementation, but any implementation, as the algorithmic complexity is defined by the number of calls to this model prediction method. The extension to calculate all Shapley values for all features δ is now trivial, and can be done by using Algorithm <ref> for every change in feature i in δ. getCombinationsgetCombinations Kwinin § VISUALIZING COUNTERFACTUAL EXPLANATIONS In this section, we present three different chart types that increase the informative value of counterfactual explanations. Two of them, Greedy and CounterShapley charts, use the CFI methods described in the previous section. The third chart type, the Constellation chart, uses the same calculations as CounterShapley since it analyzes all possible coalitions with the counterfactual feature modifications. §.§ Greedy Chart The greedy chart aims to illustrate the sequential nature of the changes made by the greedy CFI method. We argue that this method brings value to graphical representations for two main reasons. First, as said previously, the calculation of scores has lower computational costs if compared to CounterShapley, from an exponential O(2^K) to a lower complexity O(K^2) behavior. Second, this approach gives us a fixed path that can be depicted in charts, governed by a greedy selection of feature changes. The latter can be useful if the receiver of the counterfactual explanation wants to increase, as much as possible in each modification step, the prediction score. Moreover, this greedy CFI method gives the same importance scores as CounterShapley if the model has a linear association between features. Thus, for lower-complexity models, it can give a sufficiently good approximation of the CFI values obtained with the CounterShapley method. To exemplify its use, let us consider the following counterfactual explanation: [enhanced,width=10cm,center upper,drop fuzzy shadow southwest, boxrule=0.4pt,sharp corners,colframe=yellow!80!black,colback=yellow!10] Explanation 2: Explanation for a loan application decision 0.4pt If your age were 30 instead of 20, salary were $2,200 instead of $1,500, sex were F instead of M, and state were CA instead of NY, the application would be classified as approved instead of rejected. Figure <ref> depicts how Explanation 2 would be displayed using the Greedy chart. The first point, at the bottom, represents the prediction score of the factual point, which has the features Sex, City, Salary, and Age equal to “M”, “NY”, 1,500, and 20 respectively. The next point (with a prediction score of about 0.35) shows the change in prediction when the feature Sex is changed from “M” to “F”. This modification (if compared to the three others) is the one that leads the highest increase towards the counterfactual class. The next points follow the same pattern, greedily, showing the best feature change to modify the classification result (from rejected to approved). The chart then has a sequence of changes starting from the factual point (bottom left) and gradually selects (from the bottom to the top) the changes that most positively contribute to the counterfactual class, which is depicted on the right side of the red threshold line. This simple chart allows for the visualization of sequential changes and their impact on the prediction score. It's fundamental to emphasize that this chart represents scores influences in a specific setting: a greedy search. This may not reflect the CounterShapley values in non-linear models. Note for example how changing the City from “NY" to “CA" has a larger impact on the model score than changing Sex from “M" to “F", but is not the first change in line. This is because this change would not be as influential if the “Sex" wasn't changed first. Likewise for changing “Salary": without changing “Sex" and “City" first, the impact of this feature change would not be as big as it is. Such is the nonlinear nature of the model. Therefore, practitioners must be aware of whether this chart meets their expectations in explaining a decision, and possibly avoid it if they want to provide a precise estimation of feature change impact over the prediction score of complex models. §.§ CounterShapley Chart The importance scores of counterfactual feature changes allow us to create a new visualization, here referred to as a CounterShapley chart. Our main objective with this chart is to represent counterfactual explanations in a simple but informative way which allows easy user understanding of what features lead to a class change and how important each of those features is. Figure <ref> shows the CounterShapley chart for the counterfactual Explanation 2, summarizing diverse information in a single image. The bars represent the CounterShapley values for each feature, and inside them we indicate their contribution in percentage to the total CounterShapley value. Below the bars, we have the feature names and their respective value changes. In addition, the chart shows the original instance, counterfactual instance, and the model prediction score after performing all feature changes. Although all this information could be described in a textual counterfactual, as mentioned earlier, the graphical representation is beneficial <cit.> to understanding and interpretation. It's interesting to note how the information conveyed here differs from Figure <ref>. The Greedy chart tells us that “Sex" is the first best change one can make, despite the fact it does not increase the score by a lot initially. The CounterShapley chart tells us that, on average “Sex" has the biggest effect on the prediction score when averaged over every possible combination of features. The Greedy chart shows that “Salary" has the biggest impact, but only if “Sex" and “City" were changed beforehand. The CounterShapley chart tells us that, when considering every possible combination of feature changes, “Salary" only accounts for 23.2% of the overall change in prediction score. Thus, the large impact of “Salary" in the Greedy chart is only because the previous feature changes allowed it to make as big of an impact. Indeed, the CounterShapley value of “Sex" is large (40.6%), indicating that “Salary" impacts the prediction score a lot on the Greedy chart, but only because “Sex" is included in the set of feature changes. “Sex" does not have a large contribution on its own, but it allows other features to make a big impact. §.§ Constellation Chart Counterfactual explanations tend to have sparse feature changes, as we discussed in previous sections. The Constellation chart makes good use of this characteristic to display how each feature change combination influences the counterfactual prediction score. This chart has the same complexity as calculating the CounterShapley values, since it iterates over all feature combinations (2^K). Contrarily to both previous charts, it also has a more complex representation. However, this type of chart can be useful to practitioners to have a complete view of how each counterfactual feature combination affects prediction scoring, allowing an extensive analysis of feature changes' influences. Moreover, it can also serve as a debugging tool for counterfactual explanations, since it is possible to verify if any subset of changes crosses the decision threshold, essentially finding a subset of the counterfactual that's also a valid counterfactual (see property 3 of the definition of counterfactuals in Section <ref>). It has a similar application in describing importance values to counterfactual changes as the CounterShapley chart, but instead of summarizing them in single values, it detangles all possible feature's change association impacts over the model's prediction score. In Figure <ref> we take the same Explanation 2 counterfactual to generate a Constellation chart. The Constellation chart representation in Figure <ref> shows how each single feature change (“Age”, “Salary”, “City”, and “Sex”) and their combinations affect the prediction score. On the left, we have the four counterfactual changes listed. The first pink dashed line (on the left) shows the factual score according to the x-axis. Next to the right, the four larger pink dots represent a single feature change. For example, the bottom large pink dot indicates the prediction score (according to the x-axis) when the feature “Sex" is changed from “M” to “F”. Similarly, the other large pink dots correspond to the feature changes at the same height. The smaller pink dots represent the combinations of different feature changes. For instance, the first small pink dot (from the bottom to the top) indicates the prediction score when both features “Sex" and “Salary" are modified (this can also be deducted from the lines connecting to the larger dots). Notice that the height of the combination dots is equal to the average height of the features combined, but this height has no explicit meaning other than visualisation purposes. The rightmost point represents the prediction score when all feature changes are applied, as indicated by the lines connecting to all large dots on the left. While this view looks more complex than the other previous methods, it also targets a more specialized public that is interested in understanding the underlying meaning of CounterShapley values and features' effects. For example, this chart already shows how the exclusion of “Age" still yields an instance that's very close to the decision threshold. Note for example how: “Sex" is indeed the first best feature one can change (lower left big pink dot), conform with the Greedy chart; and, on average, most small pink dots that are linked with “Sex" have indeed higher prediction scores than those not linked with “Sex", explaining its large CounterShapley value as seen in the CounterShapley chart. Finally, all small pink dots that are connected to both “City" and “Sex" yield the highest values. This is also congruent with the fact that these have the biggest CounterShapley values, as seen in the CounterShapley chart. Moreover, this chart can help in other technical tasks, such as model improvement, since the multiple feature associations can reveal unexpected effects that may require further investigation. This will be shown in the next section. § EMPIRICAL EXPERIMENTS In the previous sections, we have introduced two main algorithms to assess the feature importance values of counterfactual explanations: CounterShapley and a Greedy approach. These counterfactual feature importance values now make it possible to directly compare counterfactual explanations with other feature importance methods, such as LIME <cit.> and SHAP <cit.>; but also to white-box models, such as the weights of a logistic regression. In the following sections, we generate various counterfactual explanations using NICE <cit.>, but the results work equally well for other counterfactual methods such as DiCE <cit.>, ALIBIC <cit.>, or SEDC <cit.>. The CFI values are compared to those of SHAP and LIME for various test cases. We show different experiments with multiple data, models, and methods that characterize how our CFI methods work, highlighting the differences with LIME and SHAP. In the first two sections, we use artificial data to demonstrate the characteristics of our methods, while the other three subsequent sections use real-world data, showing its applicability. We also explain cases in which the plots proposed here embed critical value for enhanced explanations and “debugging” counterfactual explanations. All experiments presented here are fully reproducible and can be found in the “Experiments” branch of our GitHub repository (see below). Note that these comparisons are not quality assessments of these methods, as there is no consensus on what a “correct” explanation should be <cit.>. However, empirical testing is useful in providing a better hint at what each explanation approach tries to achieve. Our work uses Python to make all calculations regarding the counterfactual scores, and the Matplotlib package <cit.> to create the graphical elements. Our implementation is open-source and independent from a specific algorithm, which makes it possible to integrate it with any counterfactual generation algorithm. The repository can be found on GitHub https://github.com/ADMAntwerp/CounterPlotshttps://github.com/ADMAntwerp/CounterPlots. All empirical analyses use Python and open-source packages such as Scikit-Learn <cit.>, Pandas <cit.>, and Seaborn <cit.>. §.§ Simple Model Experiments We use three simple, explainable (white-box) machine learning models (logistic regression, decision tree, and K-nearest neighbors) together with artificially generated data, and explain the predictions with three importance attribution methods: LIME, SHAP, and CounterShapley. Since we need a counterfactual generator for the CounterShapley calculation, we opt for NICE <cit.> to create the counterfactuals. We can then infer if the results given by explanation methods follow an expected pattern or not, by using the intrinsic explainability of these simpler models: feature weights of the logistic regression, decision nodes for the decision tree, and the overall data distribution for KNN. For the logistic regression model, we explain 1,000 instances from a 5-feature synthetically generated dataset with two classes and a fixed random state (42) created with Scikit-Learn. Table <ref> shows the logistic regression weights for each feature. Since individual explanations may not provide enough information to compare each method, we analyze the CFIs by aggregating the explanations that contain non-zero values (Table <ref>) and by plotting them in histogram charts (Figure <ref>). The first main observation we can take from Figure <ref> is the fact that the CounterShapley CFI method generally yields only positive importance scores, while the other methods include negative scores. Negative scores simply represent that an increase in these features leads to a probability decrease that an instance belongs to class 1. It highlights the substantially divergent interpretation from the CounterShapley values, which measure the impact of the feature changes on the class probability, knowing that the sum of all changes should be positive. A negative feature value here would mean that, not only is there at least one coalition including this feature that decreases the probability of a class switch, but these also outweigh the overall effect of the remaining coalitions that increase the class probability. Either there are more coalitions that have a negative effect on the class probability; or the negative effects are, on average, stronger than the positive effects. As long as the coalition including all feature changes (i.e. the counterfactual itself) yields a class flip, such feature changes can occur in counterfactual explanations. Cases like these are rare, but not impossible. Second, although the average importance scores for SHAP and LIME are similar, their distribution is considerably divergent. SHAP appears to favor small values, as seen by the peak around 0, while LIME has an undefined distribution, generally assigning high-importance values to features, Finally, let's compare the logistic regression weights (in Table <ref>) to the share of explanations that each feature is included (with an importance score different than zero). We see that the feature with the highest weight (3rd) is the most frequent in CounterShapley values; on the other hand, the other methods always assign scores for all features. While, again, this does not represent an advantage, it shows that CounterShapley is sparser given that it is based on counterfactual explanations. For the decision tree, we made a very simple model with a depth of 2 and a total of 3 decision nodes using the same artificially generated data that we used for the logistic regression experiments. Figure <ref> shows a representation of the model, where we can observe that only 3 (out of the 5 features) are present in the model: the first, second, and fifth. Figure <ref> then shows the distribution of the importance scores obtained by the feature scoring methods. Again, we see a similar trend regarding the negative values assigned to importance scores for SHAP and LIME, which, similarly, means that increasing their respective features decreases the probability of class 1. Moreover, including the analysis of Table <ref>, we see that the 2nd and 3rd features have 0 importance for CounterShapley and SHAP, which is understandable (and expected) since these two features are not present in the decision tree nodes. Although LIME assigns importance scores to the 2nd and 3rd features, they are relatively low values compared to the other scores. Regarding the absolute importance values, it is clear on CounterShapley gives high importance to the 5th feature since it is the first splitting node and for the 1st feature since it splits more instances than the 2nd feature. Finally, the last explainable model was a KNN classifier fitted in a 2-features dataset depicted in Figure <ref>. This dataset is also artificially created with Scikit-Learn, with two informative features and classes, a fixed random state (equal to 42) to guarantee reproducibility, and all other parameters as default. The results of the average importance scores are shown in Figure <ref>. All methods assign a higher (absolute) importance to the 2nd (y-axis) feature. This is understandable since, by analyzing the data distribution in Figure <ref>, we can observe that the y-axis makes better segregation between classes than the x-axis. Similarly to the previous cases, in Table <ref>, we see again the sparsity of the CounterShapley explanations and also that the most important feature (the 2nd) is more frequent in the explanations. Considering all three experiments with the simple models described above, we see that CounterShapley assigns importance scores that are substantially different from SHAP and LIME. One major difference observed in all experiments is the fact that CounterShapley values focus on the score changes needed to flip the classification - given the already mentioned decision-driven nature. This characteristic can make the score interpretation easier since both LIME and SHAP assign importance scores to every class (in the case of binary classification, both 0 and 1 classes). However, we highlight that, depending on the research question, multiple importance scores for classes may be a desirable outcome, with the practitioner being on a task to consider each case in particular. The experiments also present, in general, the intrinsic model's justifications (weights, decision nodes, and data distribution) correspond to the average model's explanations. Consequently, all methods are aligned with the main features responsible for the classification. Nevertheless, as thoughtfully discussed in the previous sections, the numerical scoring values and intrinsic meaning of CounterShapley values are not the same as SHAP and LIME. §.§ Simple Data Experiments For the next experiments, we evaluate how CounterShapley, SHAP, and LIME assign importance scores to feature changes in a complex black box model based on random forest classifiers. However, if we use equally complex data, we wouldn't be able to say if the numerical results correspond to what is expected from the model and data. Therefore, we use simple datasets whose patterns can be clearly grasped by analyzing their class distributions. These data have multiple patterns and follow a similar concept as the experiments performed by Robnik-Sikonja et al. <cit.>. Our first experiment evaluates the impact of adding noisy training data by making a very simple dataset consisting of 3 informative features and 3 random features with no direct connection to the class labels. This dataset was artificially generated using the Scikit-Learn package, for a classification task with two labels and a fixed random state (42). Figure <ref> shows the distribution of importance score values for informative features (1st, 2nd, and 3rd) and random features (4th, 5th, and 6th). Additionally, Table <ref> shows the percentage of features that had non-zero importance scores. The importance scores show that all three methods assign more relative importance to the informative features if compared to the random ones. For this specific dataset, all three methods also show a similar pattern having the 1st feature as the most important, followed by the 2nd and 3rd. One may question why the random features are not equal to zero, especially because in a previous experiment, in which the model only used a subset of features, both CounterShapley and SHAP assigned exactly 0 importance scores to unused features. The reason is that, differently from the latter case, the random features in this experiment influence the prediction scoring. We also see the characteristics mentioned in the previous experiments, like CounterShapley having positive scores, sparsity with the most important features being more frequent, and average scores different from SHAP and LIME, still happening in this case. In the following simple data experiments, we use only 2 features, and classes synthetically generated with Scikit-Learn to follow a certain geometric pattern that can be detected by simple visual analysis. The first experiment divides the data points into four quadrants. The separation between the upper and lower quadrants has different thresholds, as shown in Figure <ref>. This figure also shows the x-axis importance score using CounterShapley, SHAP, and LIME methods. Since we use a binary class and the random forest classifier assigns classes with a high degree of confidence, the y-axis importance charts are very similar but with inverse scores, with clear regions being darker and vice-versa. The charts in Figure <ref> show that SHAP and LIME have very similar behavior scoring importance; for the y threshold equal to 0.5, they score about the same score for all points in any region. This can be justified since we have equally distributed classes over the 4 quadrants: therefore, overall, the x and y axis contribute equally to the output prediction. When the threshold is changed, the upper and lower quadrants have different scores for the x feature. This is also a reasonable result because the region with increased height has a lower importance for y since a variation in that direction has a reduced effect on probability while the x-axis is unchanged. However, the pattern observed for CounterShapley is substantially different if compared to the other two methods. As previously discussed, counterfactual explanations show the minimal feature change to modify the model's classification. Therefore, with a score based on this concept, the explanations are divided into regions where the x feature is closer to the decision threshold (hereafter having a higher importance score) and regions in which the y feature is closer to the decision threshold (consequently, x has a lower importance score). So for the same data and model, CounterShapley gives different scores if compared to SHAP and LIME. All methods have reasonable justifications for their scoring strategy. However, we emphasize that CounterShapley seems more reasonable if we want to score counterfactual explanations since it uses the same concepts. For the next experiment, the dataset has a pattern with two behaviors. For features up to x=0.5 the classification has a linear (x=y) threshold, while for values higher than 0.5, it has a constant behavior (y=0.5). Figure <ref> illustrates this dataset class distribution and the importance scores for the x (top) and y (bottom) features. There we can see the complementary effect of importance scores, given that we have a binary classification and a predictor that assigns scores with a high degree of confidence. The CounterShapley CFI method clearly separates the two regions in the chart, for the linear behavior region (x<0.5) does not show a clear preference for either feature while, for the constant region, the y feature has evident higher importance values. This behavior is justified because, in the linear region, the x or y feature distance to the classification threshold is about the same for most instances, while in the constant region, y is usually the closest path to the model's decision threshold. Unlike the previous cases, SHAP and LIME had substantially different patterns; however, both generally give more importance to the y feature. Nevertheless, their importance has a clearly more subtle relationship with the dataset class distribution if compared to CounterShapley importance scores. Figure <ref> shows the importance scores for a circular pattern in the dataset. In this case, the 0 class points are concentrated in the center, while the 1 class is around the circular region formed by the 0 class instances. The analysis of importance values shows that LIME assigns about the same importance for the x and y features in all instances. This may happen given the sampling strategy, which randomly selects nearby points which end up normalizing the value given the circular geometry. Interestingly, SHAP and CounterShapley have some similarities where the middle top and bottom regions assign a higher relevance to the y feature while the lateral regions assign more importance to the x feature. However, the internal region of SHAP does not look to have this pattern. The justification for such a pattern in CounterShapley may be explained by the sparsity-optimized strategy that the counterfactual generator uses. Therefore, in lateral regions, the most optimal feature change to reach the decision threshold is modifying the x feature, while it is y otherwise. All experiments with simple data above show again that CounterShapley values give considerably different importance values for feature changes, following a scoring pattern more consistent with the idea behind counterfactual explanations, minimal sparse changes to change prediction classification. Moreover, we can notice the capacity that CounterShapley values have to react differently to multiple patterns in the same dataset, where these regions are less evident if we use SHAP and LIME. This characteristic is intrinsically linked to the fact that counterfactual explanations are based on only two points (factual and counterfactual). In contrast, SHAP and LIME have a normalizing effect since they consider multiple points over the dataset. This difference does not necessarily represents an advantage of CounterShapley values because all methods present reasonable justifications for their scores. Therefore, practitioners must consider all these characteristics and their objectives before selecting an importance scoring method. §.§ Replication Experiments One of the main advantages of the methods presented in this paper is their broad compatibility with any counterfactual generation algorithm. The compatibility is not only in terms of theoretical requirements since they only need the factual, counterfactual, and model prediction functions, but also given our algorithmic framework that can be easily implemented using Python and popular data science packages. In this section, we show how our methods can be applied to three different counterfactual generation algorithms: NICE <cit.>, DiCE <cit.>, and ALIBIC <cit.>. We provide a snippet code of how they are implemented, highlighting the simplicity, and we perform some experiments with data and models which they use in their respective papers. All these experiments are publicly available on GitHub in the Experiments branch. NICE <cit.> is a fast and reliable counterfactual generation algorithm that finds optimal explanations. It uses dataset labeled instances to find a first optimal instance with a different class and then perform multiple optimization iterations to improve the counterfactual with respect to a specific optimization objective (which can be sparsity, proximity, or plausibility). Appendix Algorithm <ref> shows how easily the NICE official package[https://github.com/DBrughmans/NICEhttps://github.com/DBrughmans/NICE] can be adapted to generate all analyses proposed by the methods presented in this article. Our method allows the creation of an object (showed in the Appendix Algorithm <ref>) that works exactly like NICE, since it is an extended class, but we change the output of the “explain” function to not being a simple list of the counterfactual points. Rather, the function's output returns “CounterPlot” objects, which are able to give the CounterShapley values for each feature and plot any of the three charts presented here (Greedy, CounterShapley, and Constellation). Figure <ref> shows an example of the result obtained from an instance of the Wisconsin Breast Cancer dataset, as used in the NICE paper. We also use the DiCE's <cit.> counterfactual generator official Python package[https://github.com/interpretml/DiCEhttps://github.com/interpretml/DiCE]. This counterfactual generator can work independently of a specific model (model agnostic) and allows to generate a set of diverse counterfactual explanations. Appendix Algorithm <ref> shows how a function can take the objects generated by the DiCE counterfactual generation and transform them into the object that allows all analysis presented in this paper. Note that, in DiCE's case, the algorithmic framework is more complex since the package is responsible for the dataset encoding, and the resulting counterfactual is inside a custom object. Nevertheless, our algorithm can still be adapted and processes all the diverse counterfactual instances produced by DiCE. As a sample example, we use the same dataset DiCE uses in their package official documentation, the Adult dataset, and Figure <ref> shows the results. In this case, we evaluate the features needed to change the prediction from low-income (class 0) to high-income (class 1). ALIBIC <cit.> is also adapted using our framework using Appendix Algorithm <ref>. This counterfactual generator performs a gradient descent using a complex loss function that includes terms for distance, sparsity, and similarity to the dataset manifold. Similarly to what we have done with DiCE, we use the package official documentation[https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_housing.htmlhttps://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_housing.html] to get a dataset, generate a model and counterfactual explanation. In this case, we use the California Housing dataset and an artificial neural network as the machine learning model. ALIBIC generates an object that includes multiple attributes, such as the counterfactual point, classes, and features' ranges. ALIBIC's adaptation to generate our analysis is considerably simpler than that of DiCE, as DiCE uses multiple built-in methods from their framework, such as the dataset treatment and model adapter. For ALIBIC (and NICE), we use simpler strategies that directly use the factual point and the prediction model. Figure <ref> shows an instance explanation example with the three chart types. With the examples shown above, we showcase that not only the theoretical concepts described in this article are independent of a specific counterfactual generator, but also our algorithmic framework works with any type of generator. Therefore, this represents a substantial contribution to the counterfactual explanation area since it can readily embed a higher explanation value to novel and past algorithms. §.§ Missclassification Experiments One of the most valuable uses of explanation methods, especially those that focus on local/instance-level explanations, is the investigation of why a model assigned a wrong classification to a given instance. The explanation can reveal patterns or features that can be further investigated, leading to correction or improvements. Counterfactual explanations, as already mentioned, have multiple characteristics (simple, sparse, decision-oriented, etc.) that make them a valuable method for investigating wrong classification instances. However, the isonomic view of feature changes' importance to the model's prediction may not give enough information for a complete understanding of the underlying causes of the misclassification. Therefore, this experiment shows an example of how the analysis presented in this article can help to unveil more information about the model scoring and decision. To illustrate this application, we use the Bankruptcy Prediction dataset <cit.>, which includes bankruptcy data from the Taiwan Economic Journal from 1999 to 2009. As a model, we use the Scikit-Learn random forest algorithm with default parameters and a fixed random seed (42). Figure <ref> shows the feature change importance for an instance that is incorrectly classified as a healthy company but got bankrupt. The counterfactual explanation and instances can show interesting insights about this misclassification, such as: a lower total asset growth rate would positively contribute towards a correct classification. This makes sense since slower growth generally is seen as a pessimistic indicator of a company's prospects. As this was the most important feature change, contributing to more than 50% of CounterShapley value, measures such as the acquisition of more data related to asset growth and feature engineering could potentially help the model achieve better performance. Moreover, the two other features also give us an interesting perspective on the misclassified instance. The second most important feature change (according to CounterShapley values) is a higher gross profit to sales share, and the third, with a similar importance value, is a higher net value per share. At first sight, these changes look unexpected since both rises are associated with positive outcomes. However, it's important to notice these two changes alone are not sufficient to flip the classification and must be associated with the first, most important feature change. This analysis enhances the informative value of counterfactual explanations, allowing us to investigate the pattern formed by these feature changes: maybe the association of higher shares and profit with lower asset growth indicates a purely speculative/momentaneous gain which is not sustainable in the long run. These unexpected associations can also mean that the model is not behaving as wanted, making spurious associations caused by overfitting or another problem in the dataset. In summary, the counterfactual explanations associated with the CFI methods and charts give us extra knowledge of how changes affect the prediction score and classification result, which can lead to better model understanding and improvements. §.§ Special Cases Finally, in this last section of the experiments, we show two special cases in which the analysis presented in this article could contribute to a better understanding of the model and counterfactual result evaluation. First, we talk about counterfactual feature changes which have negative CounterShapley values, revealing non-linear behaviors between features. Then, we present a way to identify counterfactual explanations that have more feature modifications than needed. §.§.§ Negative Contributions Complex models can lead to highly non-linear behaviors in which features that usually have a certain behavior (for example, increase the probability of a class) completely change when a specific set of features are present. Simple counterfactual explanations cannot detect such behavior since they only disclose the feature modifications needed to make a change in the classification prediction. However, as shown in Figure <ref>, our analysis can detect features that, although they have a negative contribution over the prediction scoring, are necessary for a class change. Our analysis uses the UCI's Wine Quality dataset[Dataset page: https://archive.ics.uci.edu/ml/datasets/wine+qualityhttps://archive.ics.uci.edu/ml/datasets/wine+quality] and a random forest classifier built with Scikit-Learn and default parameters. In Figure <ref>, we show one instance example that has a feature (f3) with a negative CounterShapley value as seen in its respective CounterShapley chart. This can be further understood with the Constellation chart, which shows that f3 decreases the inverse class probability if changed alone (bottom clear blue dot) or in association with other features. However, as can also be seen in the Greedy and Constellation chart, this feature is fundamental for the class change since, if absent, it is not sufficient to modify the model's predicted class. §.§.§ Counterfactual subset The definition of a counterfactual explanation includes the requirement of having an irreducible set of valid modifications (as described in Equation <ref>). Although the interpretation of valid modifications is debatable <cit.>, being able to identify subsets of changes that form a counterfactual explanation may be useful for a better understanding of the prediction model and for the assessment of the counterfactual explanation. The usual simple counterfactual results are not capable of showing this since they only show the feature changes needed for a class modification. Therefore, the analysis presented in this work may be a valuable tool for evaluating the counterfactual result. Figure <ref> shows an example of how a counterfactual with class changing subsets would look in Greedy, CounterShapley, and Constellation charts. In our example figure, the Greedy and CounterShapley charts indicate that the counterfactual explanation has more features than needed to change the classification change. The Constellation Chart may be the most appropriate analysis since it clearly shows which subsets of change lead to a counterfactual class. For this example, we can see the change in occupation feature is not needed to modify the prediction class. § CONCLUSION Machine learning predictions can be effectively explained with counterfactual explanations. Its simple and decision-driven nature makes it easier for even laypersons to get a sufficient understanding of why automated decisions are made. However, up to now, the representation of this type of explanation was mainly based on textual communication which, although has its own benefits, lacks the benefits of graphical elements. Moreover, the isonomic view of the counterfactual modified features may not give a complete picture of why a model has certain decisions since it does not explain the relationship between features and prediction scoring. This paper proposes two CFI methods as a solution to those challenges, Greedy and CounterShapley, as being approaches that can embed relative importance to counterfactual features (and their corresponding value modifications) and three chart types that highlight different aspects of the counterfactual explanation. Hence, the Greedy CFI method illustrates how a specific path, following the steepest increases of scores, links to the counterfactual changes, and CounterShapley fills the knowledge gap of how the prediction scores are affected by each changed feature in overall. Those CFI methods can provide better insights into how the model works, consequently augmenting the informative value of the explanations. Regarding the plotting strategies, CounterShapley charts use the feature change importance values to create simple yet descriptive graphs. For the cases in which computational burden is an issue, we propose the Greedy Counterfactual chart to quickly provide a visual representation that describes the greediest strategy to achieve the counterfactual prediction score. The third chart type, the Constellation chart, takes advantage of the sparse nature of counterfactual explanations to show the possible feature associations and their impact on the prediction scoring. Despite the usefulness of these methods in augmenting the informative value of counterfactual explanations, they are not adequate for situations where the characteristics of counterfactuals are not desirable, for example, if an importance score is required for every model's feature. Finally, all methods and charts here described are implemented in an open-source package that can be easily integrated into any counterfactual generator, bringing immediate impact considering the multiple counterfactual generation algorithms available. As for future perspective, we envision the application of counterfactual-based scores and their visual representations in real-world cases, assessing the user reception in terms of understandability and preference over other explainability approaches.
http://arxiv.org/abs/2306.02758v1
20230605102603
Exploring Critical Overdensity Thresholds in Inflationary Models of Primordial Black Holes Formation
[ "Ioanna D. Stamou" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph", "hep-th" ]
Exploring Critical Overdensity Thresholds in Inflationary Models of Primordial Black Holes Formation Ioanna D. Stamou ^1 ^1Service de Physique Théorique, C.P. 225, Université Libre de Bruxelles, Boulevard du Triomphe, B-1050 Brussels, Belgium In this paper we study the production of Primordial Black Holes (PBHs) from inflation in order to explain the Dark Mater (DM) in the Universe. The evaluation of the fractional PBHs abundance to DM is sensitive to the value of the threshold δ_c and the exact value of δ_c is sensitive to the specific shape of the cosmological fluctuations. Different mechanisms producing PBHs lead to different thresholds and hence to different fractional abundances of PBHs. In this study, we examine various classes of inflationary models proposed in the existing literature to elucidate the formation of PBHs and we evaluate numerically the associated threshold values. Having evaluated the thresholds we compute the abundances of PBHs to DM using the Press Schecter approach and the Peak Theory. Given the influence of different power spectra on the thresholds, we investigate whether these inflationary models can successfully account for a significant fraction of DM. Moreover, we provide suggested values for the critical threshold. By examining the interplay between inflationary models, threshold values, and PBH abundances, our study aims to shed light on the viability of PBHs as a candidate for DM and contributes to the ongoing discussion regarding the nature of DM in the Universe. § INTRODUCTION Dark Matter (DM) is considered as one of the biggest problem in Cosmology. Recent observations, such as the detection of Gravitational Waves emitted by a binary black hole merger <cit.>, have reignited interest in the possibility that Primordial Black Holes (PBHs) could constitute a significant fraction of DM. The idea of PBHs was proposed in the 1970s by Hawking and Carr <cit.>. The renewed detection of Gravitational Waves has revitalized the exploration of the connection between DM and PBHs. In particular, there are numerous theoretical studies on the formation of PBHs from inflationary models, such as Refs. <cit.>. According to these studies an enhancement in the power spectrum at small scales can lead to PBHs formation, which could explain a significant fraction of DM (or even the whole DM) in the Universe. Many of these models are based on single field inflation with a near inflection point in the scalar potential, such as those of Refs. <cit.>. The drawback of these models is that a lot of fine-tuning in the underlying parameters is required. Other models with two-field inflation have been proposed <cit.>. For instance, hybrid models have been intensely studied in the literature <cit.>. The formation of PBHs in the majority of these models takes place in the radiation dominated epoch. In our study we only consider this class of models. The formation of PBHs occurs when a cosmological perturbation collapses to a black hole if its amplitude δ exceeds a certain threshold value δ_c. Early analytical estimates of δ_c were based on a simplified Jeans length approximation, which gives δ_c∼ w, where w is the equation of state <cit.>. More recent studies have refined this value by incorporating the theory of General Relativity, obtaining δ_c ∼ 0.4 in the radiation dominated era <cit.>. However, this analytical computation provides only a lower bound because it does not account for non-linear effects. Full numerical relativistic simulations are required to fully capture these effects, which recent studies have shown to be dependent on the initial curvature profile, with 0.4 ≤δ_c ≤ 2/3 <cit.>. Significant progress has been made in understanding the mechanism of PBH formation through detailed spherically symmetric numerical simulations that incorporate a non-linear approach<cit.>. It was previously remarked that the threshold for PBHs formation depends on the specific mechanism of inflation and the properties of the collapsing object, such as its mass, size, and initial density profile <cit.>. In addition to that, the abundances of DM from PBHs are extremely sensitive to the threshold and the exact value of the peak. In other words, the shape of the power spectrum leads to a different threshold and hence to a different abundance of PBHs to DM <cit.>. There are numerous models which adopt a specific value of the threshold, which is in accordance with the range presented in the literature, such as <cit.>. However, the exact value of the threshold has an important role in the calculation and it can lead to ruling out and not ruling out inflationary models for producing PBHs as DM. In this study we evaluate numerically the threshold for some classes of inflationary models presented in the literature. Specifically, we study the thresholds for the case of two inflation model with a non-canonical kinetic term <cit.>, the case of hybrid model <cit.> and the case of a single field with an inflection point<cit.>. Having these values for the threshold we evaluate the abundance of PBHs to DM. Instead of having an acceptable value for the threshold, we evaluate numerically these thresholds for each case. Therefore, we can conclude if these models can predict a significant fraction of DM and we can propose a value for the threshold for those models. The layout of this paper is as follows: in Section 2 we present the basic aspects for the threshold δ_c. In Section 3 we show the evaluation of δ_c for a given power spectrum. In Section 4 we present the calculation of the abundances of PBHs. In Section 5 we present the application of the previous analysis to inflationary models and especially to two field models, hybrid models and models with an inflection point. Finally, we draw our conclusions in Section 6. § THE THRESHOLD Δ_C In this section we introduce the threshold of PBHs. Generally, the threshold for PBHs is determined by the density fluctuations in the early Universe, which can collapse under their own gravity to form black holes if they exceed a certain threshold value. The PBHs are formed from cosmological fluctuations after re-entering the horizon. Under the assumption of spherical symmetry, the spacetime metric on the superhorizon scales can be given as ds^2=-dt^2+a^2(t)[ dr^2/1- K(r)r^2 +r^2 dΩ^2]=-dt^2+a(t)^2exp(2ζ(r̂))[dr̂^2+r̂^2dΩ^2] where a(t) is the scale factor and K and ζ(r) are the conserved comoving curvature perturbations defined at the superhorizon scales. Combining these expressions we have: K(r)r^2=-r̂ζ'(r̂) (2+r̂ζ'(r̂)). and the coordinates r and r̂ are connected as follows: r = r̂exp(ζ(r̂ )) dr/√(1-K(r)r^2)=exp(ζ(r̂ ))d r̂. As shown in <cit.> the shape of cosmological perturbations is related to the K and ζ(r) and so they are related to the power spectrum P_R. In other words, different scalar power spectrum profiles can result in varying thresholds. The energy density profile is defined from: δρ/ρ_b≡ρ(r,t)-ρ_b(t)/ρ_b(t)= f(w) ( 1/aH)^2 ( K(r) +r/3K'(r)) where H is the Hubble parameter, ρ_b is the mean background energy density and f(w) is: f(w)= 3(1+w)/(5+3w) with w is the equation of state w=p/ρ. The criterion to define the PBHs formation can be given at the peak of the compactification function, which is defined as follows: C≡2δ M/R(r,t) where R is the areal radius and δ M= M(r,t)- M_b (r,t). M is the Misner-Sharp mass and M_b(r,t)=4πρ_bR^3/3. If we define the expansion parameter ε as follows ε=1/a(t)r_mH(t)=1/a(t) r̂_mexp(ζ) H(t) one can express the compactification function C at the leading order O(ε^3) of the gradient expansion of Misner-Sharp equations as follows <cit.>: C ≃ f(w) K(r)r^2=f(w)( 1- [1+r̂ζ'(r̂)]^2). The expression of energy density profile is valid if ε≪ 1. Finally, the threshold δ_c is equivalent to the peak value of the compactification function, δ_c=C(r̂_m). Fluctuations with with amplitude δ_m bigger than the threshold, δ_m> δ_c, can collapse to form PBHs, otherwise fluctuations with δ_m< δ_c cannot lead to PBHs formation. Having the assumption of spherically symetric perturbation we define the dimensionless shape parameter q<cit.> as: q=-C”( r_m) r_m^2/4C( r_m). and in terms of r̂ it takes the form: q=-C”(r̂_m)r̂_m^2/4C(r̂_m)( 1-C(r̂_m)/f(w)). This parameter is characterised from the width of the peak of the compactification function. We remark here that different profiles with same parameter q have the approximated same threshold δ_c in the case of radiation epoch <cit.>. § THE ESTIMATION OF THE THRESHOLD There is a number of works where an analytical expression for the threshold δ_c was studied <cit.>. In our study we use this of Ref.<cit.>. In this section, we summarize the evaluation of the threshold for a given scalar power spectrum P_R. As we remarked previously, the shape of the power spectrum leads to a different threshold. This analysis is presented in Refs.<cit.>. In general, the power spectrum is given by P_R(k)=k^3/2π^2|ℛ_𝓀|^2 with ℛ_𝓀 the curvature perturbation and k is the comoving wavenumber. A concise analysis of the power spectrum evaluation will be presented later. In this section we follow the steps of Ref. <cit.> in order to obtain the threshold δ_c for a given power spectrum. An equivalent analysis is presented in Ref. <cit.>. In order to obtain the threshold, one should consider the two point correlation function as: g( r̂)=1/σ_0^2∫^∞_-∞dk/ksin(k r̂)/kr̂P_R(k). with σ_0^2=∫^∞_-∞dk P_R(k)/k. The Eq. (<ref>) connects the scalar power spectrum P_R with the two point correlation function. As a first step in the calculation of the δ_c, we should locate the maximum value of the compactification function r̂_m. The value of r̂_m can be obtained by the root of the following equation: ζ'(r̂_m)+r̂_mζ”(r̂_m)=0 where μ is the amplitude of curvature fluctuation and it is given as follows: μ= ζ(r̂)/g(r̂) or μ=±√(1-C(r̂_m)/f(w))-1/g'(r̂_m)r̂_m. The critical amplitude, μ_c, is obtained at C(r_m)=δ_c. In order to obtain the threshold we define the function G(r̂_m)=g'(r̂_m)-r̂_m^2 g”'(r̂_m)/2/g'(r̂_m). The threshold δ_c can be given from the following expressions: q=G(r_m)1/√(1-δ_c(q)/f(w))1/(1+ √(1-δ_c(q)/f(w))) where δ_c is given: δ_c=4/15e^-1/qq^1-5/2q/Γ(5/2q)-Γ(5/2q,1/q). The dimensionless parameter q given in Eq. (<ref>) is the shape parameter and it is defined as the shape around the peak of the compactification function given previously in Eq.(<ref>). The Eq. (<ref>) is an analytical expression to calculate the threshold δ_c as a function of the shape parameter q in radiation era<cit.>. Finally, Γ is the incomplete gamma function. § THE PBHS PRODUCTION Until now, we have introduced the evaluation of the thresholds δ_c. As we study the possibility of PBHs to explain the DM, we introduce the evaluation of the fraction of PBHs to DM, f_PBH. In this section we summarize the calculation of f_PBH. The present abundance of PBHs is given by the integral: f_PBH = ∫ dln M Ω_ PBH/Ω_ DM and the fractional abundance of PBHs to DM is: Ω_ PBH/Ω_ DM= β(M_PBH(k))/8 × 10^-16(γ/0.2)^3/2(g(T_f)/106.75)^-1/4(M_PBH/10^-18g)^-1/2, where β is the mass fraction of Universe to collapse to PBHs, γ is the correction factor which depends on the gravitational collapse, M_PBH is the mass of PBHs and g(T_f) is the effective number of degrees of freedom when the PBHs are produced. The mass of PBHs, M_PBH, is associated with the mass inside the Hubble horizon as: M_PBH= γ M_H=γ4/3πρ H^-3 where ρ is the energy density of the Universe during the collapse. The M_PBH is given as M_PBH=10^18(γ/0.2) (g(T_f)/106.75)^-1/6(k/7 × 10^13 Mpc^-1)^-2g. We choose γ=0.8 and g_*=106.75 <cit.>. The variance of curvature perturbation σ_δ is related to the power spectrum by the following expression: σ_δ^2 (M_PBH(k))= 4(1+w)^2/(5+3w)^2∫dk' /k'(k'/k)^4 P_R(k') W̃^2(k'/k) and for the i-th spectral momentum the smoothed density is give as: σ_i^2 (M_PBH(k))= 4(1+w)^2/(5+3w)^2∫dk' /k'(k'/k)^4 (k')^2i P_R(k') W̃^2(k'/k) where the equation of state, w, in radiation dominated epoch is equal to 1/3. W̃(k'/k) is the Fourier transform of the window function. In the following we assume a Gaussian window function. The fraction β can be evaluated with the Press Schecter approach (PS) <cit.> or with the Peak Theory (PT) <cit.>. In PS approach the mass fraction, β_PS, is given by the probability that the overdensity δ is above a certain threshold of collapse, denoted as δ_c. To estimate the probability of PBHs formation and establish a connection between the collapse threshold and the power spectrum, we make the assumption that curvature perturbations can be described by Gaussian statistics. This assumption allows us to analyze the formation probability of PBHs and investigate how it relates to the characteristics of the power spectrum. The fraction β_PS for this approach reads as: β_PS(M_PBH)= 1/√(2 πσ_δ ^2 (M))∫^∞_δ_c dδ  e^-δ ^2/2 σ_δ^2(M) =1/2Erfc(δ_c/√(2 )σ_δ). The Eq. (<ref>) is computed using the incomplete gamma function: β_PS(M_PBH)=Γ(1/2, δ_c^2/2 σ_δ^2)/2√(π). The PS approach has been found to underestimate the fractional abundances of PBHs by approximately two orders of magnitude <cit.>. For this reason we consider the PT as well. The peak number density is given by n_peak=∫_ν_c^∞𝒩(ν)dν=σ_2^3/(2π)^2(√(3)σ_1)^3∫^∞_ν_cdνG̃(κ,ν)e^-ν^2/2 where ν≡δ /σ_0 and ν_c corresponds to this value at the threshold δ_c. The function G̃ is given from: G̃(κ,ν)=∫_0^∞f(x)/√(2π(1-κ^2))exp[ -(x- κν)^2/2(1-κ^2)]dx where f=x^3-3x/2[ (√(5/2)x)+(√(5/8)x) ]+√(2/5π)[ ( 31x^2/4+ 8/5)e^-5x^2/8+( x^2/2-8/5)e^-5x^2/2]. The mass fraction in PT is given from: β_PT=1/√(2π)( σ_2/(aH)(√(3)σ_1))^3∫_ν_c^∞G̃(κ,ν) exp[-ν^2/2]dν. A recent approximation gives good results in comparison with the exact numerical PT <cit.>. According to this approximation the β_PT function can be approximated by: β_appr=1/√(2π)Q^3(ν_c^2-1)exp[-ν_c^2/2] where Q=σ_1/(aH√(3)σ_δ). In the following we will use this approximation. As we can notice in this section from the mass fraction β is sensitive to the exact value of the threshold. In both studies of PS and PT, Eqs.(<ref>) and (<ref>), the value of β and, hence, the value of the f_PBH is exponentially sensitive to the value of the threshold δ_c. A slightly different threshold can give different results and that is the reason for the numerical evaluation of the threshold. § THRESHOLD IN INFLATIONARY MODELS In this section we present the calculation of the threshold δ_c for different classes of models proposed in the literature, which are a two-field model with a non-canonical kinetic term, a hybrid model with a waterfal trajectory and a model with an inflection point in the effective scalar potential. So, we examine if these models can predict a significant fraction of DM. We define the parameter of each mechanism, which is responsible for the height and width of power spectrum. Finally, we evaluate the threshold and we propose a value in order these models can explain the maximum of PBHs abundances. §.§ Threshold in two field model with non-canonical kinetic term In this section we evaluate the threshold and the abundance of PBHs to DM by the two-field model studied in Ref. <cit.>. This two-field model is characterized by two stages of inflation with a large non-canonical kinetic coupling, which connects the two fields<cit.>. The perturbations at small scales are dramatically enhanced by the sharp feature in the form of non-minimal coupling. In particular, the action of a two-field toy model is given by the expression: S=∫ d^4 x √(-g)[ M_P^2/2R -1/2 (∂ϕ)^2 - e^-b_1ϕ (∂χ)^2 +V(ϕ,χ) ] where M_P is the reduced Planck mass. The potential is given as follows: V(ϕ,χ)= V_0 ϕ^2/ϕ_0^2+ϕ^2 + m_χ^2/2χ^2 where ϕ is a canonical scalar field, χ is a non-canonical scalar field and b_1 an interaction between the fields. V_0, ϕ_0 and m_χ are parameters. In the following we consider the choices of parameters in <cit.>: V_0/(m_χ M_P)^2=500 and ϕ_0=√(6)M_P. As we mentioned before, for the computation of δ_c we need as a first step to evaluate the primordial power spectrum . The power spectrum is evaluated numerically by solving the perturbation of the fields. For this reason we briefly refer to this evaluation. The equations below are written for n-fields, but one can easily obtain those for one or two fields. More details can be found in <cit.>. Generally, the equation of the fields φ^i using efold time is given from (we work in M_P=1): φ̈^c+Υ_ab^cφ̇^a φ̇^b+( 3- 1/2σ̇^̇2̇)φ̇^c+( 3- 1/2σ̇^̇2̇)V^c/V=0 where dots represent the derivative in respect to efold time and indices i=a,b,c represent derivatives with respect to the field. Υ^c_ab denotes the Christofel symbol in respect to the fields: Υ^c_ab=1/2G^cd(G_da,b +G_db,a -G_ab,d ), where G_ij is the field metric. By σ̇ we denote the velocity field which is given as follows: σ̇^2=G_abφ^aφ^b. The Hubble parameter is given by: H^2=V/3-1/2σ̇^̇2̇ and the slow-roll parameter ϵ_1 is given from: ϵ_1=1/2σ̇^̇2̇. The equation for the perturbations δφ^2 of the fields are given by: δφ̈^c+(3-ϵ_1)δφ̇^c+2Υ^c_abφ̇^aδφ̇^b+( Υ^c_ab,dφ̇^aφ̇^b+V^c_d/H^2 -G^caG_ab,dV^b/H^2)δφ^d +k^2/a^2H^2δφ^c=4Ψ̇φ̇-2ΨV^c/H^2 and the equation for the Bardeen potential Ψ is given by: Ψ̈+(7-ϵ_1)Ψ̇+( 2V/H^2+k^2/a^2H^2)Ψ=-V_c/H^2δφ^c. In this analysis we suppose that we are initially in Bunch-Davies vacuum. The power spectrum is given from the equation: P_R=k^3/(2 π)^2(R_i^2) where R_i=Ψ +H ∑_i^N_fieldsφ̇_̇i̇δφ_i /φ̇_̇i̇^2. In Fig. <ref> (left panel) we show the power spectrum for different values of the parameter b_1 given in the kinetic term of the action (<ref>). As one can notice the peak height and the width of the power spectrum depend on the parameter b_1. For the given power spectra we evaluate the value of the threshold following the steps of the Section <ref>. The threshold, which is finally given from the Eq. (<ref>), is depicted in the right panel of Fig. <ref> as a function of b_1 (black line). Therefore one can notice that diferent profiles of power spectrum lead to different thresholds δ_c. Finally, we evaluate the abundance of PBHs to DM using the PT formalism in Eq. (<ref>). In the right panel of Fig. <ref> we present the specific choice of the value of the b_1 in order to explain the whole amount of DM (orange dashed line). We observe that this model requires a threshold at around δ_c=0.515. It is also imperative to verify that the fractional abundance Ω_ PBH/Ω_ DM, given in Eq. (<ref>), does not exceed 1. In the specific case we have investigated, the value slightly surpasses 1, leading to the need to explore smaller but notable abundances. Therefore, for obtaining the maximum DM we need a slighter smaller power spectrum and thus a slighter smaller threshold. To sum up, this inflationary mechanism with two fields and a large coupling, which connects these two fields, can explain a significant amount of DM from PBHs formation. §.§ Threshold in hybrid model Hybrid models for explaining the PBHs have already been proposed <cit.>. According to these models, large curvature perturbations can be generated during a mild waterfall trajectory. In particular, these models are in the framework of two field inflation, where one field acquires tachyonic solution in the critical point and the other, which plays the role of the inflaton, becomes unstable. The analysis of the background dynamics is decomposed in two phases: a slow-roll phase until the critical point and a second waterfall phase till the end of the inflation. The calculation of the inflationary observables during these phases has been performed analytically in the slow-roll approximation <cit.>. In this section we present the two field hybrid model <cit.> for explaining the DM and we evaluate the δ_c and then the abundances of PBHs to DM. The F-term hybrid model in the globally supersymmetric renormalizable superpotential reads as follows <cit.>: W= κ S ( Ψ_1 Ψ_2 -M^2/2) , where Ψ_1, Ψ_2 are chiral superfields, the scalar component of the superfield, S is the gauge singlet inflaton field, κ is a dimensionless coupling constant and M is a mass. The scalar potential in hybrid models is given from V_F^SUSY= Λ[ (1-ψ^2/M^2)^2+ 2 ϕ^2 ψ^2/M^4 ] , where we have assumed Λ=κ^2 M^4/4 . In order to fix the non-canonical kinetic term we have |S|=ϕ /√(2) and |Ψ_1|=| Ψ_2|=ψ/√(2). The potential is flat along the direction ψ=0, |ϕ|>|ϕ_c|=M and is given by a constant value for the energy density V=κ^2 m^4. The field ψ develops tachyonic solutions if κ^2(-M^2+ϕ^2+6ψ^2)<0 . Along the flat direction this condition becomes: ϕ_c^2< M^2 . The value of the field ϕ_c denotes the critical point, as below this value the field develops tachyonic solutions. The inflaton ϕ field moves through the valley until it reaches the critical point. After that the other field, which is called waterfall, acquires tachyonic solution and the inflaton moves through the waterfall. Finally, the inflation ends in false vacuum. A potential which leads to enhancement of power spectrum and is derived from hybrid model is given as follows <cit.>: V_hybrid= Λ[ (1-ψ^2/M^2)^2+ 2 ϕ^2 ψ^2/M^4 +a_1 (ϕ- ϕ_c)+ a_2 (ϕ- ϕ_c)^2 ] where a_1 and a_2 are dimensionful parameters. These parameters play a crucial role in shaping the characteristics of the potential and directly influence the behavior of the system at the critical point The power spectrum of this two-field potential can be analytically estimated by employing the slow-roll approximation and dividing the solution into two distinct phases <cit.>. In the first phase, inflation is solely driven by the inflaton, while the influence of the other field is considered negligible. However, as the inflationary process progresses, the terms associated with the other field gradually begin to dominate, leading to a significant impact on the dynamics of the system and this signifies the onset of the second phase. The approximated power spectrum is given from the following expression <cit.>: P_R = 1/4 π^2Λ/3M_P^2( 1/ a_1^2 M_P^4 +M^4/64M_P^4 ξ_2^2 ψ_k^2) ≈1/4 π^2Λ/3 M_P^2M^4/64 M_P^4 ψ_k^2 ξ_2^2 . where ξ_2 = -√(a_1 χ_2 M)/2, with χ_2 = ln( M^3/2√(a_1)/2 ψ_0). and ψ_k =ψ_0 e^χ_k, with χ_k=4a_1M_P^4/M^3 (χ_2^1/2 M^3/2/2M_p^2√(a_1)+ M ϕ_c^1/2/4 M_P^2 a_1^1/2 x_2^1/2-N)^2. The approximation (<ref>) is in good agreement with the numerical evaluation of the power spectrum, as it is shown in Refs.<cit.>. For this reason we use this expression for the computation of the thresholds. Interestingly, it is possible to characterize any of the quantities in this expression by the combination Π: Π=M√(ϕ_c)/M_P^2√(a_1). Hence in the next we use this parameter for the evaluation of the threshold. In the left panel of Fig.<ref> we depict the power spectrum of Eq.(<ref>) for different values of the parameter Π. We plot the spectra in respect to the comoving wavenumber k, which is connected to the number of efolds as k=k_*e^N and k_* is the pivot scale. In the right panel of Fig.<ref> we repeat the calculation for the threshold δ_c analyzed in Section <ref>. It is evident that larger values of the power spectrum width lead to a decrease in the corresponding threshold. This result aligns with the findings of Ref. <cit.>, where the study focuses on the Gaussian and lognormal power spectrums. As the width of the power spectrum increases, the shape parameter q decreases, indicating that the participation of multiple modes in the collapse results in a flatter compactification function <cit.>. Through this work we used PT instead of PS approach. To illustrate the contrast between these two methodologies we present a comparison in Fig.<ref> of how the f_PBH varies with respect to the parameter Π. In this figure we display the computed PBH abundance using both the PS (blue line) and PT (orange line) approaches. Remarkably, a significant disparity emerges between the PS approach and PT. When employing the PS approach, it is shown that we cannot explain a remarkable amount of DM. The fraction abundances increase as the value of Π decreases. However, in order to have such abundances we need the peak less than 10^-8 and these regions are restricted from microlensing for Subaru (HSC), Eros/Macho/Ogle <cit.>, ultra faint dwarf (UFD) <cit.> and CMB measurements <cit.>. In contradiction to PS approach, we observe a notable fraction of 𝒪(1) with the PT at the values of Π=18. These findings highlight the effectiveness of the PT approach, as it offers the opportunity to potentially explain even the whole DM, thus yielding valuable insights for further exploration and analysis. Moreover, it is of utmost importance to ensure that the maximum value of the ratio Ω_PBH/ Ω_DM does not exceed 1. In this particular instance, we have determined that this value is lower than one, and so we can elucidate the whole DM content. Therefore, the hybrid models can obtain even the whole DM and the value of the threshold can be at around δ_c=0.51. §.§ Threshold in model with an inflection point An intriguing possibility in the framework of inflation is that the existence of an inflection point in the single field potential could provide an explanation for the generation of PBHs and subsequently account for a significant portion of the DM in the Universe. This mechanism arises when the slow-roll parameter, represented by ϵ_1 in Eq. (<ref>), acquires a substantial value, resulting in the violation of the slow roll approximation. Importantly, this parameter remains below one, allowing inflation to continue. Following this, a phase emerges where the inflaton remains nearly constant, leading to a local amplification. During this plateau, the power spectrum experiences enhancement, facilitating the production of PBHs during the radiation-dominated phase of the early Universe. A lot of works have adopted the idea of an inflection point Refs.<cit.>. In many previous works, a value of δ_c in the range of acceptance has been used. Generally, one can find the exact value of δ_c for these models and then with fine tuning of the parameters which give the peak height, they can have a significant fraction of DM. For the study of single field inflation with an inflection point we present the model <cit.>. Similar result one can obtain for other models as well. The model in Ref.<cit.> is embedded in no scale supergravity theory. The Kähler potential and superpotential are given from Eqs.: K= -3 ln(1-|y_1|^2/3-|y_2|^2/3), W= m(-y_1y_2 +y_2y_1^2/l√(3))(1+c_2e^-b_2y_1^2y_1^2) where y_1 and y_2 are chiral fields and l, b_2 and c_2 are parameters. The inflationary direction can be fixed assuming one field of these fields as the inflaton and the other as the modulo one. The analysis is presented in  Refs<cit.>. The scalar potential V can be expressed in terms of the Kähler potential, K and the superpotential W: V= e^K/M_P^2[( K^-1)^i_j̅(W^j̅+WK^j̅/M_P^2) (W̅_i+W̅K_i/M_P^2)- 3|W|^2/M_P^2] where ( K^-1)^i_j̅ is the inverse of Kähler metric and K^i=∂ K/ ∂Φ_i. After the evaluation of the potential and concerning the non-canonical kinetic term of the Lagrangian, one can obtain an inflection point in the effective scalar potential. One should also take into account specific values for the parameters in order to obtain significant peaks at small scales in the effective scalar potential. In this analysis we adopt the parameters of Ref<cit.>. In Fig.<ref> (left panel) we depict the power spectrum as a function of the comoving wavenumber k for different choice of the parameter b. One can notice that a lot of fine tuning is required for obtaining a capable peak height of the power spectrum. For the evaluation of power spectrum we solve numerically the perturbation of the field, as they given from the differential equations (<ref>) reduced in the single field case. Finally we derive the power spectrum from Eq. (<ref>). In the right panel of Fig.<ref> we depict the evaluated thresholds δ_c for some choices of the parameter b_2 with the formalism from Section <ref>. The dashed line in the right panel corresponds to the case where we have f_PBH=1. The abundances of PBHs is calculate with the PT, as before. However, it is important to ensure that the maximum value of the ratio Ω_PBH/ Ω_DM does not exceed 1. The results are similar to the case of two field models with a large curvaton field. This value slightly surpasses 1, leading to the need to explore smaller abundances of PBHs. Therefore, inflationary models with an inflection point can predict a significant amount the DM from PBHs with the value of the threshold at around δ_c=0.503. Nevertheless, the drawback of these models is that a lot of fine tuning to the interlining parameters is required. An additional problem for these models is that the one loop correction at large scales are too large to accept the validity of perturbation theory <cit.>. § CONCLUSIONS The formation of PBHs has been the subject of study for decades, with recent advances in numerical simulations providing a more complete understanding of the mechanism involved. The production of PBHs can be explained by a significant enhancement in the scalar power spectrum at small scales. In this work we study the production of PBHs from inflationary models by concerning the evaluation of the threshold δ_c. Specifically, we study the evaluation of the threshold δ_c in some proposed mechanisms of producing PBHs in the framework of inflation. The first mechanism is based on a two field model with a non-canonical kinetic term, which is responsible for an important enhancement in the scalar power spectrum. The second one is based on a hybrid model characterized by a waterfall trajectory. The last mechanism is a single inflation with an inflection point in the effective scalar potential. After evaluating the thresholds for PBHs formation, we calculate the fractional abundance of PBHs in relation to DM. Two approaches were used for this calculation: the PS approach and the PT. We observed that the PS approach tends to underestimate the results, thus necessitating the adoption of an approximation based on the PT. Our findings indicate that mechanisms based on both two-field inflation and single-field inflation with an inflection point have the ability to account for a significant fraction observed DM in the Universe, given specific parameter choices. Furthermore, we have shown that the mechanism of a mild waterfall in the hybrid model has the capability to account for the entirety of the observed DM content in the Universe. Notably, when comparing the results obtained from the PS approach with those from the PT, it becomes evident that the PS approach yields lower values for the abundances in contrast to the PT approach. Consequently, while the PS approach imposes constraints on these models in order to explain only a fraction of the DM, the PT evaluation of the fraction provides the potential to explain even the entire DM. This discrepancy underscores the enhanced effectiveness and broader explanatory power of the PT approach in understanding the origins of DM in the Universe. Finally, it is worth noting that in all the investigated models, the threshold value consistently converges around 0.51. This consistent threshold value holds significant implications for future comparative studies focused on the production of PBHs through inflationary processes. Understanding the precise threshold value provides a crucial benchmark for assessing the viability and efficiency of different inflationary scenarios in generating PBHs. These findings pave the way for more comprehensive and in-depth investigations into the underlying mechanisms that contribute to the formation of PBHs. § ACKNOWLEDGMENTS We would like to express our gratitude to S. Clesse, A. Escriva, and I. Musco for their fruitful discussions and valuable insights. Their contributions have greatly enriched this work. elsarticle-num
http://arxiv.org/abs/2306.04775v1
20230607204835
Exploiting Observation Bias to Improve Matrix Completion
[ "Sean Mann", "Charlotte Park", "Devavrat Shah" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Loss Functions for Behavioral Game Theory Greg d'Eon University of British Columbia Sophie Greenwood University of British Columbia Kevin Leyton-Brown University of British Columbia James R. Wright University of Alberta ================================================================================================================================================================================================================================================ We consider a variant of matrix completion where entries are revealed in a biased manner, adopting a model akin to that introduced by <cit.>. Instead of treating this observation bias as a disadvantage, as is typically the case, our goal is to exploit the shared information between the bias and the outcome of interest to improve predictions. Towards this, we propose a simple two-stage algorithm: (i) interpreting the observation pattern as a fully observed noisy matrix, we apply traditional matrix completion methods to the observation pattern to estimate the distances between the latent factors; (ii) we apply supervised learning on the recovered features to impute missing observations. We establish finite-sample error rates that are competitive with the corresponding supervised learning parametric rates, suggesting that our learning performance is comparable to having access to the unobserved covariates. Empirical evaluation using a real-world dataset reflects similar performance gains, with our algorithm's estimates having 30x smaller mean squared error compared to traditional matrix completion methods. § INTRODUCTION The problem of matrix completion – estimating a matrix from several noisy observations of its entries – has received considerable attention in recent years. Matrix completion has far-ranging potential applications, from use in clinical trials to public policy evaluation (<cit.>). Moreover, there are a myriad of applications that utilize large matrices where data entries are missing and matrix completion becomes a natural pre-processing task. To that end, consider an n × n matrix = [x_ij]_i, j ∈ [n] of interest[We assume square matrix for convenience. Notation [n] = {1,…, n}.]. We have noisy observation of a subset of its entries. Let the matrix = [a_ij] ∈{0,1}^n× n denote the observation pattern, where a_ij = 1 if measurement for entry (i,j) is observed 0 otherwise. If a_ij = 1, we have noisy observation of x_ij, denote as y_ij where y_ij = x_ij + ξ_ij where the ξ_ij are independent, zero-mean, subgaussian noise. If a_ij = 0, then y_ij = ⋆ is considered “missing”. Let = [y_ij]_i, j ∈ [n] denote this noisy observation matrix. MCAR data. Much of the existing literature on matrix completion focuses on the model where entries are missing completely at random (MCAR) : for each i, j ∈ [n], a_ij = 1 with probability p > 0 and 0 with probability 1-p independently, i.e. a_ij∼(p). Some prominent early works include that of <cit.>, <cit.>, and <cit.>. As a more recent example, the work of <cit.> proposes a simple estimation approach known as Universal Singular Value Thresholding () that achieves the minimax error rate up to a constant factor for MCAR data and performs well in practice. Moreover, the recent work of <cit.> proposes an approach similar to the work of our paper but in the MCAR model, achieving a consistent estimator using a nearest-neighbor based approach. However, despite the wealth of research focusing on this model, it is often not applicable in practice, as there are often covariates influencing both the observation pattern and the outcome. Thus, in order to understand the practical conditions under which matrix completion can be performed, we must focus our attention on the model previously described where observations are missing not at random (MNAR). Motivation for MNAR Data. The setup we describe for MNAR data can be found in many situations of interest, most notably perhaps being recommender systems. For example, consider the case of movie recommendations where the rows of correspond to users, the columns correspond to movies, and an entry x_ij corresponds to user i's rating of movie j. In this scenario, we wish to understand how a user would rate a movie they have not seen, allowing us to provide the user with recommendations for new movies they are likely to enjoy. We note, however, that users are less likely to view movies in genres they do not like, meaning that they are less likely to rate these movies. Clearly the observation pattern for such a matrix is not uniform at random as users' genre preferences affect which movies they give ratings for. Another example of a non-random observation pattern is with clinical trials, where we may consider the rows of a matrix to represent participants, the columns to represent treatments, and the entries to represent a participant's response to a treatment. In such a scenario, we aim to estimate the response of each patient to each possible treatment, allowing us to frame the problem as one of matrix (or tensor) completion. However, we can see that which patients receive which specific treatment may be confounded by conditions such as age, gender, or the specific variant of a disease, implying again that the data would not be missing completely at random. Related works on MNAR data. Though most work on matrix completion does not apply to the MNAR model, there have been several more recent approaches proposed to handle entries that are revealed with non-uniform probability: a_ij∼(p_ij) independently. In the work of <cit.>, the problem is approached as follows: first estimate the matrix = [p_ij]_i, j ∈ [n] using standard matrix completion approach; then, estimate ∘ = [p_ijx_ij] using standard matrix completion approach, and then by diving entries of second estimate by first. This method makes relatively few assumptions about the underlying model, the most important being that has a low nuclear norm and all entries of are large enough (i.e. can be sparse matrix). Many other works dealing with this model also take similar approach. For example, <cit.>, <cit.>, <cit.>, and <cit.>. A recent work in the MNAR model taking a different approach is that of <cit.>, which proposes a method known as Synthetic Nearest Neighbors that estimates directly from the partially observed matrix when is low-rank but requires certain structure in the observation pattern. While <cit.>, <cit.>, and other approaches in the literature propose consistent estimators, neither directly exploits the relationship between the latent factors governing the missingness pattern and those governing the outcome matrix. One recent work that does exploit this shared information is that of <cit.>, in which it is assumed that the observation probability is a scalar function of the outcome itself. They propose a similar multi-stage algorithm to the one presented in this paper, though their assumption about the relationship between the observation probability and the outcomes is quite limiting. Contributions. Building upon recent prior works, we propose a model for the MNAR setting where both the observation pattern and the outcome depend on the common latent factors. In particular, let a_ij∼(p_ij) with p_ij∝_i^⊤_j where _i, _j ∈ℝ^d are d-dimensional latent factors associated with user i and item j respectively. The entries of interest x_ij = f(_i, _j) for some Lipschitz function f: ℝ^2d→ℝ so that f has low-rank factorization f(, ) = ∑_k=1^r θ_k() ϕ_k() with Lipschitz factors θ_k, ϕ_k : ℝ^d→ℝ for k ≤ r. In the recommendation context, it captures the phenomenon that users are less likely to view movies in genres they do not enjoy, and would thus rate worse. That is, information about users' preferences can be extracted both from their ratings and from what movies they have chosen to rate. This shared information corresponds to common latent features which affect both the observation pattern and the outcome in our model. Indeed, we utilize bias in observations to our advantage. The matrix = [a_ij] is fully observed with 𝔼[] = being a low-rank matrix. Therefore, we can estimate the latent factors _i, _j for all i, j using traditional matrix completion methods reasonably accurately. Specifically, our method Mask Nearest Neighbors () estimates the distances between latent factors associated with the users and items using . Such distances allow us to cluster the entries of the partially observed matrix into groups of users and items that are similar. In the later stages of , we use these learned clusters to apply k-nearest neighbors and impute the matrix. We compare the finite-sample error rates of with the supervised learning parametric rates for the function f. If we have N samples observed in a parametric setting where we are trying to learn a Lipschitz f: ℝ^2d→ℝ, the parametric rate scales as N^-1/2d + 4. Under our algorithm, when N samples are observed (on average) with a biased observation pattern, the effective error scales as ∼ N^-1/2d for d≥ 2. Thus, we show that by having a factorization of f, is able to achieve effectively the parametric error rate despite the handicap of not observing the features or latent factors. Indeed, with r=1 and constant ϕ_1, the rate is lower bounded by N^-1/d+2. See Theorem <ref> and subsequent discussion for details. Empirically, we establish the performance gains promised by our theoretical results in the real-world setting. Specifically, we utilize more than a million interaction data points of users on the Glance platform – a smart lock-screen that aims to personalize user experience through recommending dynamic lock-screens (also called glances). We find that is able to improve the mean-squared error 30x compared to a standard matrix completion method (see Table <ref> for details). We also report empirical performance using a synthetic dataset to discuss nuanced properties of that are not necessarily captured by theoretical results. § PROBLEM SETUP We consider a recommender system setup with n “users” and “items” respectively. We enforce the number of users and items to be equal for simplicity of exposition, since the proposed algorithm and its analysis then become symmetric. The practical application of the algorithm does not require the numbers of users and items to be equal. Latent factor model. We posit that users and items can be fully described by d-dimensional unit vectors, where d ≪ n is thought of as a constant. Specifically, user i and item j are associated with _i, _j ∈^d-1 respectively, which we assume are independently and uniformly distributed on the unit sphere. We will refer to {_i}_i ∈ [n] and {_j}_j ∈ [n] as the user and item latent factors. Such latent factors determine both the probability of observation as well as the outcome of interest. Outcomes. The outcome of interest can be interpreted as an affinity score between a user and an item. In our model, the outcome can be expressed as a sum of r ≪ n separable functions, x_ij = f(_i, _j) = ∑_k=1^r θ_k(_i) ϕ_k(_j). Here, θ_k, ϕ_k: ^d-1↦ [B_ℓ, B_h] are L-Lipschitz functions. As such, the outcome, despite forming a low-rank matrix of rank r, is not a simple bilinear function of the user and item latent factors, as is typically the setup in matrix completion. Observation model. Our setup also differs from the classical matrix completion literature in that entries are not randomly revealed with uniform probability p. Rather, the observation probability is p_ij = ρ_n/2 (_i^⊤_j + 1), where ρ_n is a sparsity factor that is allowed to vary with n. This means that the observation pattern induced is very closely related to the random dot product graph construction <cit.>. Considering the infamous selection bias issue in recommender systems, which is well-displayed in <cit.>, this is a more realistic assumption. The observation indicator is then a_ij∼(p_ij). Having a_ij = 1 means the entry (i, j) is revealed, and we observe a noise-corrupted outcome y_ij = x_ij + ξ_ij where ξ_ij∼(σ^2). Rationale. We believe this setup strikes a good balance between richness, i.e. allowing the outcome matrix and observation probabilities to be sufficiently different, and structure, i.e. maintaining a large amount of shared information between the two quantities. For instance, this model is more general than that of <cit.>, which posits that the expected outcome for user i and item j can be written as _i^⊤_j and the observation probability is a scalar function of the outcome. Goal. Given the observation pattern {a_ij}_i, j ∈ [n] and observations = {y_ij: a_ij = 1}, we desire estimators x̂_ij≈ x_ij with finite-sample error rates. §.§ Notation Key quantities. Define the observation probability matrix = [p_ij] ∈ [0, 1]^n × n and the observation pattern = [a_ij] ∈{0, 1}^n × n. Then we can write = + where = [η_ij] is a matrix of independent, centered Bernoulli noise. Further, let = [ _1,…,_n ]^⊤∈^n × d and, analogously, = [ _1,…,_n ]^⊤∈^n × d be the user and item latent factor matrices. With being the all-ones vector, we can then write = ρ_n/2 (^⊤ + ^⊤). We will often need to refer to individual rows and columns of matrices; for example, _i · or _i denotes the i-th row of , written as a column vector, while _· j denotes its j-th column. Norms. Denote the Euclidean norm of a vector by . For a matrix , denote its largest singular value (i.e., operator norm) and Frobenius norm as and respectively. We will also denote the largest and smallest singular values of as σ_max() and σ_min(). Asymptotics. For real sequences a_n and b_n, n ∈, we write a_n ≲ b_n, or equivalently a_n = O(b_n), if there exist finite n_0 and c such that for all n ≥ n_0 it holds that a_n ≤ c · b_n. We write a_n ≪ b_n if lim_n →∞ a_n / b_n = 0, which implies a_n ≲ b_n. Finally, writing a_n = Õ(b_n) has the analogous meaning to a_n = O(b_n), except log dependencies are ignored. § ALGORITHM The proposed algorithm Mask Nearest Neighbors () has three stages. First, the observation pattern is used to estimate pairwise distances among user and item latent factors. Second, the users and items are clustered using the estimated distances, which facilitates the construction of a clustered outcome matrix. Finally, the clustered outcome matrix is completed by exploiting its rank-r structure and the fact that its sparsity and noise level are significantly reduced, compared to its individual-level counterpart, via clustering. Distance estimation. The first step of the algorithm is to estimate the user-user and item-item distances in the latent space. To that end, we first define the centered observation pattern = - ρ_n/2^⊤. Note that ≜[] = ρ_n/2^⊤ is of rank d. From this, we compute its singular value decomposition = 𝐐Σ𝐖^⊤ and construct a rank d approximation of = ∑_i = 1^d σ_i _i _i^⊤ where σ_1 ≥…≥σ_n are the singular values of , and _i and _i are the corresponding left and right singular vectors. For users i and j we can then define the distance estimate _ij = 2/ρ_n√(d/n) _i - _j which we will show converges to the true latent space distance _i - _j. Repeating the procedure on the columns of yields distance estimates between item latent factors, which will be denoted as _ij. Clustering. We group the users and items by first selecting maximal subsets that are sufficiently separated. Concretely, we initialize the central users, denoted ^⋆⊆ [n], as the empty set. Then, we repeatedly add an arbitrary user to the set if its estimated distance from any other central user is at least 6ε_n, where ε_n is a hyperparameter we defer quantifying. ^⋆ is defined analogously. By assigning each non-central user to its closest central counterpart in estimated distance, a partition of the users ⋃_i ∈^⋆_i = [n] is formed. Similarly construct the item partition ⋃_j ∈^⋆_j. Note that we index the partitions by the sets ^⋆, ^⋆ to avoid confusion from multiple indexing systems. The observations can then be partitioned as = ⋃_i ∈^⋆⋃_j ∈^⋆{y_kl: (k, l) ∈_ij}, where _ij = {(k, l): a_kl = 1, k ∈_i, l ∈_j} is the set of observed outcomes between users in cluster i and items in cluster j. Matrix completion. At this stage, we are ready to define the ground truth clustered outcome matrix = [h_ij]_i ∈^⋆, j ∈^⋆, where h_ij = f(_i, _j), i ∈^⋆, j ∈^⋆. We again emphasize that the rows and columns of are indexed by central users ^⋆ and central items ^⋆ for simplicity. Using data, we form the unfilled clustered outcome matrix ∈^|^⋆| × |^⋆|, where h̅_ij = 1/|_ij|∑_(k, l) ∈_ij y_kl if |_ij| ≥ N_n; ⋆ otherwise. N_n is a hyperparameter that governs whether there are “enough” observations available to produce a sufficiently accurate estimate of h_ij via simple averaging. We will impute the missing entries of using the available entries. Further define the imputed clustered outcome matrix , with ĥ_ij = h̅_ij if h̅_ij≠⋆; (, i, j) otherwise. The subroutine (, i, j) is defined as follows, for the case r = 1: consider the bipartite graph = (^⋆, ^⋆, {(i, j): h̅_ij≠⋆}), i.e. edges of correspond to revealed observations of central users and items. If one exists, let = (i_1, j_1, …, i_ℓ, j_ℓ) be a shortest path in from user cluster i_1 to item cluster j_ℓ, where ties are broken arbitrarily but deterministically. Then (, i_1, j_ℓ) = h̅_i_1 j_1∏_k=2^ℓh̅_i_k j_k/h̅_i_k j_k-1 if exists; 0 otherwise. One can easily verify that if r = 1, is connected, and h̅_ij = h_ij for all revealed entries of , then (, i_1, j_ℓ) = h_ij exactly. Indeed, our analysis shows that for appropriate choices of ε_n, N_n and sufficiently large n, is connected with overwhelming probability. Moreover, where revealed, |h̅_ij - h_ij| is small enough for the propagated error in the imputed entries ĥ_ij to still be small. When r > 1, a similar algorithm can be used after further grouping the rows and columns of into blocks of size r. Prediction. Once is formed, imputing all entries is simple: for i ∈_k and j ∈_l, we return x̂_ij = ĥ_kl. In words, to predict the outcome generated by an arbitrary user-item pair, return the estimated outcome between their “most similar” central user and item respectively. § RESULTS Consider the setup introduced in Section <ref> when r = 1 and d ≥ 2. Suppose ρ_n = n^-β where β < Γ(d) < 1/4, with Γ(d) denoting an increasing function needed to guarantee that the distance estimation error decays faster than ε_n. Correspondingly, we pick the hyperparameters ε_n = n^-1 - β/2/d, and N_n = (n), with (n) being defined in Lemma <ref>. Then, for sufficiently large n, with probability at least 1 - Cδ where C is an absolute constant, max_i, j ∈ [n] *x̂_ij - x_ij = Õ*n^-1 - β/2/d. In order to prove the above theorem, we will split our analysis into probabilistic and deterministic components. To prove the probabilistic part, we will define several high probability events and show that with exponentially decaying probability, all the events occur together. Then, conditioning on these events occurring, we will proceed with a deterministic analysis of the approximation error of the the matrix completion stage of , which will comprise the remainder of the proof. We defer the full proof of Theorem <ref> to Appendix <ref>. Comparison with parametric rate. We now characterize our error rate in terms of the number of observations available. For any choice of β, in expectation we receive N = n^2 - β observations in the dataset . Thus, we have that n ≈ N^1/(2 - β) and our error bound scales as max_i, j ∈ [n]*x̂_ij - x_ij = Õ*N^-1 - β/2/d(2 - β) = Õ*N^-1/2d. Contextualizing this result, we note that the parametric error rate for k-nearest neighbor methods scales as N^-1/(d + 2) <cit.>, where N is the number of observations, for Lipschitz continuous functions over d dimensions. Meanwhile, our setup concerns sums of products of Lipschitz functions in d dimensions, which is slightly different. Hence, we compare our bound with the parametric rate in two regimes: (i) in the degenerate case where the outcome does not depend on either the user or item latent factors, then f reduces to a Lipschitz function on a d-dimensional domain. In this case, for moderately large d, our rate scales roughly as the square-root of the parametric rate; (ii) when the function of interest f depends nontrivially on both the user and item latent factors, then f can be considered a 2d-dimensional function, Lipschitz in both arguments. Comparing with the corresponding parametric rate of N^-1/(2d+4), we find that this additional factorization structure interestingly brings a small improvement over the parametric rate. Concluding, estimating x_ij using achieves an error rate that is competitive with the parametric rate for sufficiently slowly decaying ρ_n, despite the following unconventional challenges: (i) the algorithm does not have direct access to the covariates and , but rather has to estimate them; (ii) samples are drawn in a biased manner as opposed to the uniform sampling of datapoints over the domain, which is typically assumed. § EXPERIMENTS §.§ Synthetic Data Data. We performed several experiments with different numbers of datapoints n and sparsity values ρ_n, all of which had data generated using the following steps. First, we generated rank d = 5 latent factors ∈ [-1, 1]^n × d, ∈ [-1, 1]^n × d by sampling d entries i.i.d from a standard normal distribution for each of the n users and items and then normalizing each row to have unit norm. We then generated the matrix of probabilities as described in our observation model by setting = ρ_n/2 (^⊤ + ^⊤). Next, we sample ∼(). We defined our outcome matrix = θ()ϕ()^⊤ where θ() = e^√(d) and ϕ() = e^√(d). We then generated the subgaussian noise terms ξ_ij by sampling n^2 entries i.i.d from a standard normal distribution. Finally, the partially-observed noisy outcome matrix was generated by setting y_ij = x_ij + ξ_ij if a_ij = 1 and y_ij to be a null value otherwise. Algorithms compared. We compare the performance of the proposed algorithm against a standard matrix completion algorithm . In the experiments, we utilize a practical implementation of the proposed algorithm where (i) is used in place of in the distance estimation step and (ii) the matrix completion from the clustered outcome matrix is done using alternating least squares (ALS). We compare against a modified version of where the rank of the matrix is known, allowing for more comparable performance against which utilizes the matrix rank. We note that is designed to handle biased data while the modified algorithm is not. Metrics. We use R^2 score, MSE, and MAE to assess the similarity of the estimated full outcome matrix with the true outcome matrix. We directly compare the distribution of estimated entries to the distribution of true outcomes to assess the bias of the algorithms. Experiment setup. We performed two different experiments with the described algorithms. Throughout both experiments, we used 16-fold cross-validation to tune the hyperparameters. In the first experiment, we tried 3 values of n ∈{100, 562, 3162}. We set ρ_n = n^-1/4. We then ran both and modified with these values of n and compared the resulting estimates. In the second experiment, we set n = 1000 and tested 3 values of β∈{0, 1/6, 1/4}. We set ρ_n = n^-β. We then again ran both and modified and compared the bias of the resulting estimates. We concluded by doing 10 experimental runs of both and modified with all of these ρ_n = n^-β values and to calculate the R^2, MSE, and MAE as a function of n for each β. Results. Before comparing the performance of and modified on the synthetic dataset, we examine the bias of the estimates for the full outcome matrix in both cases (experiments 1 and 2). As can be seen in <ref>,the distribution of estimates generated by better approximates the true distribution of outcomes as the number of datapoints increases. Moreover, as n increases, the shape of the distribution indicates that the results are less biased. However, the estimates generated by modified do not display this same trend and do not appear to become less biased as the number of datapoints increases. This difference in bias can also be seen in <ref> as n is held constant at 1000. As ρ_n decreases from 1 to n^-1/4, the bias of barely changes but the bias of modified increases significantly. Now, we consider R^2 score, MSE, and MAE as metrics to compare the estimates made by and modified against the true outcomes. The results of this experiment can be seen in <ref>. We can see that across all these metrics, outperforms modified . When comparing the R^2 scores, we see that the performance of the two algorithms varies more as ρ_n decreases (β increases). The results of these R^2-, MSE-, and MAE-based metrics all line up with the observations made about the relative bias of the estimates in the two algorithms, indicating that the latent factor clustering approach utilized by is indeed de-biasing the data and leads to better estimates. §.§ Real-World Data Data. We used a dataset obtained from Glance[Glance (www.glance.com) is a smart lock screen that aims to personalize the screen.] detailing the interaction of users with different pieces of content, considering a partially observed matrix where rows represent users, columns represent the different pieces digital content, and outcomes are the natural logarithm of the duration (in seconds) that the user interacted with the content. The dataset we considered had 1014696 observed measurements for 1305 users and 1471 content items. The sparsity of the dataset was thus 53% and consistent with the observation model described in our setup. Algorithms compared. We compare the same practical implementation of and a modified version of described in the synthetic data experiments. Metrics. As in the synthetic data experiments, we used R^2 score, MSE, and MAE to assess the similarity of our estimates with the true outcomes for the test sets. We also directly compare the distributions of estimated entries to true outcomes in our test set. Experiment setup. We split the data into 90/10 train/test sets 10 times. We determined the best estimate of the rank of the true outcome matrix and the rank of the observation pattern using 9-fold cross-validation with . Additionally, we used 16-fold cross-validation to separately tune the other hyperparameters. We ran both and modified on the training set in each experimental run. Results. As we can see from <ref>, the estimates from modified are extremely biased when the algorithm is used with MNAR data. The estimates from , however, appear to be minimally biased. Moreover, from <ref>, we can see that the the estimates made by modified are very sensitive to outliers in the data, while the estimates from are not. We also observe from <ref> that the R^2, MSE, and MAE of are far better than that of modified , implying that works significantly better on MNAR data in practice compared to methods such as designed for MCAR data. § CONCLUSION We have proposed , a matrix completion method operating under the MNAR model that exploits the shared information between the outcome and observation pattern to improve predictions. The desirable theoretical properties and empirical performance of highlight the value of modeling these shared latent factors. From a statistical learning perspective, we have shown that the parametric rate for nearest-neighbor-style algorithms can almost be achieved even when (i) covariates are unobserved, and (ii) observations are sampled in a biased manner; this could be of independent interest. Finally, we note that valuable future directions of research include (i) extending the analysis of to the general case of r > 1, and (ii) potentially relaxing the conditional independence assumption between different entries of the observation pattern. plainnat § PROOF OF THEOREM <REF> §.§ Key lemmas We first introduce a series of lemmas which will help establish a series of events that occur jointly with high probability, thus enabling the algorithm to perform well. Distance estimation. First, towards guaranteeing good clustering performance, we must show that the estimated distances converge to the true distances between pairs of users/items. Consider the model introduced in Section <ref> and the algorithm proposed in <ref>. With probability at least 1 - C'δ, max_i ≠ j*_ij - _ij≤(n), where, for an absolute constant C > 0, (n) ≜C/ρ^2_n√(d^3/nlog*2n/δ). Recall the distance estimate _ij = 2/ρ_n√(d/n) _̂̃̂ĩ̂ - _̂̃̂ĵ̃̂. We will first show that the analogous estimator with the true centered probability matrix is accurate, then show that the estimated centered probability matrix converges to , concluding the result. Note that 2/ρ_n√(d/n) _i - _j = √(d/n) (_i - _j). Thus, σ_min*√(d/n) _i - _j≤2/ρ_n√(d/n) _i - _j≤σ_max*√(d/n) _i - _j. where σ_min() refers to the smallest singular value of and so on. Since √(d)· is a random matrix with i.i.d. isotropic rows, using Lemma <ref> and picking t = √(log(2/δ)/c) we have with probability 1 - δ *σ_k*√(d/n)  - 1 ≤ C√(d/n) + √(log(2/δ)/cn) ≤ C√(d/nlog*2/δ) for any singular value σ_k. Thus, *2/ρ_n√(d/n) _i - _j - _i - _j ≤ C√(d/nlog*2/δ)·_i - _j ≤ C√(d/nlog*2/δ). Therefore, if we had access to the ground truth centered probability matrix , then we can estimate the inter-user and inter-item distances accurately. Of course, we can only use to estimate with SVD, so now we will show that _j - _j is also small using Lemma <ref>, a matrix perturbation bound. Towards this, recall = + where () = d, and is the rank-d truncated SVD of ; in the context of the lemma, we are setting =, =, while is the identically named perturbation matrix. Let us denote the reduced SVD of as _d _d _d^⊤. In order to apply the result, we first need to produce several bounds on norms of relevant matrices, all of which appear in the statement of Lemma <ref>. Step 1: Upper bounding . The noise matrix contains independent centered Bernoulli noise, which is bounded and thus sub-gaussian. Using Lemma <ref> and choosing t = √(log(2/δ)/c) again, we have for sufficiently large n ≥ c' log(2/δ) and some c' > 0 that ≤ 2C √(n) + √(log(2/δ)/c) ≤ C' √(n). Note that quantities such as c, C represent generic absolute constants and their meanings and values may change from line to line. Step 2: Lower bounding σ_d(). Recall = ρ_n/2^⊤. By (<ref>), we have for sufficiently large n ≥ c' d log(2/δ) that σ_d() ≥√(n/d) - C' √(log*2/δ) ≥ C”√(n/d). By the same argument, we condition on the same bound holding for σ_d(). Then, since , are full rank almost surely, σ_d() ≥ρ_n/2·σ_d() ·σ_d() ≥C ρ_n n/2d. Step 3: Upper bounding _j. Note that _j = ^⊤ e_j with e_j denoting the j-th standard basis vector. Thus, _j≤≤ C'√(n) as shown in (<ref>). Step 4: Upper bounding _j. Recall _j = ρ_n/2_j. Using (<ref>), which bounds the largest singular value of , we have for sufficiently large n ≥ c' d log(2/δ) that _j = ρ_n/2_j ≤ρ_n/2_j ≤ρ_n/2*√(n/d) + C' √(log*2/δ) ≤C”ρ_n/2√(n/d). Step 5: Upper bounding _d _d^⊤_j. First, since _d contains orthonormal columns, _d _d^⊤_j ≤_d_d^⊤_j ≤_d^⊤_j. We will bound this quantity using Lemma <ref>. Towards showing that _d^⊤_j is a sub-gaussian random vector, consider any fixed ∈^d-1. Then, denoting _d = [r_ki]_k ∈ [n], i ∈ [d], ^⊤_d^⊤_j = ∑_i=1^d ∑_k=1^n v_i r_kiη_jk which is a linear combination of independent sub-gaussian random variables η_jk∼(1/4), and thus sub-gaussian. Moreover, its variance proxy is 1/4∑_i=1^d ∑_k=1^n v_i^2 r_ki^2 = 1/4∑_i=1^d *v_i^2 ∑_k=1^n r_ki^2 = 1/4∑_i=1^d v_i^2 = 1/4. Therefore, _d^⊤_j is a (1/4)-sub-gaussian random vector. Using Lemma <ref> with t = √(d/2log*2n/δ) and union bounding over all rows of yields max_j ∈ [n] _d^⊤_j≤√(d/2log*2n/δ). Combining. Using Lemma <ref>, whose relevant quantities we have just bounded in (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), we have by substitution that _i - _i^2 ≤ C ·*n/ρ_n^2 n^2 / d^2*C' n + ρ_n^2 n/4d + d/2log*2n/δ ≤ C ·*d^2/ρ_n^2*C' + ρ_n^2/4d + d/2log*2n/δ ≤ C ·*C' d^2/ρ_n^2 + d/2log*2n/δ ≤C d^2/ρ_n^2log*2n/δ. Scaling appropriately according to (<ref>) then yields 2/ρ_n√(d/n) _i - _i≤C/ρ^2_n√(d^3/nlog*2n/δ). Finally, combining (<ref>) and (<ref>) and using the triangle inequality yields *2/ρ_n√(d/n) _i - _j - _i - _j ≤2C/ρ^2_n√(d^3/nlog*2n/δ) + C' √(d/nlog*2/δ) ≤C/ρ^2_n√(d^3/nlog*2n/δ). Moreover, during the course of this analysis we have conditioned on several events that occur with probability at least 1 - δ. By the union bound, they occur jointly with probability 1 - C'δ for some absolute constant C', completing the proof. Coverage. A key phenomenon that underpins our results is dense coverage of the latent feature space by users and items. Towards guaranteeing this, let Σ_n ⊂^d-1 be a minimal ε_n-net of ^d-1, with the properties that - ' > ε_n for distinct , ' ∈Σ_n and for any ∈^d-1 there exists ∈Σ_n such that - ≤ε_n. It is widely known that |Σ_n| ≤ (1 + 2/ε_n)^d ≤ (3 / ε_n)^d where the latter inequality assumes ε_n ≤ 1, which we enforce. Denote the α-ball centered at as _α(), and let {_i}_i ∈ [n] be independent and uniformly random points on ^d-1. Then min_∈Σ_n*_ε_n() ∩_[n]≤(n)≤*3/ε_n^dexp*-1/4(n). where (n) ≜n/8*ε_n/2^d-1. Take any fixed ∈Σ_n, and consider the random points that lie within ε_n of . By Lemma <ref>, _i ∈_ε_n()≥1/4sin^d-1Φ where, using the geometry of the problem and recalling that ε_n ≤ 1, sinΦ = ε_n √(1 - ε_n^2 / 4)≥ε_n / 2. Thus we can write _i ∈_ε_n()≥1/4*ε_n/2^d-1 = 2/n(n). Using a multiplicative Chernoff bound with ratio 1/2 and union bounding over all ∈Σ_n, we have the desired result. Note that enforcing that 1 ≪ n ε_n^d-1 suffices to make the bound nontrivial, i.e. (n) grows with n. Observations. Conditioning on the dense coverage of the latent feature space by users and items, we now argue that all pairs of user and item clusters – as defined by Σ_n, rather than the central users and items – that are reasonably close will generate a guaranteed number of observations. Condition on the event described by Lemma <ref>. Consider _i, _j ∈Σ_n for which _i - _j≤√(2) - 2 ε_n, where we assume n is sufficiently large so that ε_n < 1 / √(2). Then the set of observations between these user and item clusters is denoted ^Σ_n_ij = {y_kl: a_kl = 1, _k ∈_ε_n(_i), _l ∈_ε_n(_j)}. Then, with high probability, it holds for all such pairs (_i, _j) that *^Σ_n_ij≥(n) where (n) ≜1/4ρ_n ^2(n). First consider any _k ∈_ε_n(_i), _l ∈_ε_n(_j) where _i - _j≤√(2) - 2 ε_n. Then _k - _l≤_i - _j + _k - _i + _l - _j≤√(2). Now, recall p_kl = ρ_n/2 (_k^⊤_l + 1) = ρ_n/2*2 - 1/2_k - _l^2 ≥ρ_n / 2. From the event of Lemma <ref>, we know that there are at least (n) users and items in the balls around _i and _j respectively, all pairs of which satisfy the above inequality. Using a Chernoff bound and union bounding over all pairs of points in Σ_n, we get min_i, j ∈ [|Σ_n|]*^Σ_n_ij≤(n)≤*3/ε_n^2dexp*-1/4(n) as desired. For appropriate choices of ε_n and thus (n) ≫ 1, the bound becomes nontrivial. Denoising. Here we argue that every revealed entry of the unfilled clustered outcome matrix is an accurate estimate of its corresponding entry in . Condition on the events of Lemma <ref> and <ref>. Further suppose that the hyperparameter N_n is chosen to be (n), and n is large enough such that (n) ≤ε_n. Then, with probability at least 1 - δ, it holds for every revealed entry (i, j) of that *h̅_ij - h_ij≤(n) where (n) ≜ 14 L B_h ε_n + √(σ^2/2 (n)log2n^2/δ). We can express the difference between h̅_ij and h_ij for any i ∈^⋆, j ∈^⋆ as h̅_ij - h_ij = 1/|_ij|∑_(k, l) ∈_ij(f(_k, _l) - f(_i, _j))_Δ_ij + 1/|_ij|∑_(k, l) ∈_ijξ_kl_ν_ij. Recall that an entry of is considered revealed if _ij≥ N_n = (n). Thus, ν_ij is the mean of at least (n) independent subgaussian random variables of variance proxy σ^2, and by union bounding Lemma <ref> over all n^2 user-item pairs we have max_(i, j): h̅_ij≠⋆ν_ij≥√(σ^2/2 (n)log2n^2/δ)≤δ. Moreover, by Lemma <ref>, all the estimated user-user and item-item distances have error bounded by ε_n for sufficiently large n. Since for any k ∈_i it must be true that _ik≤ 6ε_n by construction (otherwise user k itself would have been a central user), we know the true distance is also small, i.e. _i - _k≤_ik + (n) ≤ 7ε_n. The same holds for _j - _l for any l ∈_j. Thus, by the Lipschitz property of f, every (k, l) ∈_ij satisfies *f(_k, _l) - f(_i, _j)≤ 14 L B_h ε_n. Combining the two bounds confirms that, conditioned on Lemma <ref> and <ref>, with probability at least 1 - δ it holds for every revealed entry h̅_ij *h̅_ij - h_ij≤(n). §.§ High probability events Consider the following event definitions, parameterized by some probability δ > 0: * : For all pairs i ≠ j, *_ij - _ij and *_ij - _ij are bounded by (n). * : For every ∈Σ_n, there are at least (n) units _i such that _i - ≤ε_n and items _j such that _j - ≤ε_n. * : For sufficiently large n, it holds for every pair (_i, _j) from Σ_n for which _i - _j≤√(2) - 2 ε_n that *^Σ_n_ij≥(n) – supposing that ε_n < 1 / √(2). * : For every revealed entry (i, j) of , *h̅_ij - h_ij≤(n). Now, we will argue that the union of all above events occur with high probability. For sufficiently large n and appropriate choices of ε_n such that (n) ≫ 1 and (n) ≫ 1, the high probability events listed above occur jointly with high probability, i.e. ∩∩∩≥ 1 - C'δ. Union bounding over the complement of all the high probability events gives ∩∩∩ = 1 - ^c ∪^c ∪^c ∪^c ≥ 1 - ^c - ^c - ^c - ^c We have from lemma <ref> that ^c ≤ Cδ Moreover, as the bound given by lemma <ref> for is exponentially decaying in n for an appropriate choice of ε_n, for sufficiently large n it holds that ^c≤δ The remaining lemmas pertaining to the high probability events give us bounds on the conditional probability of events. Lemma <ref> gives us a conditional bound on the probability that event _obs occurs given _cov. Combining its result with the law of total probability gives _obs^c≤_cov^c + _obs^c |_cov≤ 2δ Furthermore, the same logic applied to Lemma <ref> gives _noise^c≤_dist^c_obs^c + _dist^c + _obs^c + _noise^c |_dist, _obs≤ (C + 3)δ. Thus, we have ∩∩∩≥ 1 - C'δ. §.§ Proof of main result Conditioned on all the high probability events occurring, the remaining analysis of our algorithm is deterministic. Sufficient observations in close clusters. Consider any central user i ∈^⋆ and central item j ∈^⋆ such that _i - _j≤√(2) - 4ε_n, i.e. they are “not too far away”. We now argue that _ij≥(n) must hold. First, consider two elements _i', _j'∈Σ_n that are within ε_n of _i and _j respectively. They are guaranteed to exist since Σ_n is an ε_n-net. All users within ε_n of _i' are then within 2ε_n of _i (analogously for _j' and _j). For large enough n so that the distance estimation error (n) is at most ε_n, the estimated distances are accurate enough such that any such user _k satisfies _ik≤ 3ε_n. Since any other central user is at least 6ε_n in estimated distance from _i by construction, it must be that user k is allocated to the cluster of central user i. In other words, all users (items) within ε_n of _i' (_j') are allocated to user i's (item j's) cluster. Now, _i' - _j'≤√(2) - 4ε_n + 2ε_n = √(2) - 2 ε_n so by Lemma <ref> we have _ij≥*^Σ_n_ij≥(n). The bipartite graph is connected. We will now use the above result to argue that the bipartite clustered observation pattern graph is connected. Specifically, we will show by geometric construction that any arbitrary pair of central user and item are connected in through a path of constant length. Consider any i ∈^⋆, j ∈^⋆ and the geodesic between _i and _j on ^d-1 which is the shortest arc on the unit sphere connecting _i and _j. This line segment is of length at most π. Pick an even number of points along this line, with consecutive points being distance C apart (ignoring the endpoints) where C < √(2) is a positive absolute constant. Let us denote this ordered set of points as = {_i, _1, …, _2ℓ, _j}. The key here is to “snap” each of {_i}_i ∈ [2ℓ] to nearby central users (for even i) and items (for odd i) in an alternating manner. Towards this, consider the following chain of reasoning for any _i (without loss of generality, let i be even): * There exists ∈Σ_n such that - _i≤ε_n, since Σ_n is an ε_n-net; * There exists a user _q such that _q - ≤ε_n, by the coverage condition guaranteed by Lemma <ref>; * Either _q is a central user, or there is a central user within 6ε_n of _q in estimated distance, i.e. within 7ε_n in true distance, by the clustering construction. Therefore, there exists a central user (or item) within 9 ε_n of any _i. These form the path = (_i, _k_1, _k_2, ..., _k_2ℓ, _j). Any adjacent user-item pair in this path, say (_k_q, _k_q+1), satisfy _k_q - _k_q+1≤ C + 18 ε_n. The result from the previous section applies if C + 18 ε_n ≤√(2) - 4 ε_n or ε_n < *√(2) - C/22, which is an absolute constant. Thus, for sufficiently large n, the result applies and there are at least (n) observations generated between the clusters of _k_q and _k_q+1. In other words, h̅_k_q k_q+1 must be revealed, corresponding to an edge in . As such, indeed necessarily defines a path from user cluster i to item cluster j in , and since i ∈^⋆ and j ∈^⋆ are arbitrary, is connected. Moreover, the length of is upper bounded by a constant of the form π / C. Imputation error. We now use the connectedness of to give a bound on the error of ĥ_ij for unobserved entries (i, j) in . Consider any i ∈^⋆, j ∈^⋆ for which h̅_ij = ⋆. As is connected, we know that there must be a path = (_i_1, _j_1,…,_i_ℓ, _j_ℓ) from i = i_1 to j = j_ℓ. By definition of , this means that h̅_i_kj_k, h̅_i_k, j_k - 1 must be revealed in for all k ∈ [ℓ]. Moreover, ℓ is upper bounded by an absolute constant. We can then produce the estimate ĥ_ij = h̅_i_1j_1∏_j = 2^ℓh̅_i_kj_k/h̅_i_kj_k - 1 Given the occurrence of , we know that h̅_i_kj_k≤ h_i_kj_k + (n), implying that we can upper bound the ratio between h̅_i_kj_k and h_i_kj_k h̅_i_kj_k/h_i_kj_k≤ 1 + (n)/h_i_kj_k≤ 1 + (n)/B_ℓ; Similarly, we can lower bound this ratio as h̅_i_kj_k/h_i_kj_k≥ 1 - (n)/h_i_kj_k≥ 1 - (n)/B_ℓ. Then, any individual term in the expression for h̅_ij satisfies the multiplicative error bound 1 - (n)/B_ℓ/1 + (n)/B_ℓ≤h̅_i_kj_k/h̅_i_kj_k - 1≤1 + (n)/B_ℓ/1 - (n)/B_ℓ. Moreover, as the path length ℓ is upper bounded by an absolute constant, call it C', we have max{ĥ_ij/h_ij, h_ij/ĥ_ij}≤*1 + (n)/B_ℓ/1 - (n)/B_ℓ^C'. For sufficiently large n such that (n)≤ B_ℓ / 2, this can be bounded as max{ĥ_ij/h_ij, h_ij/ĥ_ij} ≤*1 + 2(n)/B_ℓ^C' ≤exp*2 C' (n)/B_ℓ. Now suppose ρ_n = n^-β and ε_n = n^-α. We will pick α as a function of β to produce a good bound. Recall the definition of (n) from Lemmas <ref>, <ref>, and <ref>, (n) = 14 L B_h ε_n + √(σ^2/2 (n)log2n^2/δ) = 14 L B_h ε_n + √(2σ^2/ρ_n ^2(n)log2n^2/δ) = 14 L B_h ε_n + √(128 · 2^2d - 2σ^2/ρ_n n^2 ε_n^2d-2log2n^2/δ) = 14 L B_h n^-α + √(128 · 2^2d - 2σ^2/n^2 + 2α - 2 α d - βlog2n^2/δ). There are two terms in (n), of the forms n^-α and n^-(1 - β/2 - (d - 1)α) respectively, which depend on α in contradicting ways. Our goal is to “balance” the rates as much as possible, which suggests the choice α = (1 - β/2)/d. However, we must note that in order to satisfy the assumption (n) ∼ n^-1/2 + 2β≪ε_n to force the errors in estimated distances to decay faster than the resolution required for clustering, we further need the inequality α < 1/2 - 2β to hold. We find that for the case of β < Γ(d) ≜d - 2/4d - 1, the choice α = (1 - β/2)/d indeed satisfies the condition α < 1/2 - 2 β, resulting in the rate 2 C' (n)/B_ℓ≲ n^-α√(log2n^2/δ) which converges to zero with n. Recalling (<ref>) and noting the fact that e^x ≤ 1 + Cx for small x, we have then concluded that with probability at least 1 - δ, for reasonably large n and β < Γ(d), it holds for every (i, j) ∈^⋆×^⋆ that max{ĥ_ij/h_ij, h_ij/ĥ_ij}≤ 1 + Õ(n^-α), or, considering the fact that f is bounded on [B_ℓ^2, B_h^2], *ĥ_ij - h_ij≤Õ(n^-α). What remains is to relate this to our ultimate quantity of interest *x̂_ij - x_ij. This is simple as any user (item) is assigned to a central user (item) within distance O(ε_n), so the approximation error is also of order ε_n = n^-α by the Lipschitz property of f. It immediately follows from our choice of α that max_i, j ∈ [n] *x̂_ij - x_ij≤Õ(n^-1 - β/2/d). § HELPER LEMMAS Let the surface area of the unit d-sphere be A_d. Consider the corresponding hyperspherical cap subtended by the angle Φ∈ (0, π/2), and let A_d^cap be its surface area. It holds that A_d^cap/A_d ≥1/4sin^d-1Φ. Using a result from <cit.>, we write A_d^cap/A_d = 1/2 I *sin^2Φ; d-1/2, 1/2, where I(x; a, b) is known as the regularized incomplete beta function and defined by I(x; a, b) = B(x; a, b)/B(a, b) where B(a, b) is the beta function and B(x; a, b) = ∫_0^x t^a-1 (1-t)^b-1 dt is the incomplete beta function. Then B*sin^2Φ; d-1/2, 1/2 = ∫_0^sin^2Φ t^d-3/2 (1 - t)^-1/2 dt ≥∫_0^sin^2Φ t^d-3/2 dt = 2/d-1sin^d-1Φ. Meanwhile, a simple but tight bound <cit.> for the denominator is B*d-1/2, 1/2≤4/d - 1. Combining the bounds yields A_d^cap/A_d ≥1/4sin^d-1Φ. Let X_1,…,X_n be a sequence of independent mean-zero random variables with X_i ∼(σ_i^2). Then, for any t > 0, it holds that *1/n∑_i=1^n X_i > t≤ 2 exp*-n^2 t^2/2 ∑_i=1^n σ_i^2. Let = + ∈^m × n with () = d. Let _d _d _d^⊤ and _d _d _d^⊤ denote the top d singular components of the SVD of and respectively. Then, the truncated SVD estimator = _d _d _d^⊤ is such that for all j ∈ [m], _j - _j^2 ≤ 2 ·*^2/σ_d^2()*_j^2 + _j^2 + _d _d^⊤_j^2. First note that = _d _d^⊤ by definition of the truncated SVD. Thus we can write _j - _j = (_d _d^⊤_j - _d _d^⊤_j) + (_d _d^⊤_j - _j) = _d _d^⊤_j + (_d _d^⊤ - _n) _j. Since the two terms belong to orthogonal subspaces, we have _j - _j^2 = _d _d^⊤_j^2 + (_d _d^⊤ - _n) _j^2. The first term of (<ref>) can be expanded by the triangle inequality as _d _d^⊤_j^2 ≤ 2 ·*_d _d^⊤_j - _d _d^⊤_j^2 + _d _d^⊤_j^2 ≤ 2 ·*_d _d^⊤ - _d _d^⊤^2 _j^2 + _d _d^⊤_j^2. Since () = d, the Wedin sinΘ Theorem <cit.> guarantees that _d _d^⊤ - _d _d^⊤≤/σ_d(). Combining (<ref>) and (<ref>) yields _d _d^⊤_j^2 ≤ 2 ·*^2/σ_d^2()_j^2 + _d _d^⊤_j^2. Now we return to the second term of (<ref>). By definition, = _d _d^⊤. Therefore, using (<ref>) once again, we get _d _d^⊤_j - _j^2 = (_d _d^⊤ - _d _d^⊤) _j^2 ≤_d _d^⊤ - _d _d^⊤^2 _j^2 ≤^2/σ_d^2()_j^2. Combining (<ref>), (<ref>), and (<ref>) we obtain the desired bound. Let ∈^n × d, where d < n, have rows sampled i.i.d. from an isotropic distribution, i.e. [_i _i^⊤] = _d. Then for every t ≥ 0, with probability at least 1 - 2 exp(-ct^2) the singular values of satisfy √(n) - C √(d) - t ≤σ_min() ≤σ_max() ≤√(n) + C √(d) + t, where C, c > 0 are constants that only depend on the sub-gaussian norm of _i, i.e. _i_ψ_2. Let = [x_ij] ∈^m × n be a random matrix whose entries are independent mean-zero sub-gaussian random variables with bounded sub-gaussian norm x_ij_ψ_2≤ 1. Then for any t ≥ 0, with probability at least 1 - 2exp(-ct^2) it holds that ≤ C(√(m) + √(n)) + t, where C, c > 0 are absolute constants. Let ∈^n be a mean-zero σ^2-sub-gaussian random vector, i.e. for any fixed ∈^n - 1 the random variable ^⊤ is σ^2-sub-gaussian, or [exp(s ^⊤)] ≤exp*s^2 σ^2/2. Then, with probability at least 1 - 2exp(-t^2/2nσ^2), it holds that ≤ t. Since is σ^2-sub-gaussian, it is (nσ^2)-norm-sub-gaussian as defined <cit.>. Using Definition 3 and Lemma 1 of <cit.>, the result immediately follows.
http://arxiv.org/abs/2306.11908v1
20230620214535
Accelerating Generalized Random Forests with Fixed-Point Trees
[ "David Fleischer", "David A. Stephens", "Archer Yang" ]
stat.ML
[ "stat.ML", "cs.LG", "stat.ME" ]
Normality of k-Matching Polytopes of Bipartite Graphs [ ===================================================== Generalized random forests <cit.> build upon the well-established success of conventional forests <cit.> to offer a flexible and powerful non-parametric method for estimating local solutions of heterogeneous estimating equations. Estimators are constructed by leveraging random forests as an adaptive kernel weighting algorithm and implemented through a gradient-based tree-growing procedure. By expressing this gradient-based approximation as being induced from a single Newton-Raphson root-finding iteration, and drawing upon the connection between estimating equations and fixed-point problems <cit.>, we propose a new tree-growing rule for generalized random forests induced from a fixed-point iteration type of approximation, enabling gradient-free optimization and yielding substantial time savings for tasks involving even modest dimensionality of the target quantity (e.g. multiple/multi-level treatment effects). We develop an asymptotic theory for estimators obtained from forests whose trees are grown through the fixed-point splitting rule, and provide numerical simulations demonstrating that the estimators obtained from such forests are comparable to those obtained from the more costly gradient-based rule. § INTRODUCTION Random forests <cit.> are a popular method for non-parametric learning, conventionally presented in the context of regression/classification of a response given a set of predictors. Given samples (X_i, Y_i) ∈× of a p-dimensional predictor X_i = (X_i1,...,X_ip)^T and response Y_i, random forests are used to produce estimates of the local mean function μ^*(x) := [Y|X=x] by aggregating the individual predictions made by an ensemble of decision trees. Regression trees fit the response at a given predictor level as the mean training response of all samples appearing in the same partition of the predictor space, while the partition structure is determined through recursive application of a loss-minimizing splitting rule. <cit.>. However, despite offering a powerful method for conditional mean estimation, this procedure fundamentally relies on having access to the response in order to grow the forest of trees. Indeed, there are many statistically-interesting quantities which are not typically observed as part of a sample (e.g. treatment effects), making them unamenable towards direct loss minimization. The approach taken by <cit.> is to use forest-based estimation as the first step in a two-part procedure: Casting random forests as an adaptive kernel method, followed by using the induced weights to solve a set of estimating equations. In the absence of any feasible loss minimization, such generalized random forests (GRF) exploit much of the success of conventional forests towards estimating any quantity identifiable as a solution to local moment conditions. A distinguishing trait of GRF can be found in its use of a tree-growing scheme which targets heterogeneity-maximization in place of loss-minimization, recursively seeking partitions such as to maximize cross-split fitted heterogeneity rather than minimizing within-split fitted loss. Furthermore, <cit.> show that the heterogeneity-maximizing splitting rule is equivalent to a conventional regression tree split over pseudo-outcomes computed as a gradient-based approximation of the regional fits. The problem-specific pseudo-outcomes are re-computed for all non-terminal parent nodes within each tree of the forest, making this step of the GRF procedure potentially onerous when the dimension of the target quantity is large, e.g. multivariate treatment effects. Our goal is to offer an acceleration of the GRF tree-growing procedure by using pseudo-outcomes obtained as the result of a fixed-point approximation rather than the original gradient-based approximation. We present a general framework for the substitution of the fixed-point rule in place of the gradient-based rule, targeted towards problems for which dimension of the underlying quantity is large, and show the equivalence of the two methods in the one-dimensional setting. The primary application of the fixed-point method presented herein focuses on heterogeneous treatment effect estimation for multi-leveled treatment assignment and multivariate continuous treatments. § GENERALIZED RANDOM FORESTS §.§ Forest-Based Estimation of Heterogeneous Estimating Equations Let (X_i, O_i) ∈× denote a set of samples and suppose that θ^*(x) is any quantity identifiable by local moment conditions on an estimating function ψ_θ^*(x),ν^*(x)(·) of the form 0 = _O|X[ψ_θ^*(x),ν^*(x)(O_i)|X_i=x], for ν^*(x) denoting any optional nuisance quantities. Classically, we might think of the O_i as the outcomes associated with predictors X_i, i.e. O_i = Y_i as in the case of conditional mean estimation. However, in general, we permit the observable quantities O_i to contain more structured information, e.g. O_i = {Y_i, W_i} corresponding to an outcome Y_i and treatment W_i, or O_i = {Y_i, W_i, Z_i} for an outcome, treatment, and instrument Z_i. There is a rich body of literature studying solutions to heterogeneous estimating equations of the form (<ref>), particularly in the context of local maximum likelihood estimation (see <cit.> for a review) owing in part to the observation that, for many choices of the ψ-function, one can derive consistent non-parametric covariance estimates via the sandwich estimator <cit.>. A common theme across this scholarship is the use of a kernel function to assign greater weight to nearby samples when computing the local estimates θ̂(x), e.g. for kernel weights α_i(x), estimates of local mean of the form μ̂(x) = ∑α_i(x) Y_i. For quantities beyond conditional means, and following the two-step estimation procedure of <cit.>, we consider estimators obtained as the result of a weighted empirical version of (<ref>), (θ̂(x),ν̂(x)) ∈_θ,ν{∑^n_i=1α_i(x) ψ_θ,ν(O_i)}. Using samples (X_i, O_i) to solve local moment conditions of the form (<ref>) is a sufficiently general framework to characterize many interesting statistical problems. Nevertheless, it differs from using (X_i, Y_i) to produce least-squares estimates of a conditional mean function in at least one important way: Without access to observed values of θ^*(X_i), conventional random forests can neither produce tree-wise estimates using the regional mean θ^*(X_i) values, nor grow trees in the first place via minimization of loss function L(θ^*(X_i),θ̂_i). The generalized random forests of <cit.> address these problems separately. First, the problem of infeasible loss minimization is confronted by GRF through the use of a tree-growing procedure whose recursive partitioning scheme is instead designed to maximize training heterogeneity across the possible splits of a given parent node. Second, GRF is able to forgo traditional tree-wise estimation by instead using forests as the first step of the two-step method of <cit.>, exploiting the neighbourhoods produced by the ensemble of trees in order to cast forests an adaptive kernel method. The GRF algorithm instead produces its estimates via solutions of the form (<ref>) using kernel weights in the spirit the previously mentioned work on local maxmimum likelihood estimation. At a high level, GRF uses forests to determine its data-adaptive weights α_i(x) by averaging the partition structure across a set of independently grown trees. Given a forest of B trees, let L_b(x) denote the set of training samples appearing in the same terminal node as some x ∈. The induced weights α_i(x) are defined such as to measure the relative frequency that training sample X_i appears alongside x throughout the forest, α_i(x) := 1/B∑^B_b=1α_bi(x), for α_bi(x) := 1({X_i ∈ L_b(x)})/|L_b(x)|. We emphasize that this application of random forests differs quite substantially from their conventional interpretation. Although GRF preserves the principles of a recursively partitioned the predictor space, trees grown via observation subsampling, and randomized feature selection, we no longer interpret GRF estimators as an average of trees since GRF confers final estimation to solutions of equations of the form (<ref>). §.§ Partitions Maximizing Fitted Heterogeneity Given a set P containing samples X_i ∈ P ⊂, we seek a binary, axis-aligned partition yielding the greatest improvement in fit for the corresponding estimates of θ^*(X_i). We refer to P as the parent node for which we are presently seek the optimal split, and C_1, C_2 the candidate child nodes obtained as the result of some split of P. Similar to conventional forests, we define regional estimates of (θ^*(x),ν^*(x)) over the samples in a node N as being constant-valued over all x ∈ N. However, rather than defining the regional estimates (θ̂_N,ν̂_N) as the (infeasible) mean θ^*(X_i) over all X_i ∈ N, we define (θ̂_N,ν̂_N) as the solution to the empirical estimating equation over the samples in N, (θ̂_N,ν̂_N) ∈_θ,ν{∑_{i:X_i∈ N}ψ_θ,ν(O_i) }. In principle, we would want to find child nodes C_1, C_2 of P minimizing the squared error of with respect to the candidate child estimates θ̂_C_1,θ̂_C_2, err(C_1,C_2) := ∑_j=1,2(X ∈ C_j|X ∈ P) [(θ̂_C_j - θ^*(X))^2|X∈ C_j]. However, we are unable to minimize err(C_1,C_2) as we only require θ^*(x) to be identifiable via moment equations of the form (<ref>) and not necessarily through loss-function estimates of a least-squares criterion. To address this, <cit.> put forward the following criterion measuring the heterogeneity of a candidate split, Δ(C_1, C_2) := |C_1||C_2|/|P|^2(θ̂_C_1 - θ̂_C_2)^2, where each θ̂_C_j is found following (<ref>) over a candidate child node C_j. Proposition 1 of <cit.> establishes the asymptotic equivalence of partitions obtained as a result of minimizing the squared-error criterion (<ref>) and those obtained by maximizing the heterogeneity criterion (<ref>). §.§.§ The Gradient Tree Algorithm Maximizing the Δ-criterion (<ref>) requires computing θ̂_C_1,θ̂_C_2 via (<ref>) for all possible axis-aligned splits of parent node P. The GRF algorithm of <cit.> eschews such a burden by constructing an approximate criterion Δ(C_1,C_2) implicitly formed as the result of a gradient-based approximation for the θ̂_C_j. Given a candidate child node C of parent P, let θ̃_C denote the gradient-based approximation of θ̂_C found by taking a single gradient step away from the parent estimate θ̂_P, θ̃_C := θ̂_P -1/|C|∑_{i:X_i∈ C}ξ^T A_P^-1ψ_θ̂_P,ν̂_P(O_i), where ξ is a vector selecting the θ-components from a (θ,ν)-vector, and A_P is any consistent estimate for ∇[ψ_θ̂_P,ν̂_P(O_i)|X_i∈ P]. For example, when ψ is continuously differentiable in (θ,ν), A_P = 1/|P|∑_{i:X_i∈ P}∇ψ_θ̂_P,ν̂_P(O_i). The quantities θ̂_P and A_P^-1 remain constant over all possible splits of P, allowing them to be computed once when searching for the optimal split of P. Following an influence function heuristic, <cit.> use the above quantities to propose a less costly splitting procedure on pseudo-outcomes -ξ^T A_P^-1ψ_θ̂_P,ν̂_P(O_i), * (Labeling Step). Given parent estimates (θ̂_P,ν̂_P), compute pseudo-outcomes over P, ρ_i := -ξ^T A_P^-1ψ_θ̂_P,ν̂_P(O_i). * (Regression Step). Perform a single conventional regression tree split on the ρ_i ∈ P. An immediate consequence of the regression step is to obtain child nodes C_1,C_2 minimizing the total squared-error loss between the observed pseudo-outcomes and their sample means across the two child partition, 1/|C_1|∑_{i:X_i∈ C_1}(ρ_i - ρ̅_C_1)^2 + 1/|C_2|∑_{i:X_i∈ C_2}(ρ_i - ρ̅_C_2)^2, for ρ̅_C_j = 1/|C_j|∑_{i:X_i∈ C_j}ρ_i. Furthermore, one can show that the partitions obtained by minimizing (<ref>) are equivalent to those obtained by maximizing a criterion of the form 1/|C_1|(∑_{i:X_i∈ C_1}ρ_i)^2 + 1/|C_2|(∑_{i:X_i∈ C_2}ρ_i)^2 =:Δ(C_1,C_2). That is, the GRF algorithm grows its trees via recursive application of a splitting rule implicitly maximizing the Δ-criterion (<ref>) over the pseudo-outcomes (<ref>) within parent P. The use of this approximate splitting scheme is formalized by Proposition 2 of <cit.>, describing the asymptotic equivalence of the Δ and Δ criteria, and permitting the GRF algorithm to search for splits over the gradient-induced Δ-criterion (<ref>) as a surrogate for splits maximizing heterogeneity-measuring Δ-criterion (<ref>). § FIXED-POINT APPROXIMATION §.§ The Gradient Tree Algorithm as a Root-Finding Procedure Write U(θ):=∑_{i∈ C}ψ_θ,ν(O_i) so that we may express the gradient-based approximation (<ref>) in terms of a single Newton-Raphson root-finding iteration on U(θ) of the form θ^(t+1) = θ^(t) - ξ^T[∇ U(θ^(t))]^-1 U(θ^(t)), with initial value θ^(t) = θ̂_P, one-step update θ^(t+1) = θ̃_C, and where (<ref>) uses a consistent estimator A_P for the gradient matrix ∇ U(θ^(t)) = ∇[ψ_θ̂_P,ν̂_P(O_i)|X_i∈ P]. With this interpretation of θ̃_C as the result of a single root-finding iteration, we may be inclined to consider approximations for θ̂_C inspired by other root-finding algorithms. §.§ Estimating Equations and Fixed Point Problems Our algorithm is motivated by the analyses of <cit.>, discussing estimating equations and fixed-point problems in the parametric setting. Let U:^p→^p denote an empirical estimating function for β^* = (β_1^*,...β_p^*)^T through a finite-sample estimating equation of the form 0 = U(β). If a solution to (<ref>) exists, which we denote by β̂, then for any η > 0, U(β̂) = 0 β̂= β̂- η U(β̂). Setting f(β) := β - η U(β), we find that the search for β̂ solving (<ref>) can be expressed in terms of the equivalent fixed-point problem of searching for β̂ solving β̂= f(β̂). Although this characterization is presented in the parametric setting, we can nonetheless leverage it towards our search for non-parametric estimates of θ^*(x) through an expression of the form θ^(t+1) = f(θ^(t)), with f(θ) := θ - η U(θ), for some step size η > 0. §.§ The Fixed-Point Tree Algorithm By analogy with the Newton-Raphson interpretation for the gradient-based approximations of θ̂_C used by the GRF gradient tree algorithm, we express a novel approximation θ̃_C borne out of a single fixed-point iteration following (<ref>), θ̃_C := θ̂_P - ηξ^T ∑_{i:X_i∈ C}ψ_θ̂_P,ν̂_P(O_i), where η > 0 is some step size governing the fixed-point iterations, and, as before, (θ̂_P,ν̂_P) are the solutions of (<ref>) over the parent node and ξ is a vector selecting the θ-components of a (θ,ν)-vector. As is the case with the gradient tree algorithm, our method, which we refer to as the fixed-point tree algorithm, searches for splits optimizing an approximate criterion Δ in place of the original Δ-criterion. However, the fixed-point tree algorithm instead induces its criterion via approximations θ̃_C_j for θ̂_C_j constructed according to (<ref>). Algorithmically, a generalized random forest invoking fixed-point trees only differs from the those using the gradient trees in the labeling step of the tree-growing procedure: * (Labeling Step). Given parent estimates (θ̂_P,ν̂_P), compute pseudo-outcomes over P ρ_i := -ξ^T ψ_θ̂_P,ν̂_P(O_i). * (Regression Step). Perform a single conventional regression tree split on the ρ_i ∈ P, yielding partition {C_1,C_2} of P which maximize the criterion 1/|C_1|(∑_{i:X_i∈ C_1}ρ_i)^2 + 1/|C_2|(∑_{i:X_i∈ C_2}ρ_i)^2 =: Δ(C_1,C_2). We note that the step size parameter η found in (<ref>) is not present in the pseudo-outcome calculation, as one may expect by using the gradient tree pseudo-outcomes derivation as an analogy. However, if η remains fixed over the samples in P then the transformation of -ξ^Tψ_θ,ν(O_i) to -ηξ^T ψ_θ,ν(O_i) amounts to a rescaling of the data and would have no effect on the optimal regression-tree partition over the samples. Suppose that the assumptions of Propositions 1 and 2 of <cit.> hold, elaborated upon in Section <ref>. The induced fixed-point Δ-criterion (<ref>) and the heterogeneity-measuring Δ-criterion (<ref>) are asymptotically equivalent according to Δ(C_1,C_2) = Δ(C_1,C_2) + O_P(max{ r^2,1/|C_1|,1/|C_2|}), where r > 0 denotes the radius of the parent node P. We note that the rate (<ref>) is precisely the same as that guaranteed for the gradient tree algorithm (Proposition 2 of <cit.>). Indeed, since our proposal preserves the remainder of the GRF algorithm, we argue that all other results are left unaffected by the use of fixed-point trees. §.§.§ Example: Conditional Mean Estimation The conditional mean function θ^*(x) = [Y|X=x] can be identified through moment condition (<ref>) via estimating function ψ_θ^*(x)(y) = y - θ^*(x). We obtain fixed-point-based pseudo-outcomes ρ_i = -(Y_i - θ̂_P), and parent estimates θ̂_P given by the solution to (<ref>) over P, 0 = ∑_{i:X_i∈ P}ψ_θ̂_P(Y_i) = ∑_{i:X_i∈ P}(Y_i - θ̂_P) θ̂_P = 1/|P|∑_{i:X_i∈ P} Y_i =: Y_P, and hence, ρ_i = -(Y_i - Y_P). As splits are invariant under constant rescalings, we rewrite the fixed-point pseudo-outcomes as ρ_i = Y_i - Y_P, observing that this form is identical to the corresponding gradient tree pseudo-outcomes. Whenever θ^*(·) is a function from the p-dimensional -space to a one-dimensional parameter space θ^*:→ (e.g. conditional mean estimation, quantile regression) the gradient tree pseudo-outcomes (<ref>) simplify to the fixed-point tree pseudo-outcomes (<ref>) as the (1× 1)-dimensional A_P matrix implies that A_P^-1ψ_θ̂_P,ν̂_P(O_i) is simply a constant rescaling of the fixed-point pseudo-outcomes across the samples in P. §.§.§ Example: Heterogeneous Treatment Effects Let W_i ∈{0,1}^K denote a set of mutually exclusive indicators denoting assignment to one of K distinct treatment levels, Y_i(k) the potential outcome associated with the i-th sample had it received treatment level k ∈{1,...,K}, and X_i a set of auxiliary covariates for which we believe will plausibly account for confounding. Let τ^*_k(x) denote the conditional average treatment effect of receiving treatment level k ∈{2,...,K} over baseline treatment level 1, where, under exogeneity of the causal effects, we have τ^*_k(x) = [Y_i(k) - Y_i(1)|X_i=x]. We set τ^*_1(x) = 0 and denote τ^*(x) := (τ^*_1(x),...,τ^*_K(x))^T, and propose the following outcome model for Y_i Y_i = μ^*(X_i) + W_i ·τ^*(x) + ϵ_i, where we regard the intercept term μ^* as a nuisance quantity. The contrasts τ^*(x) can be identified using moment condition (<ref>) via the estimating function <cit.> ψ_τ^*(x),μ^*(x)(Y_i,W_i) := [ Y_i - W_i ·τ^*(x) - μ^*(x); (Y_i - W_i ·τ^*(x) - μ^*(x)) W_i ], which, following (<ref>), admits fixed-point pseudo-outcomes of the form ρ_i = (W_i - W_P) (Y_i - Y_P - (W_i - W_P)τ̂_P), where W_P and Y_P are the mean W_i and Y_i over the samples in P, and τ̂_P is the OLS solution to the regression of the Y_i - Y_P on W_i - W_P. Compare (<ref>) with the gradient-based pseudo-outcomes derived in Section 6 of <cit.> A_P^-1 (W_i - W_P) (Y_i - Y_P - (W_i - W_P)τ̂_P), for A_P = 1/|P|∑_{i:X_i∈ P} (W_i - W_P)(W_i - W_P)^T. Despite only computing A_P^-1 once while searching for the optimal partition of P, the matrix alongside the products A_P^-1ψ_θ̂_P,μ̂_P(Y_i,W_i) must nonetheless be found for every non-terminal partition across each tree of the forest, suggesting a tangible benefit through the use of the fixed-point tree algorithm. Computational Considerations. Both the fixed-point pseudo-outcomes (<ref>) and their gradient-based counterparts (<ref>) require the OLS coefficients τ̂_P = A_P^-1W^T Y for W∈^|P|× K the matrix with rows W_i - W_P and Y∈^|P|× 1 the matrix with rows Y_i - Y_P. In principle, one may compute A_P^-1 to obtain the fixed-point pseudo-outcomes in spite of the fact that the matrix is absent in the general formulation (<ref>). However, as the fixed-point expression only makes use of the A_P^-1 matrix through τ̂_P, we are free to obtain τ̂_P in ways which may not explicitly require or yield A_P^-1. Furthermore, the estimates θ̃_C obtained by the tree-growing procedure are only used to determine the weight-inducing partition structure, and are not (directly) used as part of the final GRF estimator, suggesting that approximations for τ̂_P may be sufficient. In particular, we propose a further acceleration for heterogeneous treatment effect estimation by settling for an approximate solution to the OLS coefficients τ̂_P: Rather than computing the exact τ̂_P, use an approximation given by a single gradient descent step away from the origin, using the exact line search step size. § SIMULATIONS §.§ Heterogeneous Treatment Effect Estimation We present empirical tests evaluating the performance of the fixed-point tree's ability to estimate heterogeneous treatment effects. Following the observations made in Section <ref>, we compare the fixed-point tree algorithm to the existing gradient tree implementation using 1) a Cholesky-based routine to calculate the regression coefficients τ̂_P, and 2) a routine using a one-step approximation for τ̂_P. The methods were implemented in a modified version of the package <cit.> in order for ensure that timing results be maximally comparable. In the figures and tables below, we use to abbreviate the GRF estimates obtained using the gradient tree algorithm, the GRF estimates obtained using the fixed-point tree algorithm using exact OLS coefficients τ̂_P via a Cholesky solver, and the GRF estimates obtained by the fixed-point tree algorithm using a single-step approximation for τ̂_P. §.§.§ Multi-Level Treatments Let W_i ∈{0,1}^K denote a set of K indicators corresponding to assignment to one of K different treatment levels k ∈{1,...,K}, where we treat level k = 1 as the baseline treatment. We draw X_i ∼ U([0,1]^p), W_i ∼Multinom(1,(π_1,...,π_K)) with uniform probabilities π_k = 1/K, noise ϵ_i ∼(0, 1), and generate outcomes according to Y_i = W_i ·τ^*(X_i) + ϵ_i. In all data-generating regimes we seek to provide estimates of τ^*(x) := (τ^*_1(x),...,τ^*_K(x)) and fit the forests via (or modifications thereof to implement the fixed-point methods). The true treatment effects were set as τ^*_1(x) := 0 for the baseline treatment, and, for treatments k ∈{2,...,K} we set τ^*_k(x) := β_k x_1 for fixed constants β_k ∈ so that the local treatment effects are linearly heterogeneous in the first auxiliary covariate. The constants β_k were randomly generated across each replication as β_k ∼(0, 1). Letting w_i ∈{1,...,K} denote the treatment level received by the i-th observation (corresponding to the non-zero entry of W_i) we can express our outcome model as Y_i = ∑^K_k=2β_k X_i1 1({w_i = k}) + ϵ_i, so that, for counterfactual outcomes Y_i(k), the true conditional average treatment effects are identified by the causal contrasts (relative to the baseline) [Y_i(k) - Y_i(1) |X_i=x] = β_k x_1 = τ^*_k(x). Single Tree Timings. The fixed-point proposal only differs from the existing gradient-based method in how individual trees are grown, and so we begin by presenting single tree timings to highlight the benefit of the fixed-point method. We reserve any evaluations of prediction accuracy for the full forest predictors as the individual tree learners are assumed to be quite noisy. For the same reason we made no effort to tune the model parameters beyond the default settings of the function, with exception to the subsampling and honest sample splitting parameters which were effectively disabled ( and ), as well as the centering arguments ( and ) which would otherwise cause unrelated overhead. Results over 500 replications of the simulated study are summarized in Figure <ref>. For both the exact and one-step approximate implementations of the fixed-point tree algorithm we find an appreciable decrease in fit times relative to the gradient-tree algorithm. A natural question to ask is whether this benefit is a consequence of a corresponding decrease in fitted tree complexity. We choose to measure tree complexity by the total number of splits, and display the relative split counts of the different methods in the lower panel of Figure <ref>. Forest Predictor Performance. To illustrate the accuracy of the two fixed-point methods we once again draw samples according to outcome model (<ref>) under the original unconfounded regime of uniform probabilities π_k = 1/K, as well as under confounding between the treatment and the auxiliary covariates. In the confounded setting we use class probabilities dependent on the covariate features π_k ≡π_k(x) according to π_k(x) = x_1, k = 1, 1 - x_1/K-1, k ∈{2,...,K}. All forests use B = 2000 trees with all other settings left as the default, with exception to the centering arguments and which were computed outside of to the causal forest function according to the usual implementation (via and ) so that identical values could be passed to each of the three methods. An example of the fits provided by this heterogeneous treatment effect model is given in Figure <ref>. Simulation results are described in Figure <ref> in terms of a normalized ℓ^2 loss for the treatment effect over a test set, i.e. estimates of [(τ^*(X) - τ̂(X))/K_2] for X ∼ U([0,1]^p) from a test set of 1000 samples, and where normalization by K^-1 was done for the sake of comparison across treatment level settings. Due to computational constraints we limit the simulations to 50 replications. Despite the improved fit times of both fixed-point tree implementations, neither method suffers from any discernible loss in prediction accuracy relative to the original gradient-based method. §.§.§ Continuous Treatments We generate samples following the same structural model (<ref>), where W_i ∈^K now denotes a set of K continuous treatment regressors. We draw X_i ∼( 0, 1_p), W_i ∼( 0, 1_K), and noise ϵ_i ∼(0, 1). We set the true local effects τ^*(x) := (τ^*_1(x),...τ^*_K(x)) of the K regressors as τ^*_k(x) := β_k x_1, so, as in Section <ref>, the effects are linearly heterogeneous in the first auxiliary covariate. All forests were fit via (or modifications thereof to implement the fixed-point method). Single Tree Timings. Results over 500 replications of the simulated study are summarized in Figure <ref>. We find the performance benefit gained through the use of the fixed-point algorithms to be even more dramatic than the previous case of W_i corresponding to discrete treatment assignment. For continuous W_i we observe considerably larger trees (for all methods) than trees obtained for binary W_i, and so we find a corresponding increase in the applications of the gradient tree/fixed-point tree splitting rules per tree. As before, the bottom panel in Figure <ref> verifies that the performance increase of fixed-point trees is not a consequence of fitting less complex trees, which, if anything, tend fit slightly more complex trees than the gradient tree algorithm. Forest Predictor Performance. Using the same the data-generating design as above, we repeat the forest test procedure used to assess predictor accuracy for multi-leveled treatments carried out in Section <ref>. Forests were fit using (or modifications thereof) with all settings left as the default, with exception to and which, as before, were computed outside of the main forest function in accordance with their default implementations (via for both and ). Simulation results are displayed in Figure <ref> and once show that the dramatic decrease in fit times found in Figure <ref> do not come at the cost of a corresponding decrease in prediction accuracy, relative to the gradient-tree algorithm. § THEORETICAL ANALYSIS The primary goal of this work is to offer a computationally frugal alternative to the splitting rule and pseudo-outcomes induced by the gradient-based approximations of θ̂_C. In doing so we mirror the notation & structure of the relevant analyses made by <cit.> to show that the approximations θ̃_C (<ref>) which induce the fixed-point splitting rule are asymptotically coupled to the original estimates θ̂_C which define the Δ(C_1, C_2) and err(C_1,C_2) splitting rules. Moreover, the consistency of the final GRF estimates θ̂(x) defined by (<ref>) for the true θ^*(x) is given by Theorem 3 of <cit.>. Any mechanism producing the kernel weights α_i(x) used to specify θ̂(x) will not affect consistency so long as they satisfy the assumptions imposed by Theorem 3 on the forest of weight-inducing trees. For this reason, our focus is directed towards results coupling θ̃_C with θ̂_C in order to establish the equivalence of trees grown by optimizing the Δ criterion and those grown via the Δ criterion. In particular, we prove fixed-point tree equivalents to Lemma 4 and Proposition 2 of <cit.> which are used to establish the large sample properties of the gradient tree algorithm. §.§ Notation & Assumptions * The predictor and parameter spaces are both subsets of Euclidean space, = [0,1]^p and (θ, ν) ∈⊂^k for p, k > 0, and a compact subset of ^k. In accordance with the arguments of <cit.>, assume that the covariate features X have density f bounded away from 0 and ∞, i.e. c ≤ f_X(x) ≤ C for some c > 0, C < ∞, for all x ∈. * The expected estimating function M_θ,ν(x) := _O|X[ψ_θ,ν(O_i)|X_i=x], is smooth in (θ,ν). * For fixed (θ,ν), the M-function (<ref>) is Lipschitz continuous in x. * For fixed x, the M-function is twice-differentiable in (θ,ν) with uniformly bounded second derivative, ∇^2_(θ,ν) M_θ,ν(x) < ∞, where · denotes the appropriate tensor norm for the second derivative of M_θ,ν taken with respect to (θ,ν). Moreover, assume V(x) := ∇_(θ,ν) M_θ,ν(x)|_θ=θ^*(x),ν=ν^*(x) is invertible for all x ∈. * The score functions ψ_θ,ν(O_i) have a continuous covariance structure in the following sense: For γ(·,·) denoting the worst-case variogram, γ( [ θ_1; ν_1 ],[ θ_2; ν_2 ]) := sup_x∈{_O|X( ψ_θ_1,ν_1(O_i) - ψ_θ_2,ν_2(O_i) | X_i = x)_F }, then, for some L > 0, γ( [ θ_1; ν_1 ],[ θ_2; ν_2 ]) ≤ L ‖[ θ_1; ν_1 ] - [ θ_2; ν_2 ]‖_2, for all (θ_1,ν_1), (θ_2,ν_2). * The estimating ψ-function can be written as ψ_θ,ν(O_i) = λ(θ,ν;O_i) + ζ_θ,ν(g(O_i)), for λ a Lipschitz-continuous function in (θ,ν), g:{O_i}→ a univariate summary of the observables O_i, and ζ_θ:→ any family of monotone and bounded functions. * For any weights α_i, ∑α_i = 1, the minimizer (θ̂,ν̂) of the weighted empirical estimation problem (<ref>) is at least an approximate root of the objective function ∑^n_i=1α_i ψ_θ,ν(O_i), ‖∑^n_i =1α_i ψ_θ̂,ν̂(O_i) ‖_2 ≤ C max_1≤ i≤ n{α_i}, for C ≥ 0. We present the final assumption and model specification for completeness, which are only required to guarantee the consistency of the final GRF estimates θ̂(x) with the true θ^*(x), and not the coupling of fixed-point/gradient approximations θ̃_C with θ̂_C. * The estimating function ψ_θ,ν(O_i) is a negative sub-gradient of a convex function, and the expected estimating function M_θ,ν(X_i) is the negative gradient of a strongly convex function. * Individual tree predictors are symmetric, balanced, and randomized. The forest predictor is built with subsampled trees as well as honest sample splitting. We provide a summary of these notions below, but for a more detailed description see <cit.>. * (Symmetric). Tree predictions do not depend upon the order in which training samples are indexed. * (Balanced/ω-Regular). Each split puts at least a fraction ω > 0 of parent observations into each child node. * (Randomized/Random-split). The probability that a split is made on feature j is bound below by π > 0. * (Subsampled). Trees are trained using a subsample of size s drawn from the n training samples (without replacement), with s/n→ 0 and s→∞ as n→∞. * (Honest sample splitting). Tree growth and tree estimation carried out using mutually exclusive subsets of the subsampled data. For each tree, * randomly partition the subsampled training data into disjoint sets _1 and _2 of size |_1| = ⌊ s/2⌋ and |_2| = ⌈ s/2 ⌉, * determine the tree's splits using the data from _1, * set the fitted leaf-wise responses using the data from _2. In the context of computing GRF weights α_i(x), we interpret the final step as using the samples from _2 to determine the neighbourhood sets L_b(x), i.e. the set of training samples X_i ∈_2 appearing in the same leaf as test point x for tree b. §.§ Approximating Forests of Fixed-Point Trees with Regression Forests Let ρ^*_i(x) denote an infeasible version of the fixed-point pseudo-outcomes (<ref>), now defined using the true values (θ^*(x),ν^*(x)) rather than the parent estimates (θ̂_P,ν̂_P), ρ^*_i(x) := -ξ^T ψ_θ^*(x),ν^*(x)(O_i). For any forest weights α_i(x) used to specify the final GRF predictor θ̂(x) via (<ref>), define θ̃^*(x) := θ^*(x) + ∑^n_i=1α_i(x) ρ^*_i(x). By writing Y_i = θ^*(x) + ρ^*_i(x), <cit.> argue that θ̃^*(x) is precisely a regression forest predictor with weights α_i(x) and response Y_i. Moreover, given forest weights α_i(x), the GRF estimator μ̂(x) for the conditional mean function μ^*(x) is found as the solution to μ̂(x) ∈_μ{∑^n_i=1α_i(x) ψ_μ(Y_i) }μ̂(x) = ∑^n_i=1α_i(x) Y_i. where it is easy to show that ψ_μ(y) := y - μ is the estimating function identifying the true conditional mean at μ = μ^*(x) via moment equations (<ref>). In general, GRF estimators are not easily expressed as average of trees, as is the case for conventional forests. However, in the case of a GRF regression forest of the form (<ref>) we find μ̂(x) = ∑^n_i=1α_i(x) Y_i, = ∑^n_i=1(1/B∑^B_b=1 1({X_i∈ L_b(x)})/|L_b(x)|) Y_i, = 1/B∑^B_b=1∑^n_i=1 1({X_i∈ L_b(x)})/|L_b(x)|Y_i, = 1/B∑^B_b=1∑^n_i=1α_bi(x) Y_i, = 1/B∑^B_b=1μ̂_b(x), for μ̂_b(x) := ∑^n_i=1α_bi(x) Y_i. Therefore, any θ̃^*(x) defined as (<ref>) can be expressed as the result of a regression forest in terms of an average of B pseudo-tree predictors θ̃^*_b(x), θ̃^*(x) = 1/B∑^B_b=1θ̃^*_b(x), for θ̃^*_b(x) := ∑^n_i=1α_bi(x)(θ^*(x) + ρ^*_i(x)). Under Assumptions 1-7 and given a forest trained satisfying Specification 1, suppose that the final GRF estimator θ̂(x) is consistent for θ^*(x), guaranteed under the conditions of Theorem 3 of <cit.>. Then, θ̂(x) and the fixed-point-inspired θ̃^*(x) are asymptotically equivalent as √(n/s)(θ̃^*(x) - θ̂(x)) = O_P(max{(s/n)^1/6, s^-π/2log((1-ω)^-1)/log(ω^-1)}) § PROOFS §.§ Proof of Lemma <ref> Given any forest weights α_i(x), define Ψ(θ,ν) := ∑^n_i=1α_i(x) ψ_θ,ν(O_i). By definition of the fixed-point-based θ̃^*(x), [ θ̂(x) - θ̃^*(x); ν̂(x) - ν̃^*(x) ] = [ θ̂(x) - θ^*(x); ν̂(x) - ν^*(x) ] + ∑_i=1^n α_i(x) ψ_θ^*(x),ν^*(x)(O_i) , = [ θ̂(x) - θ^*(x); ν̂(x) - ν^*(x) ] + Ψ(θ^*(x),ν^*(x)) . This expression is analogous to that of the gradient-based approximation found in Lemma 4 of <cit.>, differing only in that the fixed-point expression no longer includes the V(x)^-1 matrix in the second term: V(x)^-1Ψ(θ^*(x),ν^*(x)). However, as the V(x)^-1 is well-behaved and non-random it does not affect the asymptotic behaviour coupling the θ̂(x) and θ̃^*(x) quantities. Therefore, by the same arguments in Lemma 4 of <cit.> we obtain the desired coupling between the fixed-point-based θ̃^*(x) and the final GRF predictor θ̂(x). §.§ Proof of Proposition <ref> We wish to couple the estimates θ̂_C_j fit over samples from C_j via (<ref>) with a fixed-point-based approximation θ̃_C_j expressed as the result of a single fixed-point iteration taken away from the parent estimate via (<ref>). To do so we will establish the equivalence of both quantities to an analogue of the infeasible fixed-point approximation (<ref>), taken over the samples in child C_j, evaluated at x = x_P the center of mass of parent P, and with uniform forest weights α_i(x_P) = 1/|C_j|. That is, we consider θ̃^*_C_j(x_P) = θ(x_P) + 1/|C_j|∑_{i:X_i∈ C_j}ρ^*_i(x_P), and let r := sup_{i:X_i∈ C_j}X_i - x_P denote the radius of leaf C_j. Bounding _O,X[θ̃^*_C_j(x_P) - θ^*(x_P)] = (r). We have _O,X[θ̃^*_C_j(x_P) - θ^*(x_P)] = _O,X[( θ^*(x_P) + 1/|C_j|∑_{i:X_i∈ C_j}ρ^*_i(x_P)) - θ^*(x_P)], = 1/|C_j|∑_{i:X_i∈ C_j}_O,X[ ρ^*_i(x_P)], = 1/|C_j|∑_{i:X_i∈ C_j}_O,X[ -ξ^Tψ_θ^*(x_P),ν^*(x_P)(O_i)], = -ξ^T_O,X[ ψ_θ^*(x_P),ν^*(x_P)(O_i)], = -ξ^T _X[ _O|X[ ψ_θ^*(x_P),ν^*(x_P)(O_i) | X_i = X_i ]], = -ξ^T _X [ M_θ^*(x_P),ν^*(x_P)(X_i)]. By the Lipschitz continuity of the M-function in x, for any r > 0 we have M_θ,ν(x + r) - M_θ,ν(x)≤ Lr. Moreover, by definition (<ref>), the M-function will vanish when it is evaluated at point x and the true (θ^*(x),ν^*(x)), also evaluated at the same point x, M_θ^*(x),ν^*(x)(x) = [ψ_θ^*(x),ν^*(x)(O_i)|X_i=x] = 0. Hence, Lr ≥M_θ^*(x_P),ν^*(x_P)(x) - M_θ^*(x_P),ν^*(x_P)(x_P), = M_θ^*(x_P),ν^*(x_P)(x) - 0, = M_θ^*(x_P),ν^*(x_P)(x). The radius r depends on the variability of the samples within parent P, and so, taking the expectation of the previous norm, _X[ M_θ^*(x_P),ν^*(x_P)(X_i) ] = (r). Hence, continuing from the initial expansion, _O,X[θ̃^*_C_j(x_P) - θ^*(x_P)] = -ξ^T _X[ M_θ^*(x_P),ν^*(x_P)(X_i)], = (r). Bounding (θ̃^*_C_j(x_P)) = (1/|C_j|). We have (θ̃_C_j^*(x_P)) = (θ(x_P) + 1/|C_j|∑_{i:X_i∈ C_j}ρ^*_i(x_P)), = (1/|C_j|∑_{i:X_i∈ C_j}ρ^*_i(x_P)), = (-1/|C_j|∑_{i:X_i∈ C_j}ξ^T ψ_θ^*(x_P),ν^*(x_P)(O_i) ), = ξ^T(∑_{i:X_i∈ C_j}1/|C_j|ψ_θ^*(x_P),ν^*(x_P)(O_i))ξ Once again, this argument only differs from the gradient-based counterpart in the proof of Proposition 2 of <cit.> through its lack of the V(x_P)^-1 matrix. In the case of the gradient-based approximation, the matrix is non-random and comes out of the variance as ξ^T V(x_P)^-1(⋯) (ξ^T V(x_P)^-1)^T, leaving us with the same quantity inside the variance as well as the same bound on both fixed-point and gradient approximations, (θ̃_C_j^*(x_P)) = (1/|C_j|). Coupling θ̃^*_C_j(x_P) and θ̂_C_j. We have θ̃^*_C_j(x_P) - θ̂_C_j = (θ^*(x_P) + 1/|C_j|∑_{i:X_i∈ C_j}ρ^*_i(x_P) ) - θ̂_C_j = (θ^*(x_P) - θ̂_C_j) + 1/|C_j|∑_{i:X_i ∈ C_j}ρ^*_i(x_P), = O_P(max{ r , 1/√(|C_j|)}), where the θ^*(x_P) - θ̂_C_j = O_P(r) follows from the consistency of θ̂(x) for θ^*(x) and |C_j|^-1∑_{i:X_i ∈ C_j}ρ^*_i(x_P) = O_P(1/√(|C_j|)) follows from the second moment bounds established above. Coupling θ̃^*_C_j(x_P) and θ̃_C_j. By definition of θ̃_C_j and θ̃^*_C_j(x_P), θ̃_C_j - θ̃^*_C_j(x_P) = (θ̂_P - η∑_{i:X_i ∈ C_j}ξ^T ψ_θ̂_P,ν̂_P(O_i)) - (θ^*(x_P) - 1/|C_j|∑_{i:X_i ∈ C_j}ξ^T ψ_θ^*(x_P),ν^*(x_P)(O_i) ), = θ̂_P - θ^*(x_P) - ηξ^T ∑_{i:X_i ∈ C_j}ψ_θ̂_P,ν̂_P(O_i) + 1/|C_j|ξ^T ∑_{i:X_i ∈ C_j}ψ_θ^*(x_P),ν^*(x_P)(O_i), = θ̂_P - θ^*(x_P) - ηξ^T∑_{i:X_i ∈ C_j}ψ_θ̂_P,ν̂_P(O_i) + 1/|C_j|ξ^T ∑_{i:X_i ∈ C_j}( ψ_θ^*(x_P),ν^*(x_P)(O_i) + ψ_θ̂_P,ν̂_P(O_i) - ψ_θ̂_P,ν̂_P(O_i) ) , = θ̂_P - θ^*(x_P) - 1/|C_j|ξ^T ∑_{i:X_i ∈ C_j}( ψ_θ̂_P,ν̂_P(O_i) - ψ_θ^*(x_P),ν^*(x_P)(O_i) )† - ξ^T (η - 1/|C_j|) ∑_{i:X_i ∈ C_j}ψ_θ̂_P,ν̂_P(O_i). Following the arguments used in the proof of Proposition 2 of <cit.> we find that (<ref>) is bound by O_P(r), once again recognizing that the above expression differs only from its gradient-based counterpart up to the non-random V(x_P)^-1 matrix. Meanwhile, since -ξ^T(η-1/|C_j|) is non-random we once again mirror the bound offered by <cit.> to bound (<ref>) by O_P(max{r, 1/√(|C_j|)}). Therefore, θ̃_C_j - θ̃^*_C_j(x_P) = θ̂_P - θ^*(x_P) + O_P(max{r, 1/√(|C_j|)}), and as θ̂(x_P) is itself O_P(r)-consistent for θ^*(x_P), it follows that θ̃_C_j - θ̃^*_C_j(x_P) = O_P(max{r, 1/√(|C_j|)}). This is precisely the form of the corresponding term found in <cit.>'s proof of Proposition 2 for the gradient-based approximation. The remainder of the argument remains unchanged and so follows the desired result.
http://arxiv.org/abs/2306.03207v1
20230605192834
H2-Mapping: Real-time Dense Mapping Using Hierarchical Hybrid Representation
[ "Chenxing Jiang", "Hanwen Zhang", "Peize Liu", "Zehuan Yu", "Hui Cheng", "Boyu Zhou", "Shaojie Shen" ]
cs.RO
[ "cs.RO" ]
A positivity-preserving unigrid method for elliptic PDEs [ July 31, 2023 ======================================================== empty empty Constructing a high-quality dense map in real-time is essential for robotics, AR/VR, and digital twins applications. As Neural Radiance Field (NeRF) greatly improves the mapping performance, in this paper, we propose a NeRF-based mapping method that enables higher-quality reconstruction and real-time capability even on edge computers. Specifically, we propose a novel hierarchical hybrid representation that leverages implicit multiresolution hash encoding aided by explicit octree SDF priors, describing the scene at different levels of detail. This representation allows for fast scene geometry initialization and makes scene geometry easier to learn. Besides, we present a coverage-maximizing keyframe selection strategy to address the forgetting issue and enhance mapping quality, particularly in marginal areas. To the best of our knowledge, our method is the first to achieve high-quality NeRF-based mapping on edge computers of handheld devices and quadrotors in real-time. Experiments demonstrate that our method outperforms existing NeRF-based mapping methods in geometry accuracy, texture realism, and time consumption. The code will be released at <https://github.com/SYSU-STAR/H2-Mapping>. § INTRODUCTION Using robots to build highly-detailed dense maps in real-time benefits advanced robot autonomous navigation, AR/VR, and digital twins applications. These maps enable robots to perform high-level tasks and provide humans with real-time feedback on the environment, allowing them to adjust the robot's tasks promptly as needed. Besides, high-fidelity maps serve as critical assets for AR/VR and digital twins. The automatic and faithful recreation of environments in real-time using robots can be more efficient and time-saving than manual or offline reconstruction methods. To be suitable for real-time and high-quality robot mapping in unknown environments with limited onboard computation power, a mapping system must meet four key requirements: (1) Adaptability to growing scenes, allowing the robot to dynamically expand the map without prior knowledge of the scene; (2) High level of detail; (3) Real-time capability and high memory efficiency; and (4) Novel view synthesis ability, which allows rendering high-quality images from views apart from the sparse input views. This is particularly important for creating scenes for AR/VR applications. In robotics, mapping has been studied for decades. Previous works utilize explicit scene representations like occupancy grids<cit.>, TSDF <cit.>, surfels<cit.>, and meshes<cit.> to achieve real-time performance. However, these methods face challenges in balancing memory consumption and mapping accuracy<cit.> and are weak in novel view synthesis. In recent years, implicit representations have gained popularity following the introduction of NeRF<cit.>. Several works<cit.> employ NeRF to overcome limitations associated with explicit representations and achieve better mapping results in various aspects. These NeRF-based methods can produce high-fidelity reconstructions using less memory and generate high-quality images from novel views by continuously querying the scene attributes. However, the implicit representation describes the scene as high-dimensional features and neural networks that lack physical meaning, resulting in a long time for training. As a result, these methods cannot run in real-time even on the most powerful edge computers like AGX Orin (as evaluated in Sec.<ref>). Aiming to design a real-time and high-quality robot mapping method that fulfills the four requirements mentioned above, we propose a NeRF-based mapping method using a hierarchical hybrid representation. Our approach accelerates the optimization of implicit representation with the aid of an easy-to-optimize explicit representation, describing the scene at different levels of detail. For the coarse scene geometry, we describe it with explicit octree SDF priors. Specifically, we incrementally build a sparse voxel octree with a large voxel size, where we store the optimizable SDF of each leaf node's vertex. To represent geometry details and texture, we use implicit multiresolution hash encoding<cit.> to encode high-resolution scene properties in a memory-efficient way. By using octree SDF priors to capture coarse geometry efficiently, the multiresolution hash encoding can focus solely on the residual geometry, which is much simpler to learn than the complete geometry, thereby improving the geometry accuracy and convergence rate. To further speed up, we leverage a simple yet effective method to initialize the octree SDF priors. We project the voxel vertices to the depth image and calculate the associated SDF values. This initialization is based on the observation that a single measurement is usually sufficient to provide a promising estimation of coarse SDF values. Therefore, such a representation can obtain accurate geometry early on, which accelerates the optimization of texture with higher fidelity. Besides, to realize higher mapping accuracy, we propose a coverage-maximizing keyframe selection strategy to address the crucial forgetting issue in the online mapping task. Our method avoids redundant sample calculations across all keyframes<cit.> and ensures quality in marginal areas, without increasing the number of training samples<cit.>. Our method achieves faster and higher-quality NeRF-based mapping. To summarize, contributions are as follows: * A hierarchical hybrid representation with an effective initialization technique enables real-time dense mapping with high-fidelity details and dynamical expansion ability, even on edge computers. * An effective coverage-maximizing keyframe selection strategy that mitigates the forgetting issue and improves quality, especially in marginal areas. * Extensive experiments show our method achieves superior mapping results with less runtime compared to existing NeRF-based mapping methods. To the best of our knowledge, our method is the first to run a NeRF-based mapping method onboard in real-time. § RELATED WORKS §.§ Explicit Dense Mapping Various explicit representations have been used to store scene information for dense mapping. Octomap<cit.> uses probabilistic occupancy estimation to represent occupied, free, and unknown space. As a pioneer in using SDF for dense mapping, Kinect-Fusion<cit.> leverages volumetric SDF to enable real-time tracking and mapping. Following works improve the scalability<cit.>, the efficiency<cit.>, and the global consistency<cit.>. Moreover,<cit.> stores surfel to represent the environment, and<cit.> represents the robot's surrounding as a watertight 3D mesh. These methods are well known for their fast processing speed, which can be attributed to the physical meaning of explicit representations that make them easy to optimize. However, they require large amounts of memory to handle high-detailed mapping<cit.> and are incapable of realistically rendering from novel views. §.§ Implicit Dense Mapping Implicit representations utilize latent features and neural networks to represent a 3D scene in a high-dimensional space. DeepSDF<cit.> and Occupancy Networks<cit.> have shown the potential of implicit representation to model geometry. Recently, NeRF<cit.> further shows promising results in realistic novel view synthesis from sparse input views. Numerous studies<cit.> have been inspired by NeRF<cit.> and utilize implicit representation for incremental dense mapping. These methods achieve more compact and accurate results than explicit representations. The NeRF-based mapping pipeline consists of two main components: (1) Scene representation; and (2) Keyframe selection strategy. §.§.§ Scene representation iMap<cit.> demonstrates, for the first time, that an MLP can serve as the only scene representation. To overcome the limited representation capacity of a single MLP, NICE-SLAM<cit.> introduces multi-resolution dense grids to store encoded features of the scene, and MLPs are used to unfold the hidden information. But the pre-allocated grids make NICE-SLAM less scalable and memory inefficient. Vox-Fusion<cit.>, instead, only allocates voxels to the area containing the surface, forcing the network to learn more details in those regions. Nonetheless, due to the difficulty in optimizing implicit representations, it is challenging for these methods to meet real-time requirements for robotics applications. In contrast, our method utilizes a hierarchical hybrid representation for acceleration and accuracy improvement. This approach enables the implicit representation only to handle the residual geometry and texture, by taking the benefit of explicit structure. Optimizing the residual geometry is generally easier and faster. In order to speed up, some previous works aim to accelerate geometry convergence by incorporating geometry priors. INGeo<cit.>, for instance, scales up the initial density prediction by a factor to increase density as it approaches the surface. However, it requires manual configuration and does not provide a reasonable way to set the scaling factor. Go-surf<cit.> initializes its feature grid and geometry decoder to ensure that the initial SDF can represent a sphere centered at the scene origin, but this initialization process cannot adapt to map expansion and has little effect on the observed region. However, due to the hybrid representation, our method can directly initialize the explicit representation by projecting to the input depth image, which can speed up the texture optimization process with higher fidelity by providing accurate geometry in the early stage. Therefore, our method can be deployed to robots for accurate mapping in real time. §.§.§ Keyframe selection strategy iMap<cit.> allocates samples to every keyframe and calculates the loss distribution for selecting keyframes, which can be redundant. NICE-SLAM<cit.> selects optimized keyframes based on the overlap with the current frame. This strategy can keep the geometry outside the current field of view static by using a fixed, pre-trained decoder, but it cannot perform well in marginal areas that are seldom observed. Vox-Fusion<cit.> adds a new keyframe to be optimized based on the ratio of newly allocated voxels to the currently observed voxels. All keyframes are selected to sample the same number of pixels for ray casting, leading to the increasing number of training samples over time. However, our coverage-maximizing keyframe selection strategy ensures that all allocated voxels are covered with minimal iteration rounds, thereby improving the mapping quality, especially in edge regions. § H_2-MAPPING In this work, we propose a real-time and high-quality mapping method, as outlined in Fig.<ref>. Given a set of sequential poses and RGB-D frames, we utilize a hierarchical hybrid representation (Sec.<ref>) to depict the scene geometry and appearance. By employing a coverage-maximizing keyframe selection strategy (Sec.<ref>), we use the volume rendering approach like NeRF<cit.> to obtain the depth and color of each sampled ray (Sec.<ref>) and then optimize the hierarchical hybrid representation (Sec.<ref>). §.§ Hierarchical Hybrid Representation To accelerate the optimization of implicit representation, we propose a hierarchical hybrid representation that explicitly stores SDF priors in an expanded octree and uses the implicit multiresolution hash encoding to only handle residual geometry and texture. §.§.§ Expanded Octree SDF Priors Octree SDF priors When a new frame is received, we allocate new voxels based on the given pose and depth image and incrementally maintain a sparse voxel octree that covers all visible areas. We only add voxels containing more than ten points to the sparse voxel octree to reduce the impact of measurement noise. For each voxel, we store the optimizable SDF in every vertex to represent the coarse geometry of the scene. The coarse SDF s^c of any sample point in a leaf node is obtained from its surrounding eight vertices through the trilinear interpolation function TriLerp(·): s^c = TriLerp(𝐩, {s^c_k}), k ∈ V, where 𝐩 is the position of the sample point, s^c_k is the optimizable SDF of its surrounding vertex, and V is the set of eight vertices in the leaf node. To accelerate the convergence rate, we provide an initial SDF to each s^c_k when allocating new voxels. As shown in the left figure of Fig.<ref>, we project every vertex of each voxel onto the corresponding pixel in the RGB-D camera's frame to obtain an approximate SDF at that position: s^c_prior = 𝐃(u) - d_p, where d_p is the z-axis distance between the sensor and the vertex position 𝐩, 𝐮 is the projected pixel, and 𝐃(u) is the depth value at the pixel 𝐮. To avoid unreasonable SDF priors due to occlusion, we only provide the prior to the vertices where (𝐃(𝐮) - d_𝐩) < √(6)×(VOXEL SIZE). The right figure in Fig.<ref> shows the reconstruction results using only the SDF priors without any optimization. These coarse geometry priors accelerate the geometry optimization and then enhance the the scene's appearance by providing accurate geometry in the early stage, which is evaluated in Sec.<ref>. Expanded Voxels Allocation If the surface is close to the voxel's boundary, the accurate SDF at the position of the vertex near the surface will be close to 0. Therefore, it is possible for the SDF priors stored in that vertex to be optimized to the wrong sign, leading to the loss of the surface. To ensure that a surface will be created, we expand a new voxel if all the points obtained from back-projecting the depth image in the voxel are located at the edge. In Fig.<ref>, for example, the accurate SDF priors of the upper vertices should be positive but are close to 0 (Fig.<ref>(a)). Any slight disturbance in the optimization may cause these values to become negative, resulting in no surface being reconstructed (Fig.<ref>(b)). However, if we allocate an extra voxel on top of it, regardless of the sign to which the vertex near the surface is optimized, a surface will always be built (Fig.<ref>(c)(d)). §.§.§ Multiresolution Hash Encoding In Sec.<ref>, we efficiently obtain a coarse SDF of the scene. In order to obtain the scene's appearance and more detailed geometry, we employed a multiresolution hash encoding approach inspired by Instant-NGP<cit.>. Differing from the SDF implementation in Instant-NGP<cit.>, we only utilize the multiresolution hash encoding to handle the residual SDF which is easy to learn than the complete SDF of the scene. The multiresolution hash encoding works by arranging the surrounding voxels of a particular sample point at L resolution levels. At each level, F dimensional features are assigned to the corners of the voxels by looking up a hash table. To obtain the feature of the sample point, tri-linear interpolation is performed, and the feature at each level is concatenated. We employ two multiresolution hash encoding and shallow MLP attached to individually represent the color and residual SDF of the scene in a compact manner: s = s^c + ℳ_s(ϕ ^s;θ^w_s), 𝐜 = ℳ_c(ϕ ^c;θ^w_c). where ϕ ^c and ϕ ^s are L × F dimensional features obtained from the multiresolution hash encoding, ℳ_s and ℳ_c, parameterized by θ^w_s and θ^w_c, are MLPs to output the residual SDF prediction s and color prediction 𝐜 (three dimensions for R, G, B), respectively. §.§ Coverage-maximizing Keyframe Selection For a new input RGB-D frame, we insert this frame as a new keyframe if the ratio N_o/(N_c+N_l) is smaller than a threshold, where N_c is the number of currently observed voxels, N_l is the number of voxels observed at the last inserted keyframe, and N_o is the number their mutual voxels. Our keyframe insertion strategy ensures that the frames in the keyframe set have relatively little overlap. To select the optimized keyframes from the keyframe set, we employ a coverage-maximizing keyframe selection strategy, as illustrated in Fig. <ref>. At the initial time step t_0, all voxels are labeled as unobserved. We begin by selecting K keyframes that cover the largest number of voxels from the entire keyframe set. We mark these covered voxels as observed, and then optimize these selected keyframes and the current frame jointly. In the next time step t_1, we use the same coverage-maximizing strategy but only for voxels that are still labeled as unobserved. If all voxels have been labeled as observed, we reset the voxels that were previously marked as observed to unobserved and repeat the above process. By using this strategy iteratively, all the scene areas can be covered. As shown in Fig. <ref>(a), most of the voxels are covered in the first time step. In Fig. <ref>(b) and (c), the strategy continues to cover other remaining parts of the scene, ensuring the reconstruction quality of the edge regions. In Sec.<ref>, we further evaluate this strategy. §.§ SDF-based Volume rendering Like Vox-Fusion<cit.>, we only sample points along the ray that intersects with any voxel. And then get rendered color 𝐂 and depth D for each ray as follows: w_j = σ(s_j/tr) ·σ(-s_j/tr) 𝐂 = 1/∑^N-1_j=0w_j∑^N-1_j=0w_j·𝐜_j, D = 1/∑^N-1_j=0w_j∑^N-1_j=0w_j· d_j, where σ(·) is the sigmoid function, s_j and 𝐜_j are the predicted SDF and color obtained from the hierarchical hybrid representation described in Sec. <ref>, N is the number of samples along the ray, tr is a truncation distance and d_j is the sample's depth along the ray. §.§ Optimization Process §.§.§ Loss Function We apply loss functions like Vox-Fusion<cit.>: RGB Loss (ℒ_rgb), Depth Loss (ℒ_d), Free Space Loss (ℒ_fs) and SDF Loss (ℒ_sdf) on a batch of rays R. 0.98!ℒ_fs =1/| R|∑_r∈ R1/P_r^fs∑_p ∈ P_r^fs(s_p - tr)^2 , ℒ_sdf =1/| R|∑_r∈ R1/P_r^tr∑_p ∈ P_r^tr(s_p - s_p^gt)^2 ℒ_d = 1/| R|∑_r∈ R‖ D_r - D^gt_r‖ , ℒ_rgb = 1/| R|∑_r∈ R‖𝐂_r -𝐂^gt_r‖ where P_r^fs is a set of points on the ray r that lies between the camera and the truncation region of the surface measured by the depth sensor, P_r^tr is a set of points within the truncation area. (D_r, D^gt_r) and (C_r, 𝐂^gt_r) are rendered and input depth and color. s_p is the predicted SDF and s_p^gt is the difference between the distance to point p on the ray r and the depth measurement of that ray. The final loss function is a weighted sum of these loss functions. The weights are determined by α_sdf, α_fs ,α_d, and α_rgb. ℒ = α_sdfℒ_sdf + α_fsℒ_fs +α_dℒ_d +α_rgbℒ_rgb §.§.§ Adaptive Early Ending In the training process, the current frame and selected keyframes will be used for optimization several times. As shown in Fig. <ref>, the average iteration time that achieves PSNR convergence varies in different scenarios. Therefore, to adaptively choose an appropriate iteration time that can balance the time consumption and mapping precision in various scenarios, we employ an early stopping policy if the total loss exceeds twice the average total loss of the current training round, indicating that further optimization can only result in a little improvement. § EXPERIMENT To evaluate the performance of our proposed method, we compare its reconstruction accuracy and time consumption with other NeRF-based RGB-D mapping systems on both the synthetic Replica dataset<cit.> and the real-world ScanNet dataset<cit.>. Additionally, we conduct an ablation study to demonstrate the effectiveness of each module in our approach. Furthermore, we deploy our method on a handheld device and quadrotors with limited computational power to test its mapping performance. §.§ Mapping and Rendering Evaluation §.§.§ Implementation Details In our method, the voxel size of the octree's leaf node is 10cm, tr=5cm, and the maximum number of iterations is 10. For the multi-resolution hash encoding, L=4, F=2, T=2^19, and the scale difference of the adjacent level is 2. Due to octree SDF priors, we only use one-layer MLP of size 64 to decode the geometry features. The appearance decoder is a two-layer MLP of size 64. 4096 pixels are selected for each iteration to generate rays and the distance between adjacent sampled points is 1cm. The number of keyframes that are selected to be optimized is K = 10. §.§.§ Baselines We select two advanced NeRF-based dense RGB-D SLAM methods currently open-source, NICE-SLAM<cit.> and Vox-Fusion<cit.> for comparison. However, since we solely focus on incremental mapping, we remove their tracking component and instead provide the ground truth pose. All other aspects remain unchanged. §.§.§ Metrics To evaluate scene geometry, we use the Depth L1 Error[cm] Accuracy[cm], Completion[cm] and Completion Ratio[<5cm%] of reconstructed mesh. Besides, We use SSIM and PSNR to evaluate scene appearance on rendered images from all training views (Interpolation) and distant novel views (Extrapolation). Only the portions of the mesh included in voxels are considered, and non-depth regions are not measured in Depth L1 Error, SSIM, and PSNR. §.§.§ Evaluation on Replica<cit.> In <ref>, we present a quantitative comparison of the reconstruction and rendering performance of our method and the baselines. The results demonstrate that our approach outperforms the baselines for both 2D and 3D metrics. Additionally, we provide a qualitative analysis of the reconstructed mesh and rendered images. Notably, in Fig. <ref>, the mesh obtained by our method appears smoother in areas such as the sofa and floor, while our approach exhibits enhanced geometric details, particularly for smaller objects like chair legs and vases. Fig. <ref> shows that our method can generate renderings with more realistic details from both training views and novel views. §.§.§ Evaluation on ScanNet<cit.> Since ground truth meshes for the ScanNet<cit.> are unavailable, we only provide qualitative analysis on the textured mesh of Scene0000 and Scene0050 as shown in Fig. <ref>. Compared to the baselines, our methods can get sharper object outline, fewer artifacts, and higher-fidelity textures like the pattern on the carpet. §.§.§ Runtime Analysis We select Room0 from Replica<cit.> and Scene0000 from ScanNet<cit.> to evaluate the runtime by comparing our method with the baselines in <ref>. We report the average frame processing time (FPT) on RTX 4090, RTX 2080Ti, AGX Orin, Orin NX separately. Our method is much faster than previous work on various devices. §.§ Ablation Study §.§.§ Octree SDF Priors Octree SDF priors represent an easy-to-optimize explicit structure with a computationally efficient initialization process. As a result, the reconstructed geometry depicted in Fig. <ref> is more accurate at the beginning of optimization, leading to a rapid PSNR increase for the first frame during the early stage. Furthermore, by promptly providing a coarse geometry and utilizing the implicit multiresolution hash encoding exclusively for the residual part, the accuracy and completion metrics in Fig. <ref> eventually converge to a lower level, leading to higher PSNR for the same average iteration times on the entire sequence in Fig. <ref>. The PSNR arises because accurate geometry ensures the gradient of color prediction mainly affects the surface region during backpropagation. Therefore, by enabling a faster and more accurate geometry reconstruction, this hybrid representation achieves the same reconstruction quality with fewer training iterations. This is particularly meaningful for robotic applications with limited computing power. §.§.§ Expanded Voxels Allocation Table <ref> shows the expanded voxel allocation technique has a greater impact on completion. Besides, Fig. <ref> shows the visualization results about Office3 and Office4 of Replica<cit.>. In the left part of (a) and (b), holes are generated in regions where the surface is close to the voxel's boundary, making it easy for the SDF to be optimized to the wrong sign. However, as shown in the right part of (a) and (b), the expansion technique can reduce the holes caused by optimization sensitivity. §.§.§ Coverage-maximizing Keyframe Selection  <ref> demonstrate our keyframe selection strategy can significantly improve the PSNR and Acc. metrics. As shown in Fig. <ref>, our strategy can better optimize the ceiling area in Office2 of Replica<cit.>. Since the number of images corresponding to this area is low in the overall keyframe set, it is rare for previous methods to optimize this region using the random strategy. However, our keyframe selection strategy achieves complete coverage of all the voxels in the keyframe set with minimal iteration rounds, greatly increasing the probability of optimizing the edge regions. §.§.§ Adaptive Early Ending The blue curve in Fig. <ref> shows that increasing the number of iterations leads to higher PSNR values, but the rate of improvement gradually slows down. As the number of iterations varies when using the adaptive early ending, we calculate the average iteration time and the corresponding PSNR, represented by the red star. The results demonstrate this strategy adaptively leads to different average iteration times that are close to convergence in various scenarios, which helps to reduce optimization time without compromising accuracy. §.§ Real-World SLAM Demonstration We demonstrate our mapping method with a tracking module, which completes a SLAM system on a handheld device and quadrotors. Specifically, we employ the Realsense L515 as the vision sensor to provide RGB-D images, and a modified VINS-Mono<cit.> incorporating depth constraints as the tracking module to estimate the pose. The handheld device is powered by AGX Orin, and the quadrotor is equipped with Orin NX. All the programs are running onboard. Fig. <ref> illustrates the results of our real-world experiments. We use the handheld device to reconstruct an apartment (Mesh surface: ≈127m^2) and use the quadrotors for mapping a part of the fight arena (Mesh surface: ≈58m^2). The final mesh is extracted by marching cube<cit.> and all optimization was performed within the mapping procedure without any post-processing or additional training time. To the best of our knowledge, our method is the first to achieve high-quality NeRF-based mapping in real-time on edge computers. More details can be found in the attached video. § CONCLUSION We propose H_2-Mapping, a novel NeRF-based dense mapping system that utilizes hierarchical hybrid representation and can be deployed on edge computers for real-time and high-quality robot mapping. The coarse geometry is represented explicitly using octree SDF priors for fast initialization and convergence, while high-resolution geometry details and texture are encoded implicitly using multiresolution hash encoding in a memory-efficient manner. Furthermore, we propose a coverage-maximizing keyframe selection strategy to improve the reconstruction quality in marginal areas. Baseline comparisons demonstrate that our method outperforms both mapping quality and time consumption. Besides, ablation studies show that the hierarchical hybrid representation effectively accelerates geometry and texture optimization, and the proposed keyframe selection strategy guarantees reconstruction accuracy even in edge areas. However, currently, our method cannot handle dynamic objects and long-term pose drifting, and further speed-up is required.
http://arxiv.org/abs/2306.01405v1
20230602095204
Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping
[ "Baorui Ma", "Yu-Shen Liu", "Zhizhong Han" ]
cs.CV
[ "cs.CV" ]
[ Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping Baorui MaTHU Yu-Shen LiuTHU Zhizhong HanWSU THUSchool of Software, Tsinghua University, Beijing, China WSUDepartment of Computer Science, Wayne State University, Detroit, USA Yu-Shen [email protected] Machine Learning, ICML 0.3in ] < g r a p h i c s > figureWe introduce to learn signed distance functions (SDFs) for single noisy point clouds. Our method does not require ground truth signed distances, point normals or clean points as supervision for training. We achieve this via learning a mapping from one noisy observation to another or even on a single observation. Our novel learning manner is supported by modern Lidar systems which capture 10 to 30 noisy observations per second. We show the SDF learned from (a) a single real scan containing 10M points, (b) the denoised point cloud and (c) the reconstructed surface. Fig. <ref> demonstrates our superiority over the latest surface reconstructions in this case. Learning signed distance functions (SDFs) from 3D point clouds is an important task in 3D computer vision. However, without ground truth signed distances, point normals or clean point clouds, current methods still struggle from learning SDFs from noisy point clouds. To overcome this challenge, we propose to learn SDFs via a noise to noise mapping, which does not require any clean point cloud or ground truth supervision for training. Our novelty lies in the noise to noise mapping which can infer a highly accurate SDF of a single object or scene from its multiple or even single noisy point cloud observations. Our novel learning manner is supported by modern Lidar systems which capture multiple noisy observations per second. We achieve this by a novel loss which enables statistical reasoning on point clouds and maintains geometric consistency although point clouds are irregular, unordered and have no point correspondence among noisy observations. Our evaluation under the widely used benchmarks demonstrates our superiority over the state-of-the-art methods in surface reconstruction, point cloud denoising and upsampling. Our code, data, and pre-trained models are available at <https://github.com/mabaorui/Noise2NoiseMapping/> . § INTRODUCTION 3D point clouds have been a popular 3D representation. We can capture 3D point clouds not only on unmanned vehicles, such as self-driving cars, but also from consumer level digital devices in our daily life, such as the iPhone. However, the raw point clouds are discretized and noisy, which is not friendly to downstream applications like virtual reality and augmented reality requiring clean surfaces. This results in a large demand of learning signed distance functions (SDFs) from 3D point clouds, since SDFs are continuous and also capable of representing arbitrary 3D topology. Deep learning based methods have shown various solutions of learning SDFs from point clouds <cit.>. Different from classic methods <cit.>, they mainly leverage data-driven strategy to learn various priors from large scale dataset using deep neural networks. They usually require the signed distance ground truth <cit.>, point normals <cit.>, additional constraints <cit.> or no noise assumption <cit.>. These requirements significantly affect the accuracy of SDFs learned for noisy point clouds, either caused by poor generalization or the incapability of denoising. Therefore, it is still challenging to learn SDFs from noisy point clouds without clean or ground truth supervision. To overcome this challenge, we introduce to learn SDFs from noisy point clouds via noise to noise mapping. Our method does not require ground truth signed distances and point normals or clean point clouds to learn priors. As demonstrated in Fig. <ref>, our novelty lies in the way of learning a highly accurate SDF for a single object or scene from its several corrupted observations, i.e., noisy point clouds. Our learning manner is supported by modern Lidar systems which produce about 10 to 30 corrupted observations per second. By introducing a novel loss function containing a geometric consistency regularization, we are enabled to learn a SDF via a task of learning a mapping from one corrupted observation to another corrupted observation or even a mapping from one corrupted observation to the observation itself. The key idea of this noise to noise mapping is to leverage the statistical reasoning to reveal the uncorrupted structures upon its several corrupted observations. One of our contribution is the finding that we can still conduct statistical reasoning even there is no spatial correspondence among points on different corrupted observations. Our results achieve the state-of-the-art in different applications including surface reconstruction, point cloud denoising and upsampling under widely used benchmarks. Our contributions are listed below. * We introduce a method to learn SDFs from noisy point clouds without requiring ground truth signed distances, point normals or clean point clouds. * We prove that we can leverage Earth Mover's Distance (EMD) to perform the statistical reasoning via noise to noise mapping and justify this idea using our novel loss function, even if 3D point clouds are irregular, unordered and have no point correspondence among different observations. * We achieved the state-of-the-art results in surface reconstruction, point cloud denoising and upsampling for shapes or scenes under the widely used benchmarks. § RELATED WORK Learning implicit functions for 3D shapes and scenes has made great progress <cit.>. We briefly review methods with different supervision below. Learning from 3D Supervision. It was explored on how to learn implicit functions, i.e., SDFs or occupancy fields, using 3D supervision including signed distances <cit.> and binary occupancy labels <cit.>. With a condition, such as a single image <cit.> or a learnable latent code <cit.>, neural networks can be trained as an implicit function to model various shapes. We can also leverage point clouds as conditions <cit.> to learn implicit functions, and then leverage the marching cubes algorithm <cit.> to reconstruct surfaces <cit.>. To capture more detailed geometry, implicit functions are defined in local regions which are covered by voxel grids <cit.>, patches <cit.>, 3D Gaussian functions <cit.>, learnable codes <cit.>. Learning from 2D Supervision. We can also learn implicit functions from 2D supervision, such as multiple images. The basic idea is to leverage various differentiable renderers <cit.> to render the learned implicit functions into images, so that we can obtain the error between rendered images and ground truth images. Neural volume rendering was introduced to capture the geometry and color simultaneously <cit.>. Learning from 3D Point Clouds. Some methods were proposed to learn implicit functions from point clouds without 3D ground truth. These methods leverage additional constraints <cit.>, gradients <cit.>, differentiable poisson solver <cit.> or specially designed priors <cit.> to learn signed <cit.> or unsigned distance fields <cit.>. One issue here is that they usually assume the point clouds are clean, which limits their performance in real applications due to the noise. Our method falls into this category, but we can resolve this problem using statistical reasoning via noise to noise mapping. Deep Learning based Point Cloud Denoising. PointCleanNet <cit.> was introduced to remove outliers and reduce noise from point clouds using a data-driven strategy. Graph convolution was also leveraged to reduce the noise based on dynamically constructed neighborhood graphs <cit.>. Without supervision, TotalDenoising <cit.> inherits the same idea as Noise2Noise <cit.>. It leveraged a spatial prior term that can work for unordered point clouds. More recently, downsample-upsample architecture <cit.> and gradient fields <cit.> were leveraged to reduce noise. We were inspired by the idea of Noise2Noise <cit.>, our contribution lies in our finding that we can still leverage statistical reasoning among multiple noisy point clouds with specially designed losses even there is no spatial correspondence among points on different observations like the one among pixels, which is totally different from TotalDenoising <cit.>. § METHOD Overview. Given N corrupted observations S={N_i|i∈[1,N],N≥ 1} of an uncorrupted 3D shape or scene S, we aim to learn SDFs f of S from S without ground truth signed distances, point normals, or clean point clouds. Here, N_i is a noisy point cloud. SDFs f predicts a signed distance d for an arbitrary query location q∈ℝ^1× 3 around S, such that d=f(q,c), where c is a condition denoting S. We train a neural network parameterized by θ to learn f, which we denote as f_θ. After training, we can further leverage the learned f_θ for surface reconstruction, point cloud denoising, and point cloud upsampling. Our key idea of statistical reasoning is demonstrated in Fig. <ref>. Using a noisy point cloud N_i as input, our network aims to learn SDFs f_θ via learning a noise to noise mapping from N_i to another noisy point cloud N_j, where N_j is also randomly selected from the corrupted observation set S and j∈[1,N]. Our loss not only minimizes the distance between the denoised point cloud N_i' and N_j using a metric L but also constrains the learned SDFs f_θ to be correct using a geometric consistency regularization R. A denoising function F conducts point cloud denoising using signed distances d and gradients ∇ f_θ from f_θ. Reducing Noise. A common strategy for estimating the uncorrupted data from its noise corrupted observations is to find a target that has the smallest average deviation from measurements according to some loss function L. The data could be a scalar, a 2D image or a 3D point cloud etc.. Here, to reduce noise on point clouds, we aim to find the uncorrupted point cloud N' from its corrupted observations N∈ S below, _N'𝔼_N{L(N',N)}. As a conclusion of Noise2Noise <cit.> for 2D image denoising, we can learn a denoising function F by pushing a denoised image F(x) to be similar to as many corrupted observations y as possible, where both x and y are corrupted observations. This is an appealing conclusion since we do not need the expensive pairs of the corrupted inputs and clean targets to learn the denoising function F. We want to leverage this conclusion to learn to reduce noise without requiring clean point clouds. So we transform Eq. (<ref>) into an equation with a denoising function F, _F∑_N_i∈ S∑_N_j∈ S L(F(N_i),N_j). One issue we are facing is that the conclusion of Noise2Noise may not work for 3D point clouds, due to the irregular and unordered characteristics of point clouds. For 2D images, multiple corrupted observations have the pixel correspondence. This results in an assumption that all noisy observations at the same pixel location are random realizations of a distribution around a clean pixel value. However, this assumption is invalid for point clouds. This is also the reason why TotalDenoising <cit.> does not think Eq. (<ref>) can work for point cloud denoising, since the noise in 3D point clouds is total. Differently, our finding is in opposite direction. We think we can still leverage Eq. (<ref>) to reduce noise in 3D point clouds, and the key is how to define the distance metric L, which is regarded as one of our contributions. Another issue that we are facing is how we can learn SDFs f_θ via point cloud denoising in Eq. (<ref>). Our solution is to leverage f_θ to define the denoising function F. This enables to conduct the learning of SDFs and point cloud denoising at the same time. Next, we will elaborate on our solutions to the aforementioned two issues. Denoising Function F. The denoising function F aims to produce a denoised point cloud N' from a noisy point cloud N, so N'=F(N). To learn SDFs f_θ of N, we want the denoising procedure can also perceive the signed distance fields around N. The essence of denoising is to move points floating off the surface of an object onto the surface. As shown in Fig. <ref> (a), there are many potential paths to achieve this, but only one path is the shortest to the surface. If we leverage this shortest path to denoise point cloud N, we could involve the SDFs f_θ to define the denoising function F, since f_θ can determine the shortest path. Here, inspired by the idea of NeuralPull <cit.>, we also leverage the signed distance d=f_θ(n,c) and the gradient ∇ f_θ(n,c) to pull an arbitrary point n on the noisy point cloud N onto the surface. So we define the denoising function F below, F(n,f_θ)=n-d×∇ f_θ(n,c)/||∇ f_θ(n,c)||_2. With Eq. (<ref>), we can pull all points on the noisy point cloud N onto the surface, which results in a point cloud N'=F(N,f_θ). But one issue remaining is how to constrain N' to converge to the uncorrupted surface. Distance Metric L. We investigate the distance metric L so that we can constrain N' to reveal the uncorrupted surface by a statistical reasoning among the corrupted observations S={N_i} using Eq. (<ref>). Our investigation conclusion is summarized in the following Theorem. Theorem 1. Assume there was a clean point cloud G which is corrupted into observations S={N_i} by sampling a noise around each point of G. If we leverage EMD as the distance metric L defined in Eq. (<ref>), and learn a point cloud G' by minimizing the EMD between G' and each observation in S, i.e., min_G'∑_N_i∈ SL(G',N_i), then G' converges to the clean point cloud G, i.e., L(G,G')=0. L(G,G')=min_ϕ:G→G'∑_g∈G||g-ϕ(g)_2. We prove Theorem 1 in the following appendix. We believe the one-to-one correspondence ϕ found in the calculation of EMD in Eq. (<ref>) plays a big role in the statistical reasoning for denoising. This is very similar to the pixel correspondence among noisy images in Noise2Noise although point clouds are irregular, unordered and have no spatial correspondence among points on different observations. We highlight this by comparing the point cloud G' optimized with EMD and Chamfer Distance (CD) as L based on the same observation set S in Fig. <ref>. Given noisy point clouds N_i like in Fig. <ref> (a), Fig. <ref> (b) demonstrates that the point cloud G' optimized with CD is still noisy, while the one optimized with EMD in Fig. <ref> (c) is very clean. According to this theorem, we can learn the denoising function F using Eq. (<ref>). F produces the denoised point cloud N_i'=F(N_i,f_θ) using EMD as the distance metric L. This also leads to one term in our loss function below, min_θ∑_N_i∈ S∑_N_j∈ S L(F(N_i,f_θ),N_j). Geometric Consistency. Although the term in Eq. (<ref>) can work for point cloud denoising well, as shown in Fig. <ref> (c), we found that the SDFs f_θ may not describe a correct signed distance field. With f_θ either learned with CD or EMD, the surfaces reconstructed using marching cubes algorithms <cit.> in Fig. <ref> (d) and (e) are poor. This is because Eq. (<ref>) only constrains that points on the noisy point cloud should arrive onto the surface but there are no constraints on the paths to be the shortest. This is caused by the unawareness of the true surface which however is required as the ground truth by NeuralPull <cit.>. The issue is further demonstrated in Fig. <ref>, one situation that may happen is shown in Fig. <ref> (b). With the wrong signed distances f_θ and gradient ∇ f_θ, noises can also get pulled onto the surface, which results in a denoised point cloud with zero EMD distance to the clean point clouds. This is much different from the correct signed distance field that we expected in Fig. <ref> (c). To resolve this issue, we introduce a geometric consistency to constrain f_θ to be correct. Our insight here is that, for an arbitrary query n around a noisy point cloud N_i, the shortest distance between n and the surface can be either predicted by the SDFs f_θ or calculated based on the denoised point cloud N_i'=F(N_i,f_θ), both of which should be consistent to each other. Therefore, the absolute value |f_θ(n,c)| of the signed distance predicted at n should equal to the minimum distance between n and the denoised point cloud N_i'=F(N_i,f_θ). Since the point density of N_i' may slightly affect the consistency, we leverage an inequality to describe the geometric consistency, |f_θ(n,c)| ≤min_n'∈ F(N_i,f_θ) ||n-n'||_2. The geometric consistency is further illustrated in Fig. <ref> (d). Noisy points above/below the wing can be correctly pulled onto the upper/lower surface without crossing the wing using the geometric consistency. It achieves the same denoising performance, and leads to a much more accurate SDF for surface reconstruction than the one without the geometric consistency. Loss Function. With the geometric consistency, we can penalize the incorrect signed distance field shown in Fig. <ref> (b) while encouraging the correct one in Fig. <ref> (c). So, we leverage the geometric consistency as a regularization term R, which leads to our objective function below by combining Eq. (<ref>) and Eq. (<ref>), min_θ∑_N_i∈ S(∑_N_j∈ S L(F(N_i,f_θ),N_j)+λ/|N_i|∑_n∈N_iR(E), where |N_i| is the number of n on N_i, E is the difference defined as (|f_θ(n,c)|-min_n'∈ F(N_i,f_θ) ||n-n'||_2), λ is a balance weight, and R(E)=max(0,E). The effect of the geometric consistency is demonstrated in Fig. <ref> (f) and (g). The denoised point cloud in Fig. <ref> (f) shows points that are more uniformly distributed, compared with the one obtained without the geometric consistency in Fig. <ref> (c). More importantly, we can learn correct SDFs f_θ to reconstruct plausible surface in Fig. <ref> (g), compared to the one obtained without the geometric consistency in Fig. <ref> (e) and the ground truth in Fig. <ref> (h). More Details. We sample more queries around the input noisy point cloud N_i using the method introduced in NeuralPull <cit.>. We randomly sample a batch of B queries as input, and also randomly sample the same number of points from another noisy point cloud N_j as target. Using batches enables us to process large scale point clouds, makes it possible to leverage noisy point clouds with different point numbers even we use EMD as the distance metric L, and more importantly, does not affect the performance. We train f_θ to overfit to a single shape or scene or overfit to multiple shapes or scenes using conditions c to indicate different shapes or scenes. We visualize the optimization process in 4 epochs in Fig. <ref> (a). We show how the 3 queries (black cubes) get pulled progressively onto the surface (Cyan). For each query, we also show its corresponding target in each one of 100 batches in the same color (red, green, blue), and each target is established by the mapping ϕ in the metric L. The essence of statical reasoning in each epoch is that each query will be pulled to the average point of all targets from all batches since the distance between the query and each target should be minimized. Although the targets are found all over the shape in the first epoch, the targets surround the query more tightly as the query gets pulled to the surface in the following epochs. This makes queries get pulled onto the surface which results in an accurate SDF visualized in the surface reconstruction and level-sets in Fig. <ref> (b). One Noisy Point Cloud. Although we prove Theorem 1 based on multiple noisy point clouds (N>1), we surprisingly found that our method can also work well when only one noisy point cloud (N=1) is available. Specifically, we regard the queries sampled around the noisy point cloud N_i as input and regard N_i as target. We believe the reason why N=1 works is that the knowledge learned via statistical reasoning in the batch based training can be well generalized to various regions. We will report our results learned from multiple or one noisy point clouds in experiments. Noise Types. We work well with different types of noises in Fig. <ref>. We use zero-mean noises in our proof of Theorem 1, but we find we work well with unknown noises in real scans in experiments. In evaluations, we also use the same type of noises in benchmarks for fair comparisons. § EXPERIMENTS AND ANALYSIS We evaluate our method in two steps. We first evaluate our method in applications that only care about points, such as point cloud denoising and upsampling. So, we only leverage Eq. (<ref>) to produce the denoised or upsampled point clouds. Then, we evaluate our method trained with the loss in Eq. (<ref>) in surface reconstruction, where λ=0.1. §.§ Point Cloud Denoising Dataset and Metric. For the fair comparison with the state-of-the-art results, we follow SBP <cit.> to evaluate our method under two benchmarks named as PU and PC that were released by PUNet <cit.> and PointCleanNet <cit.>. We report our results under 20 shapes in the test set of PU and 10 shapes in the test set of PC. We use Poisson disk to sample 10K and 50K points from each shape respectively as the ground truth clean point clouds in two different resolutions. The clean point cloud is normalized into the unit sphere. In each resolution, we add Gaussian noise with three standard deviations including 1%, 2%, 3% to the clean point clouds. We leverage L2 Chamfer Distance (L2CD) and point to mesh distance (P2M) to evaluate the denoising performance. For each test shape, we generate N=200 noisy point clouds to train our method. We sample B=250 points in each batch. We report our results and numerical comparison in Tab. <ref>. The compared methods include Bilateral <cit.>, Jet <cit.>, MRPCA <cit.>, GLR <cit.>, PCNet <cit.>, GPDNet <cit.>, DMR <cit.>, TTD <cit.>, and SBP <cit.>. These methods require learned priors and can not directly use multiple observations. The comparison with different conditions indicates that our method significantly outperforms traditional point cloud denoising methods and deep learning based point cloud denoising methods in both supervised and unsupervised (“-Un”) settings. Error map comparison with TTD <cit.> and SBP <cit.> in Fig. <ref> further demonstrates our state-of-the-art denoising performance. §.§ Point Cloud Upsampling Dataset and Metric. We use the PU dataset mentioned before to evaluate the f_θ learned in our denoising experiments in point cloud upsampling. Following SBP <cit.>, we produce an upsampled point cloud with an upsampling rate of 4 from a sparse point cloud by denoising the sparse point cloud with noise. We compare the denoised point cloud and the ground truth, and report L2CD and P2M comparison in Tab. <ref>. We compared with PU-Net <cit.> and SBP <cit.>. The comparison demonstrates that our method can perform the statistical reasoning to reveal points on the surface more accurately. §.§ Surface Reconstruction for Shapes ShapeNet. We first report our surface reconstruction performance under the test set of 13 classes in ShapeNet <cit.>. The train and test splits follow COcc <cit.>. Following IMLS <cit.>, we leverage point clouds with 3000 points as clean truth, and add Gaussian noise with a standard deviation of 0.005. For each clean point cloud, we generate N=200 noisy point clouds with a batch size of B=3000. We leverage L1 Chamfer Distance (L1CD), Normal Consistency (NC) <cit.>, and F-score <cit.> with a threshold of 1% as metrics. We compare our methods with methods including PSR <cit.>, PSG <cit.>, R2N2 <cit.>, Atlas <cit.>, COcc <cit.>, SAP <cit.>, OCNN <cit.>, IMLS <cit.> and POCO <cit.>. The numerical comparison in Tab. <ref> demonstrates our state-of-the-art surface reconstruction accuracy over 13 classes. Although we do not require the ground truth supervision, our method outperforms the supervised methods such as SAP <cit.>, COcc <cit.> and IMLS <cit.>. We further demonstrate our superiority in the reconstruction of complex geometry in the visual comparison in Fig. <ref>. More numerical and visual comparisons can be found in the following appendix. FAMOUS and ABC. We further evaluate our method using the test set in FAMOUS and ABC dataset provided by P2S <cit.>. The clean point cloud is corrupted with noise at different levels. We follow NeuralPull <cit.> to report L2 Chamfer Distance (L2CD). Different from previous experiments, we only leverage single N=1 noisy point clouds to train our method with a batch size of B=1000. We compare our methods with methods including DSDF <cit.>, Atlas <cit.>, PSR <cit.>, P2S <cit.>, NP <cit.>, IMLS <cit.>, PCP <cit.>, POCO <cit.>, and OnSF <cit.>. The comparison in Tab. <ref> demonstrates that our method can reveal more accurate surfaces from noisy point clouds even we do not have training set, ground truth supervision or even multiple noisy point clouds. The statistical reasoning on point clouds and geometric regularization produce more accurate surfaces as demonstrated by the error map comparison under FAMOUS in Fig. <ref>. D-FAUST and SRB. Finally, we evaluate our method under the real scanning dataset D-FAUST <cit.> and SRB <cit.>. We follow SAP <cit.> to evaluate our result using L1CD, NC <cit.>, and F-score <cit.> with a threshold of 1% using the same set of shapes. We use single N=1 noisy point clouds to train our method with a batch size of B=5000. We compare our methods with the methods including IGR <cit.>, Point2Mesh <cit.>, PSR <cit.>, SAP <cit.>. We report numerical comparison in Tab. <ref> and Tab. <ref>. Although we only do statistical reasoning on a single noisy point cloud and do not require point normals as SAP <cit.>, our method still handles the noise in real scanning well, which achieves much smoother and more accurate structure. The comparison in Fig. <ref> and Fig. <ref> shows that our method can produce more accurate surfaces without missing parts on both rigid and non-rigid shapes. §.§ Surface Reconstruction for Scenes 3D Scene. We evaluate our method under real scene scan dataset <cit.>. We sample 1000 points per m^2 from Lounge and Copyroom, and only leverage N=1 noisy point cloud to train our method with a batch size of B=5000. We leverage the pretrained models of COcc and LIG and retrain NP and DeepLS to produce their results with the same input. We also provide LIG and DeepLS with the ground truth point normals. Numerical comparison in Tab. <ref> demonstrates that our method significantly outperforms the state-of-the-art. Fig. <ref> further demonstrates that we can produce much smoother surfaces with more geometry details. Paris-rue-Madame. We further evaluate our method under another real scene scan dataset <cit.>. We only use N=1 noisy point cloud with a batch size of B=5000. We split the 10M points into 50 chunks each of which is used to learn a SDF. Similarly, we use each chunk to evaluate IMLS <cit.> and LIG <cit.> with their pretrained models. Our superior performance over the latest methods in large scale surface reconstruction is demonstrated in Fig. <ref>. Our denoised point clouds in a smaller scene are detailed in Fig. <ref>. §.§ Ablation Studies We conduct ablation studies under the test set of PU. We first explore the effect of batch size B, training iterations, and the number N of noisy point clouds in point cloud denoising. Tab. <ref> indicates that more points in each batch will slow down the convergence. Tab. <ref> demonstrates that more training iterations help perform statistical reasoning better to remove noise. Tab. <ref> indicates that more corrupted observations are the key to increase the performance of statistical reasoning although one corrupted observation is also fine to perform statistical reasoning well. We further highlight the effect of EMD as the distance metric L and geometric consistency regularization R in denoising and surface reconstruction in Tab. <ref>. The comparison shows that we can not perform statistical reasoning on point clouds using CD, and EMD can only reveal the surface in statistical reasoning for denoising but not learn meaningful signed distance fields without R. Moreover, we found the λ weighting R slightly affects our performance. More additional studies are in the following appendix. § CONCLUSION We introduce to learn SDFs from noisy point clouds via noise to noise mapping. We explore the feasibility of learning SDFs from multiple noisy point clouds or even one noisy point cloud without the ground truth signed distances, point normals or clean point clouds. Our noise to noise mapping enables the statistical reasoning on point clouds although there is no spatial correspondence among points on different noisy point clouds. Our key insight in noise to noise mapping is to use EMD as the metric in the statistical reasoning. With the capability of the statistical reasoning, we successfully reveal surfaces from noisy point clouds by learning highly accurate SDFs. We evaluate our method under synthetic dataset or real scanning dataset for both shapes or scenes. The effectiveness of our method is justified by our state-of-the-art performance in different applications. § ACKNOWLEDGEMENT We thank reviewers who gave useful comments. This work was supported by National Key R&D Program of China (2022YFC3800600), the National Natural Science Foundation of China (62272263, 62072268), and in part by Tsinghua-Kuaishou Institute of Future Media Data. icml2023 § NETWORK ARCHITECTURES We employ a network that is modified based on OccNet <cit.>. Since the output of OccNet is a value with a range of [0,1], we replace the sigmoid function that produces this output with the tanh function, which can output a signed distance value with a range of [-1,1], where the sign indicates the inside or outside of the 3D shape. In addition, we also replace the Resblock used in OccNet by simple fully connected layers to simplify the OccNet, which highlights the advantage of our method. § QUERY SAMPLING We sample more queries around a noisy point cloud if there is only one noisy point cloud available. We leverage a method introduced by NeuralPull <cit.> to sample queries around each point on the noisy point cloud. § SURFACE RECONSTRUCTION Numerical Comparison. We report more detailed comparison under ShapeNet <cit.>. Due to the text limit in the main body, we only report the mean metric over all 13 classes under ShapeNet. We compare our methods with methods including PSR <cit.>, PSG <cit.>, R2N2 <cit.>, Atlas <cit.>, COcc <cit.>, SAP <cit.>, OCNN <cit.>, and IMLS <cit.>. We report the numerical comparison in terms of L1CD, NC, and F-score in Tab. <ref>, Tab. <ref>, and Tab. <ref>, respectively. Visual Comparison. We report more surface reconstruction results under ShapeNet <cit.> in Fig. <ref>, Fig. <ref> and Fig. <ref>. This comparison demonstrates that our method can reconstruct more geometry details than the state-of-the-art methods. We also highlight our performance on point denoising and surface reconstructions on a large scale real scan in our video. § POINT CLOUD DENOISING Additionally, we visualize our results with larger noises which we use to learn an SDF in point cloud denoising in Fig. <ref>. We tried noises with different variances including {2%,4%,6%,8%,10%}. We can see that our method can reveal accurate geometry with large noises. While our method may fail if the noises are too large to observe the structures, such as the variance of 10 percent. Note that variances larger than 3 percent are not widely used in evaluations in previous studies. § RESULTS ON KITTI Additionally, we report our reconstruction on a road from KITTI in Fig. <ref>. Our method can also reconstruct plausible and smooth surfaces from a single real scan containing sparse and noisy points, please see our reconstruction § COMPUTATIONAL COMPLEXITY We report our computational complexity in the following table. We report numerical comparisons with the latest overfitting based methods including NeuralPull (NP) and PCP using different point numbers including {20K,40K,80K,160K} in Tab. <ref>, where all methods search the nearest neighbors for queries online. NerualPull does not use learned priors while PCP uses learned priors parameterized by a neural network, both of which require the nearest neighbor search as ours. We report the time used to train these methods in 50K iterations. The comparisons indicate that our method uses less storage and less time than its counterparts. Since NP and PCP can not handle noises well, their reconstructions contain severe artifacts on the surface. While our method can handle that well. Please see more numerical comparisons with these methods in our paper. In addition, our results may get more improvements if we train our method more iterations. § ABLATION STUDIES Number of Noisy Point Clouds. We report additional ablation studies to explore the effect of the number of noisy point clouds in all the three tasks including point cloud denoising, point cloud upsampling, and surface reconstruction under the PU test set below. We can see we achieve the best performance with 200 noisy point clouds in all tasks, and the improvement over 100 point clouds is small. So we used 200 to report our results with multiple noisy point clouds in our paper. Point Density. We report the effect of point density in all the three tasks including point cloud denoising, point cloud upsampling, and surface reconstruction under the PU test set below. We learn an SDF from a single noisy point cloud. With more noises, our method can achieve better performance in all the three tasks. One Observation vs. Multiple Observations. Since our method can learn from multiple observations and single observation, we investigate the effect of learning from these two training settings. Here, we combine multiple noisy observations into one noisy observation by concatenation, where we keep the total number of points the same. Table. <ref> indicates that there is almost no performance difference with these two training settings. The reason i § OPTIMIZATION VISUALIZATION We visualize the optimization process in our video. We visualize the noisy points matched by EMD for each query in each epoch. In addition, we also visualize the denoised points using the gradient in the learned SDF in different epochs. § PROOF We proof Theorem 1 in our submission in the following. Theorem 1. Assume there was a clean point cloud G which is corrupted into observations S={N_i} by sampling a noise around each point of G. If we leverage EMD as the distance metric L defined in Eq. (<ref>), and learn a point cloud G' by minimizing the EMD between G' and each observation in S, i.e., min_G'∑_N_i∈ SL(G',N_i), then G' converges to the clean point cloud G, i.e., L(G,G')=0. L(G,G')=min_ϕ:G→G'∑_g∈G||g-ϕ(g)_2, where ϕ is a one-to-one mapping. Proof: Suppose each corrupted observation N_i in the set S={N_i|i∈[1,N]} is formed by m points, and N_i={n_i^k|k∈[1,m],m≥ 1}. With the same assumption, either G or G' is also formed by m points, G={g^k|k∈[1,m],m≥ 1}, G'={g'^k|k∈[1,m],m≥ 1}. Assuming each noise n_i^k is corrupted from the clean g^k, we leverage this assumption to justify the correctness of our proof. L(G',S)=∑_N_i∈ SL(G',N_i). (a) When m=1, this is similar to Noise2Noise <cit.>, L(G',S)= ∑_i=1^N(g'^1-n_i^1)^2. ∂ L(G',S)/∂G' =2∑_i=1^N(g'^1-n_i^1). ∂ L(G',S)/∂G'=0 → g'^1 = 1/N∑_i=1^Nn_i^1. Since S={N_i} is a set corrupted from the clean point cloud G, g^1=1/N∑_i=1^Nn_i^1. Furthermore, we also get g'^1=g^1. From Eq. (<ref>), we can also get the following conclusion, min_G'L(G',S)↔G'=𝔼(ϕ(G')), where ϕ={ϕ_i|i∈[1,N]} is a set of one-to-one mapping ϕ_i which maps G' to each corrupted observation N_i in S. (b) When m≥2, assuming that we know which noisy point n_i^k on each point cloud N_i is corrupted from the clean point g^k. We regard the correspondence c_i between {n_i^k|i∈[1,N]} and g^k as the ground truth, so that we can verify the correctness of our following proof. Note that we did not use this assumption in the proof process. So, we can represent the correspondence using the following equation, 𝔼(n(k))=1/N∑_i=1^Nn_i^k=g^k, where n(k)={n_i^k|i∈[1,N]}. As defined before, ϕ_i is the one-to-one mapping established in the calculation of EMD between G' and N_i. Therefore, the distance between G' and noisy point cloud set S is, L(G',S)=∑_k=1^m(∑_i=1^N((g'^k-ϕ_i(g'^k))^2)), There are two cases. One is that the one-to-one mapping ϕ_i is exactly the correspondence ground truth c_i. The other is that ϕ_i is not the correspondence ground truth. Case (1): When ϕ_i(g'^k)=n_i^k, i∈[1,N], this is consistent with (a), so the Theorem 1 gets proved. Case (2): When ϕ_i(g'^k)≠ n_i^k, assuming ϕ_i(g'^k)=n_i^a_k,i, A_k={n_i^a_k,i|i∈[1,N]}, A_k is a set corresponding to g'^k. When minimizing L(G',S)=∑_k=1^m∑_i=1^N(g'^k-ϕ_i(g'^k))^2, according to Eq. (<ref>), g'^k=𝔼(ϕ_i(g'^k)), so Var(A_k)=1/N∑_i=1^N((g'^k-𝔼(ϕ_i(g'^k)))^2). When m=2, min_G'L(G',S)=min(Var(A_1)+Var(A_2)). We assume A_1=n_s^1+n_cs^2 to simply the following proof, where s is a subset of set [1,N], cs is the complement of set s, so A_2=n_s^2+n_cs^1. Assuming 𝔼(A_1)=g^1+Δ, Δ is the point offset of g^1, because of 𝔼(A_1)+E(A_2)=g^1+g^2, so 𝔼(A_2)=g^2-Δ, L(G',S) = (Var(A_1)+Var(A_2)) = 𝔼(A_1-(g^1+Δ))^2+𝔼(A_2-(g^2-Δ))^2 = 1/N(∑_i=1^N(n_i^a_1,i)^2+∑_i=1^N(n_i^a_2,i)^2+N(g^1+Δ)^2 +N(g^2-Δ)^2-2∑_i=1^Nn_i^a_1,i(g^1+Δ) -2∑_i=1^Nn_i^a_2,i(g^2-Δ)) = 𝔼((n(1))^2)+𝔼((n(2))^2)+𝔼^2(n(1))+ 𝔼^2(n(2))+2Δ^2+2g^1Δ-2g^2Δ- 2/N(g^1∑_i=1^Nn_i^a_1,i+g^2∑_i=1^Nn_i^a_2,i+ Δ∑_i=1^Nn_i^a_1,i-Δ∑_i=1^Nn_i^a_2,i) = 𝔼((n(1))^2)+𝔼((n(2))^2)+𝔼^2(n(1))+ 𝔼^2(n(2))+2/N(Δ(n_s^1+n_cs^1)-Δ(n_s^2+n_cs^2) -Δ(n_s1+n_cs^2)+Δ(n_cs^1+n_s^2)- g^1∑_i=1^Nn_i^a_1,i-g^2∑_i=1^Nn_i^a_2,i) = 𝔼((n(1))^2)+𝔼((n(2))^2)+𝔼^2(n(1))+ 𝔼^2(n(2))+2Δ^2+2Δ/N(2n_cs^1- 2n_cs^2)-2/N(g^1N(g^1+Δ)+g^2N(g^2-Δ)) = 𝔼((n(1))^2)+𝔼((n(2))^2)-𝔼^2(n(1))- 𝔼^2(n(2))+2Δ^2+ 2Δ/N(2n_cs^1-2n_cs^2-n_cs^1-n_s^1+n_cs^2+n_s^2) = 𝔼((n(1))^2)+𝔼((n(2))^2)-𝔼^2(n(1))- E^2(n(2))+2Δ(g^2-g^1)-2Δ^2 = Var(n(1))+Var(n(2))+2Δ(g^2-g^1)-2Δ^2 Because the first two terms of the formula are constants, the entire formula becomes a quadratic formula, so when Δ=0 or Δ=g^2-g^1, the value of L(G',S) is minimized. Δ=0 is consistent with Case (1). Δ=g^2-g^1, ϕ_i(g^1)=n_i^2, ϕ_i(g^2)=n_i^1, this is also the same correspondence as the ground truth, so Theorem 1 gets proved. When m2. We can extend the proof from the two sets A_1 and A_2 to multiple sets A_1,A_2,⋯,A_m, and the proof process is similar to the above.
http://arxiv.org/abs/2306.06303v1
20230609234119
Torus skin outflow in a near-Eddington quasar revealed by spectropolarimetry
[ "Nadia L. Zakamska", "Rachael M. Alexandroff" ]
astro-ph.GA
[ "astro-ph.GA" ]
firstpage–lastpage Validation of the Scientific Program for the Dark Energy Spectroscopic Instrument [ July 31, 2023 ================================================================================= Even when the direct view toward the active nucleus is obscured, nuclear emission propagating along other directions can scatter off surrounding material, become polarized and reach the observer. Spectropolarimetry can thus be an important tool in investigating the circumnuclear geometry and kinematics of quasars on scales that cannot yet be probed via direct observations. Here we discuss an intriguing class of quasars where the polarization position angle swings by large amounts (∼ 90) within an emission line. We investigate a kinematic model in which the scattering dust or electrons are in an axisymmetric outflow. We propagate Stokes parameters in a variety of geometries of emitter, scatterer and observer. We use these models to predict polarization fraction, line profiles and polarization position angles and compare them to observations. We demonstrate that the swinging polarization angle can be a result of the geometry of the outflow and the orientation of the observer. Polarization properties of a near-Eddington extremely red quasar SDSS J1652 can be successfully explained by a model in which the quasar is surrounded by a geometrically thick disk, whose `skin’ is outflowing at ∼ 1000 km s^-1 and acts as the scatterer on scales of a few tens of pc. The line of sight to the observer in this source is within or close to the skin of the torus, in agreement with multi-wavelength data. Spectropolarimetric data and models presented here strongly support the thick-disk geometry of circumnuclear material suggested by recent numerical simulations of high-rate accretion flows onto black holes. galaxies: active – polarization – quasars: emission lines – quasars: general § INTRODUCTION Active galactic nuclei (AGN) powered by accreting supermassive black holes present with a wide range of observational phenomenology. One of the first successful classifications of AGN was based on the presence or absence of broad permitted emission lines and blue continua in their optical spectra <cit.>, resulting in type 1 and type 2 designations. Subsequent spectropolarimetry of type 2 AGN revealed the presence of broad lines and blue continua in the polarized spectra <cit.>. This key observation gave rise to the geometric unification model of AGN <cit.>, which successfully explains many phenomenological differences between AGN by varying the orientation of the observer relative to optically-thick circumnuclear obscuration. Even if the broad-line region of the AGN cannot be directly seen by the observer due to intervening clouds of gas and dust, some of its emission escapes along other directions, scatters off the surrounding material, becomes polarized and reaches the observer who then detects the nuclear spectrum in the reflected polarized light. Scattered light observed using spectropolarimetry has now been used to probe the geometry of AGN in a wide range of objects, from nearby classical Seyfert galaxies <cit.> to more powerful quasars at moderate redshifts <cit.> to high-redshift universe <cit.>. Some of these studies focused on the dichotomy between the broad-line region and the narrow-line region which are separated by a wide range of scales, with the scattering regions directly visible in high-resolution images of the host galaxy <cit.>. In contrast, spectropolarimetry of broad absorption line quasars and type 1 quasars can constrain geometry on nuclear scales which cannot yet be probed by any other methods <cit.>. It is often difficult to interpret polarimetric observations. Even the dominant scattering agent – electrons vs dust – is sometimes problematic to pin down. Electron scattering is favored by a largely wavelength independent scattering efficiency and by the high values of polarization seen in some type 2 AGN, which are in some tension with those achievable in dust scattering. On the other hand, for a standard gas-to-dust ratio dust scattering is more efficient than electrons, and observations of kpc-scale scattering regions where dust is unlikely to be destroyed <cit.> suggest that dust scattering dominates. The situation can get even more complex in spectropolarimetric observations of certain emission lines. In addition to dust and electron scattering, resonant scattering may be important for producing line polarization <cit.>. Objects with powerful jets can also be highly polarized, both in the radio and in the optical <cit.>, but this is due to the synchrotron emission mechanism rather than scattering, and we do not discuss these cases further. Velocity structure of the polarization fraction and the polarization position angle has been seen both in narrow emission lines in type 2 AGN <cit.> and in broad emission and absorption lines in type 1s <cit.>. In type 2 AGN, the suppression of the polarization fraction within the narrow emission lines relative to the continuum likely indicates that the scatterer and the narrow-line emitter are on similar physical scales, so that the polarization is suppressed by geometric cancellation <cit.>. In contrast, in type 1s the polarization fraction can be suppressed or enhanced across the broad lines <cit.>, meaning that the continuum-emitting region, the broad-line region and the scatterer can have a variety of size hierarchies. Clearly there is a wealth of information about both the emitter and the scatterer in spectropolarimetric data, but the diversity of observational signatures, geometries and scattering mechanisms can make the interpretation of spectropolarimetric observations very complicated. In this paper we develop a kinematic model of an axisymmetric scattering region or wind which allows us to model the velocity structure of the scattered and polarized light as seen in optical and ultra-violet (UV) emission lines for comparison with spectropolarimetric observations. A class of objects of particular interest to us is extremely red quasars (ERQs), a fascinating exclusively high-redshift (z∼ 2) population which was identified by their high infrared-to-optical ratios, extremely high bolometric luminosities reaching 10^48 erg s^-1 and peculiar rest-frame UV spectra with oddly shaped, high equivalent width emission lines <cit.>. Upon follow-up observations, the ERQs turned out to have the fastest outflows of ionized gas seen in the [OIII]λ5008Å emission line of any quasar population <cit.>. These outflows are now unambiguously detected on galaxy-wide scales <cit.> and are therefore extremely powerful and suspected of undergoing the long-sought `blow-out' phase of quasar feedback on the host galaxy <cit.>. Both their extremely high luminosities and their extreme outflow activity suggest that they are near- or super-Eddington sources <cit.>. These objects are also very highly polarized in the rest-frame UV, with peculiar kinematic structure likely reflecting the geometry of the circumnuclear gas flows <cit.>. With spectropolarimetric observations and modeling we can hope to resolve the internal kinematics of the emission and scattering regions which may not be accessible via any other techniques. In Section <ref> we introduce the observational phenomenology we are aiming to explain. In Section <ref> we present the model setup, in Section <ref> we discuss model results and comparisons with observations, in Section <ref> we discuss the implications of our results and we conclude in Section <ref>. Emission line wavelengths are given in vacuum. Ground-based observations are converted onto the vacuum wavelength scales using <cit.>. The orientation of polarization is defined by the orientation of the electric field E⃗ in the scattered electromagnetic wave. Polarization position angles (PAs) β are measured East of North, with Q=1 and U=0 corresponding to the E-vector of polarization oriented in the North-South direction. We use lower case q and u for fractional polarization. The scattering angle ψ is defined as the angle between the wave vectors of the incident and the scattered photons. We use a flat Ω_m=0.3, Ω_Λ=0.7, h=0.7 cosmology for computing distances and luminosities. § OBSERVATIONAL MOTIVATION §.§ Extremely red, near-Eddington quasar SDSS J1652 Our prototypical target is SDSS J165202.64+172852.3, hereafter SDSS J1652, an ERQ <cit.> at z=2.9 originally selected by its high infrared-to-optical ratio. When classified based on its UV and optical emission-line properties, it is a high-redshift type 1.8-2 (obscured) quasar candidate, with a high equivalent width of CIVλ1550Å and a high [OIII]λ5008Å/Hβ ratio <cit.>. The width of its CIV emission line (full width at half maximum=2400 km s^-1) places it just above the standard cutoff (<2000 km s^-1) for type 2 quasar selection <cit.>, and there is a weak broad component in Hβ <cit.> and a stronger broad component in Hα <cit.>. X-ray observations confirm that the source is highly obscured, with a column density between the observer and the X-ray emitting corona of N_H≃ 10^24 cm^-2 <cit.>. As other ERQs, SDSS J1652 shows a blueshifted and broad (velocity width containing 80% of line power w_80=1760 km s^-1; ) [OIII]λ5008Å emission line indicative of outflow activity on scales ≫ 100 pc where this line cannot get collisionally de-excited. The object was therefore observed by James Webb Space Telescope in its first weeks of science operations as part of an Early Release Science program (“Q-3D”, PI: Wylezalek) to investigate quasars with strong galaxy-scale outflows. High-velocity (several hundred km s^-1) outflows in this source have now been detected over the entire extent of the galaxy <cit.>. The object inhabits a massive (a few L_*) galaxy with multiple companions and tidal tails indicating merging activity <cit.>. Spatially resolved kinematic maps from JWST allow identification of narrow emission lines associated with the ionized gas in the host galaxy, giving a precise redshift of z=2.9489 <cit.>. It is in excellent agreement with that based on the narrow component of [OIII] seen in the spatially integrated ground-based data <cit.>. All velocities are hereafter measured relative to this frame. The bolometric luminosity of the source of 5× 10^47 erg s^-1 <cit.> corresponds to the Eddington luminosity of a 4× 10^9 M_⊙ black hole. While there is no independent method for measuring black hole masses for objects with strong outflow activity and unknown and possibly appreciable obscuration levels, this value is at or above the maximal mass of black holes at present epoch <cit.>, so we assume that SDSS J1652 cannot be much more massive and the Eddington ratio must be close to or exceed unity. In Figures <ref> and <ref> we show spectroscopic and spectropolarimetric observations of SDSS J1652 obtained using Keck's Low Resolution Imaging Spectrometer. Data acquisition and processing to obtain F_λ, Q_λ and U_λ are described by <cit.>. We identify the following important polarization properties: * All emission lines are blueshifted relative to the host galaxy rest frame known from other data; the velocity offsets of the peaks from the expected position range between -450 and -850 km s^-1. * While the UV and optical continuum and emission lines are well detected, the high level of continuum polarization (up to ∼ 20%) suggests that the entire UV emission may be due exclusively to scattered light without any direct contributions from the broad-line region and continuum. * The polarization position angle varies dramatically within the emission lines, `swinging' by as much as 90 . * As a function of velocity, the pattern is similar within all emission lines: the polarization position angle is the same in the red part of the line as the position angle of the continuum, and the `swing' affects the blue part. * The degree of polarization is smaller in the blue part of the emission line than in the red part. * The peak of the polarized intensity is redshifted in comparison to the peak of the total line flux. In the rest-frame UV continuum imaging of the object using Hubble Space Telescope, there is a clear detection of an extended nebula to the Southwest of the nucleus, with a possible fainter counterpart to the Northeast <cit.>. This nebula is orthogonal to the continuum polarization position angle (125 ), and therefore it is interpreted as the scattered light from the region of the host galaxy illuminated along the polar direction of circumnuclear obscuration, by analogy to low-redshift type 2 quasars <cit.>. The gas with the highest ionization levels – photoionized by direct quasar emission – is oriented in the same direction, further supporting this interpretation. The Southwestern region is likely tilted toward the observer and appears brighter and detectable in ground-based data <cit.>, whereas the Northeastern region is pointed away from the observer and required JWST data to identify <cit.>, likely because it is extincted by the intervening dust. Thus the orientation of the polar axis of circumnuclear obscuration is known from the spatially resolved observations. We take this axis to be at position angle β_0=45 East of North and we apply a rotational transformation to the observed Q and U Stokes parameters to obtain Q' and U' relative to this new axis, which we show in Figure <ref>. The continuum polarization has values Q'<0, which means it is orthogonal to the chosen axis of origin, but within emission lines Q' `swings' to positive values on the blue wings. U' values – which reflect values of polarization at ± 45 to the chosen axis – are close to zero. §.§ Other objects with swinging polarization angle Some of the tell-tale spectropolarimetric properties listed above are manifested by other quasars. Of the four objects other than SDSS J1652 presented by <cit.>, two more show dramatic changes in the polarization position angle across the emission lines. One (SDSS J1515+1757) is a classical type 2 quasar both at rest-frame UV and at optical wavelengths as identified by the width of the emission lines <cit.>. The other one (SDSS J1623+3122) satisfies both type 2 and ERQ selection criteria and shows clear evidence of [OIII]λ5008Å ionized gas outflow in its infrared spectrum. Both objects are polarized at the ∼ 10% level. There are interesting differences between the properties of these two sources and SDSS J1652 – in particular, in both cases the `swinging' part of the line is redshifted – by 300-600 km s^-1 – relative to the total line centroid, instead of blueshifted as it is SDSS J1652. Mrk 231 is a nearby reddened FeLoBAL quasar which has an UV absorption system blueshifted by ∼4,600 km s^-1 with respect to the emission lines <cit.>, indicative of a dusty, outflowing BAL screen with a covering factor of 90% <cit.>. In spectropolarimetric observations of a broad Hα emission line, the polarization position angle changes by 10^∘ over line in a characteristic `S'-shaped pattern, with the polarization position angle of the redshifted wing of Hα roughly matching the continuum polarization position angle. The polarization fraction is about 3.5% in the continuum, dips to 2.5% on the blue wing of Hα and increases to nearly 5% on the red wing. As a result, the polarized line intensity is redshifted compared to the overall line intensity <cit.>. PG1700+518 is a low-redshift (z=0.288) BAL quasar with a set of UV absorption systems at velocities ranging between 7,000 and 18,000 km s^-1 <cit.>. In optical spectropolarimetry, both the broad Hα and to a lesser extent Hβ show features similar to those of SDSS J1652. Relative to the 1% continuum polarization, the polarization fraction dips to 0.5% on the blue wing of the line and increases on the red wing of the line to 1.5%, resulting in a net redshift of nearly 4,000 km s^-1 of the polarized line profile relative to the total line intensity <cit.>. The polarization position angle (PA) rotates by ∼90 within the emission line, though unlike SDSS J1652, the PA within the emission line does not match the continuum PA either on the redshifted or on the blueshifted wing of the line profile. The presence of broad Balmer emission lines and the relatively low levels of polarization seen in the latter two sources (1-2%) suggest that despite intervening absorption and extinction, a large fraction of the observed optical and UV emission is due to directly seen nuclear continuum and circumnuclear broad-line region. This component is in general unpolarized, so it dilutes the scattered light signal to relatively low polarization values. In obscured quasars of <cit.>, the direct unpolarized light from the nucleus is hidden and the levels of polarization are correspondingly higher (∼ 10% and above). Previously some of these spectropolarimetric features were modeled by a rotating disk wind (e.g., in the models of Seyfert 1 galaxies and PG1700+518 by ) or by multiple scattering components (e.g., in Mrk 231 by ). In light of the extreme polarization properties of SDSS J1652 (large PA swing, high levels of polarization) we revisit spectropolarimetric modeling in this paper and explore various scattering geometries in search of a natural explanation for the observed spectropolarimetric phenomenology. As some of the objects of interest show strong outflows, we are particularly interested in models that can simultaneously account for the outflow activity and the polarization properties. §.§ Polarization mechanism In the quasars we discuss here, which are not dominated by jet emission, polarization is expected to be dominated by scattering. Scattering can be produced by free electrons (Thomson scattering), dust <cit.>, or, for emission lines, by partly ionized gas which resonantly scatters incident photons if they arrive with the right range of energies to match the transition in question. There are some qualitative similarities between all three mechanisms – in particular, the dominant emerging polarization is perpendicular to the scattering plane. This property of scattering allowed for major break-throughs in understanding the geometry of AGN obscuration <cit.>. But there are also qualitative and quantitative differences. Resonant scattering can have high optical depth per small column density of gas, but does not work for continuum emission. When dust is present, even in fully ionized gas it is a more efficient scatterer than electrons, by up to two orders of magnitude depending on the wavelength, but because of the admixture of particle sizes it is a less efficient polarizer than electrons. All three types of scatterers are known to be important in active nuclei, depending on spatial scales and observed wavelengths: electron scattering may dominate on small scales close to the nucleus where dust is destroyed and / or at wavelengths where dust is an inefficient scatterer <cit.>, dust scattering may dominate in the optical for scattering on galactic scales <cit.>, and Lyα resonant scattering in the surrounding gas-rich galaxy and halo may have a major effect on the observed emission line profiles <cit.>. In this paper we analyze both line and continuum polarization of a class of quasars defined by certain similarities in their polarization properties, so our first task is to determine which, if any, scattering process is likely to dominate. In <cit.>, we noted the similarity between the overall polarization fraction and polarization position angle in the continuum and in the red wings of the emission lines, which argued in favor of the same polarization mechanism for the lines and the continuum and therefore against resonant scattering. We then found that resonant scattering optical depth could be high, e.g., for CIVλ1550Å emission τ_ res/τ_ dust≃ 300×η_ CIV for a typical range of gas velocities (∼ 3000 km s^-1). This calculation assumes that gas and dust are well mixed, and η_ CIV is the fraction of carbon in the relevant ionization state, up to 0.3. Because η is sensitive to the location and physical conditions of the scatterer, we left the question of the importance of resonant scattering unresolved. We tackle the contribution of resonant scattering again, now using observations of multiple different emission lines in SDSS J1652 as a new constraint (Figure <ref>). For resonant scattering the polarization fraction as a function of scattering angle ψ is p_ res(ψ)=p_0sin^2ψ/1+p_0cos^2ψ. Here p_0, the maximum level of polarization achieved at ψ=90, depends on the angular momentum quantum numbers J_e and J_g of the excited and the ground state and is tabulated by <cit.> and <cit.>. Crucially, p_0=0 for the following three combinations of J_g and J_e: (i) J_e=0, J_g=0; (ii) J_e=0, J_g=1; and (iii) J_e=1/2, J_g=3/2. Therefore, if there are polarized transitions with these values in our spectra, we can rule out resonant scattering as the dominant mechanism. We use the NIST atomic spectra database[<https://www.nist.gov/pml/atomic-spectra-database>] to record the J_e and J_g values for all transitions shown in Figures <ref> and <ref>. Several features (Lyβ, Lyα, NV and CIV) are mixes of J_e=1/2→ J_g=1/2 (p_0=0) and J_e=3/2→ J_g=1/2 (p_0=0.429) transitions with equal or similar wavelengths and Einstein coefficients, so the resulting scattering can be polarized and therefore they do not provide a clean test of the resonant scattering mechanism. The `smoking gun' feature turns out to be the blend of SiIV 1393.8Å (J_e=3/2→ J_g=1/2 with p_0=0.429), SiIVλ1402.8Å (J_e=1/2→ J_g=1/2 with p_0=0) and OIVλ1401.4Å with J_e=1/2→ J_g=3/2 (p_0=0), J_e=1/2→ J_g=1/2 (p_0=0) and J_e=3/2→ J_g=5/2 (p_0=0.015). In Figure <ref>(c), the line is clearly highly polarized, with the redder component (made up of SiIVλ1402.8 and OIVλ1401.4) showing polarization levels (close to 20%) similar to those or higher than those of the bluer component. In contrast, if resonant scattering were the dominant polarization mechanism, the red part would be expected to be unpolarized or polarized at a very low level, since the only potentially polarized scattering would be in the J_e=3/2→ J_g=5/2 transition of OVI whose Einstein coefficient is an order of magnitude below those of other transitions within the red wing of the blended feature and whose maximal polarization p_0 is 1.5%. The high polarization of the SiIV+OIV blend leads us to conclude that resonant scattering is not the dominant polarization mechanism in SDSS J652. In what follows we therefore only consider dust and electron scattering. § MODEL SETUP §.§ Overall geometry of the problem In our model, both the emission-line region and the scattering region are axisymmetric. The model allows for a variety of conical morphologies for both, i.e., a polar scattering region, or an equatorial scattering region, or one that's confined to a narrow range of polar angles, and similar morphologies for the emission region. The key simplifications of our model are (i) that the emission-line region is point-like compared to the scatterer, (ii) that the velocity of the scatterer is purely radial and constant as a function of distance, (iii) that multiple scattering events are negligible, and (iv) that there is no extinction before or after scattering. Under these assumptions, each radial shell of the scatterer produces scattered light with the same kinematic pattern: the same kinematic structure of the scattered line, the same polarization fraction and the same polarization position angle. The impact of multiple scattering has been considered by <cit.> and <cit.>, and the effects of extinction within the scattering region by <cit.>. Here we instead would like to specifically focus on the effects of outflow geometry and connect them to observations. If we are interested in the kinematic structure of the scattered emission and in the fractional polarization, but not in the overall scattered intensity, in our models with conical symmetry of scatterer we can consider only one radial shell. An additional integration of all Stokes parameters over the radial shells, each with its own geometry, would allow for any axisymmetric scattering structures. While we do not consider rotating winds here <cit.>, the code can be amended to include axisymmetric rotation as well, by incorporating the rotational component of velocity into Doppler shift equations (<ref>)-(<ref>). The coordinate system is set up in Figure <ref>. The object's axis of symmetry is along the x axis, and the observer is in the x-z plane. y axis is then added to produce a right-handed coordinate system x-y-z. To translate between spherical and Cartesian systems as necessary, the spherical system is set up with x as its polar axis, polar angles θ are counted from the x-axis and azimuthal angles φ are counted from the y-axis. In this coordinate system, the unit vector toward the observer is n_ obs=(cosθ_ obs, 0, sinθ_ obs). Photons originate in a compact emission region at the center of the coordinate system and propagate along the directions with unit vectors n_ s=(cosθ_ s,sinθ_ scosφ_ s,sinθ_ ssinφ_ s). The scattering angle ψ is the angle between the initial direction of propagation and the scattered direction, i.e., the direction toward the observer, so that cosψ= n_ obs· n_ s. §.§ Phase function and polarization fraction Regardless of the scattering mechanism (electron scattering, dust scattering, resonant scattering), both the phase function of scattering – i.e., the angular dependence of the scattering cross-section – and the polarization fraction of scattered light depend only on the scattering angle ψ. We define the phase function through the differential cross-section as dσ/ dΩ=1/4πσ_ totalg(ψ), so that ∫ g(ψ) dΩ=4π. Both for electron (Thomson) scattering and for Rayleigh scattering (when the dust particles have sizes much smaller than the wavelength of light), the phase function of scattering is g_ TR(ψ)=3(1+cos^2ψ)/4. Polarization fraction p(ψ) defined here specifically for unpolarized incident light is a signed value (I_⊥-I_∥)/(I_⊥+I_∥), where I_⊥ and I_∥ are the intensities of scattered light in polarization modes perpendicular and parallel to the scattering plane <cit.>. Therefore, the sign of p(ψ) tells us the dominant direction of the emerging polarization. In Thomson and Rayleigh scattering p_ TR(ψ)=sin^2ψ/(1+cos^2ψ)≥ 0, so the emerging polarization is always perpendicular to the scattering plane. Astrophysical dust contains particles of many different sizes, not necessarily small compared to the wavelength, so the Rayleigh approximation is insufficient. <cit.> uses Mie approximation (spherical dust particles) to calculate the phase function g_ dust(ψ) and the polarization fraction p_ dust(ψ) for dust size distributions appropriate for the Milky Way and the Magellanic Clouds dust. We interpolate over the values in these numerical tables to obtain continuous phase and polarization functions. While p_ dust(ψ) is almost always positive for dust scattering, it can be negative for obtuse scattering angles (backward scattering) for some combinations of size distributions and wavelengths, resulting in polarization within the scattering plane (the effect is best illustrated in Figure 5 of ). We retain the sign information in p_ dust(ψ) to correctly incorporate it into the calculation of the polarization position angle. §.§ Calculating the line-of-sight velocity distribution The point-like emission-line source has its own velocity structure, which would be blocked from the observer in an obscured (type 2) active nucleus but could be seen to an unobscured (type 1) observer. Following the model for Mrk 231 by <cit.>, we consider line emission (e.g., CIV) to be isotropically produced by the gas outflowing with velocity v_ in in an axisymmetric geometry, which is then scattered on much larger scales by a wind moving with a potentially different velocity v_ s in a potentially different axisymmetric geometry. To calculate the Doppler shift resulting from scattering, we start with an emitter moving with v_ in which makes a photon propagating along n_ s, which in turn hits the scatterer moving with v_ s. Because we only consider radial motions for the scatterer, n_ s and v_ s are co-directional. In the rest-frame of the emitter, the photon has wavelength λ_0, the laboratory wavelength of the emission line in question. To the first order in v/c (sufficient for winds with velocities ∼ a few thousand km s^-1) the scatterer sees this photon coming at it with λ'=λ_0(1+v_ s- v_ in· n_ s/c). As seen in the scatterer frame, the photon arrives and is then scattered with the same wavelength. But the observer sees another Doppler shift due to the motion of the scatterer: λ_ obs=λ'(1- v_ s· n_ obs/c). Therefore, the observer would infer the line-of-sight velocity v_ LOS=v_ s(1-cosψ)- v_ in· n_ s (positive for redshift and negative for blueshift). We assume that the cosmological redshifts have already been taken into account and that we are considering an observer which is at a large distance from the nucleus, but is in the rest frame of its host galaxy. To calculate the complete line profile of scattered intensity we integrate over the distribution function for each emitting and each scattering direction: I(v_ LOS)=∫sinθ dθ dφsinθ_s dθ_s dφ_s g(ψ) × δ(v_ LOS-v_ s(1-cosψ)+ v_ in· n_ s). This is a four-dimensional integral over the polar and azimuthal directions of the initial gas velocity (producing the intrinsic nuclear line profile) and over the polar and azimuthal directions of the scatterer. The Dirac delta function indicates that for a given observed velocity v_ LOS and given outflow velocities v_ in and v_ s, there is only a small subset of angles in the parameter space (θ, φ, θ_s, φ_s) that would result in this observed velocity. To calculate polarization, we use Stokes parameters, which have the advantage of being additive, unlike polarized intensity, which can cancel if we add up scattered beams with different polarization angles. To define Stokes parameters, we set the observer's projection of the x-axis on the plane of the sky to be astronomical North. The positive y-direction then becomes astronomical East and we can measure polarization position angles β in their standard way to be East of North, with β=0 if the electric field of the polarized light as seen by the observer is along the cone axis projected on the plane of the sky. If polarized intensity is P, then Stokes parameters are defined as Q=Pcos2β; U=Psin 2β. For most scattering processes, the polarization position angle of the electric field of the polarized light is perpendicular to the scattering plane (an interesting exception to this rule is discussed in Section <ref>). But as the propagation vector n_ s sweeps its allowed directions, the orientation of the scattering plane changes (Figure <ref>). Therefore, to determine β for every incidence of scattering, we need to measure the projection of n_ s onto the plane of the sky x'-y'. Applying a rotation transformation in the x-z plane, we find the projected components to be n'_x=sinθ_ obscosθ_ s-cosθ_ obssinθ_ ssinφ_ s; n'_y=sinθ_ scosθ_ s. Since Stokes parameters are additive, they can be summed up as they arise in different parts of the scattering region, so that the Stokes parameters of the line profiles are Q(v_ LOS)=∫sinθ dθ dφsinθ_s dθ_s dφ_s g(ψ) p(ψ)× (-(n'_x)^2-(n'_y)^2/(n'_x)^2+(n'_y)^2)δ(v_ LOS-v_ s(1-cosψ)+ v_ in· n_ s); U(v_ LOS)=∫sinθ dθ dφsinθ_s dθ_s dφ_s g(ψ) p(ψ)× (-2n'_xn'_y/(n'_x)^2+(n'_y)^2)δ(v_ LOS-v_ s(1-cosψ)+ v_ in· n_ s). Here p(ψ) is again the polarization fraction, g(ψ) is the same phase function of scattering (angular dependence of scattering) as the one used in the intensity profile, and the terms dependent on n'_x and n'_y are the same as the β terms in equations (<ref>). They account for the geometric dilution of polarization: if scattering occurs over a wide range of position angles of the incident photons as seen in the plane of the sky, even with high per-scattering polarization, the net polarization is lowered, until in the extreme case of centro-symmetric scattering the net polarization is zero. For dust scattering, expressions (<ref>) correctly take into account the unusual case of polarization sign reversal when p_ dust(ψ) is used a signed function. With our notation, Q is positive when polarization position angle is along the polar axis of the object. Therefore, for narrow cones where the scattering planes are close to x-z, net Q values should be negative because the dominant direction of scattering is perpendicular to the scattering plane and parallel to the y axis, with β=90. Non-zero U values reflect polarization orientation at 45 and 135 degrees to the cone axis. Because of the symmetry of our problem we expect U to be zero, so we show it here only as a check on the accuracy of our calculations. The model is numerically implemented in [<https://github.com/zakamska/polarized_outflows>]. The multi-dimensional integrals (<ref>) and (<ref>) are calculated using a Monte Carlo integrator adapted from [<https://pypi.org/project/mcint/>], with the Dirac delta function approximated by a Gaussian. To quickly explore the parameter space of our models, we use 10^4 Monte Carlo trials with the Gaussian dispersion set to 0.01-0.05 of the outflow velocity v_ s. Smaller widths result in noisier curves (and would therefore require more Monte Carlo steps to calculate the integral with a higher accuracy), but larger widths degrade the achievable velocity resolution. If higher quality profiles are desired, the width of the Gaussian may be decreased while the number of Monte Carlo trials is simultaneously increased. For the final plots presented in the paper we use 10^5 Monte Carlo trials and Gaussian dispersion of 0.01v_ s. For θ_ obs=90, i.e., the system seen edge-on, it is enough to compute the Stokes parameters from only one side (θ_ s≤ 90) because the other side contributes equal I, Q and U and therefore does not affect the calculated polarization kinematics. If the observer is at θ_ obs<90, then in principle the two sides of the outflow should be considered separately and then their Stokes parameters added together, but in practice we only take into account the approaching side on the assumption that the back side of the outflow is heavily obscured. Continuum polarization can be obtained from equations (<ref>) and (<ref>) by integrating over v_ LOS (eliminating the delta function). The incident emission is comprised both of lines and continuum, and their ratio is a free parameter of the model. We take advantage of the additive nature of the Stokes parameters to calculate the overall scattered intensity, polarization fraction and position angle for the line + continuum combination. § MODEL RESULTS AND COMPARISON WITH OBSERVATIONS In this Section, we explore a few example geometries using our model, explain some of the phenomena that arise in the resulting model profiles, and compare the model profiles to observations. In Section <ref> we investigate the case of polar outflows with electron scattering, and in Section <ref> equatorial and thick disk skin outflows with electron scattering. In Section <ref> we discuss similarities and differences between electron and dust scattering. §.§ Polar scattering outflow In most our setups, the nuclear line profile is created by a point-like source, in which the emitting gas moves radially with velocity v_ in and uniformly fills a cone with θ=0-θ_ max. None of the main qualitative results depends sensitively on this choice and in principle another model for the source can be implemented (e.g., one that includes a more realistic distribution of velocities) at the expense of more computational complexity since one would then have to integrate over the distribution of v_ in values as well. In our first model – polar scattering outflow observed edge-on – this emission is then scattered into the line of sight by the larger-scale wind moving within a range of θ_ s=0-θ_ max and with the same velocity v_ s=v_ in. We then envision a type 2 AGN observed edge-on, with θ_ obs=90. The nuclear line profile is obscured, so the spectrum that we see is entirely due to the scattered light, part of which is polarized. In Figure <ref> we present the results of calculations for θ_ max=60 and a phase function and polarization fraction appropriate for electron scattering. The line-of-sight velocity profile that would have been seen in the absence of obscuration is shown in the top panel. This profile can be calculated by analogy to equation (<ref>): I_ unobsc(v_ LOS)=∫sinθ dθ dφδ(v_ LOS+ v_ in· n_ obs) and for θ_ obs=90 can be calculated analytically: I_ unobsc(v_ LOS)=2arccos(v_ incosθ_ max/√(v_ in^2-v_ LOS^2))/v_ in for |v_ LOS|<v_ insinθ_ max and 0 otherwise. Here θ_ max≤ 90; the cases of θ_ max>90 or equatorial emitters can be reduced to the linear combinations of I_ unobsc. Since Stokes parameters are additive, we can add I, Q and U values for the scattered emission line and the scattered continuum to obtain the total observed spectrum (2nd panel) and its polarization fraction (3rd panel). Compared to the profile that would have been seen directly from the emitter, the scattered line profile (2nd panel) shows a redshifted tail. This part of the spectrum originates from the back part of the outflow which is redshifted away from both the observer and the emitter, combining the Doppler effects. In the bottom panel we show the Stokes intensities for the emission line alone, without the continuum. Due to the symmetry of the problem, the U Stokes intensity is supposed to be exactly zero in our model, and it is being displayed only as a check on the calculations and to demonstrate the accuracy of the numerical integration in equations (<ref>). The Q Stokes intensity is negative across the entire emission-line profile, and so is its velocity integral which is proportional to the continuum polarization. Therefore Q for the total line+continuum spectrum is negative everywhere, so that the polarization position angle – as expected – is orthogonal to the symmetry axis as seen in the plane of the sky (Figure <ref>a). In Figure <ref> we show the same polar scatterer, but now viewed along a line of sight within the outflow. The net level of polarization is now significantly lower than in the edge-on case: this is due both to the smaller polarization fraction p_ TR(ψ) for forward-scattering and to the partial geometric cancellation of polarization (in the extreme case of on-axis view the polarization is exactly zero). The profiles now display some of the interesting features we highlighted for SDSS J1652: the polarization fraction dips on the blue side of the line and the polarized profile is redshifted by comparison to the scattered profile by about 0.5v_ s. Both of these effects are due to the fact that the blue wing of the polarized line is formed by the most forward-scattering part of the outflow where p_ TR is nearly zero. Q values for the continuum and for the line are negative (meaning the polarization position angle is orthogonal to the projected axis of symmetry); this is also consistent with observations of SDSS J1652. However, the tantalizing `swing' of the polarization position angle does not appear in the polar model regardless of the observer's orientation. §.§ Equatorial scattering outflow Equatorial dusty winds lifted off the obscuring material surrounding the AGN have been proposed on the basis of many observations <cit.>, as well as by theoretical work <cit.>. In this section we consider the emitter to be expanding within a filled cone (polar emitter, as before), which determines the unobscured velocity profile, but the scatterer is now in an outflow confined between angles θ_ min and θ_ max. If θ_ max for this outflow is close to 90, then such outflow would be reasonably called `equatorial'. Another situation of astrophysical interest is that of a `disk skin' outflow confined between two angles significantly smaller than 90, it is relevant for outflows along the surface of geometrically thick disks. The model with an emitter expanding with 0<θ<30 and a scatterer expanding with 30<θ_ s<90 (Figure <ref>) is at first glance quite promising for explaining several observed features of SDSS J1652: (i) the redshifting of the peak of the polarized line profile relative to the peak of the scattered profile; (ii) a 90 swing of the polarization position angle; (iii) the net level of polarization within the line is lower on the blue side of the line and higher on the red side of the line. In this realization of the model, the swing of the polarization angle is due to the projected orientation of the scatterer as seen by the observer in different parts of the emission line. The blue side of the scattered line comes from the part of the outflow moving toward the observer. Given a sufficient thickness of the equatorial outflow, the gas producing the blueshifted part of the outflow may end up with an orientation along the axis of symmetry, so we expect negative Q for this part (Figure <ref>b). In contrast, the red side of the scattered line comes from the sides which have a large scattering volume and are moving away from the emitter resulting in redshifted scattered emission. We expect positive Q values for this part of the outflow, as well as for the continuum. The parameter space for the `swing' to occur in the edge-on view is somewhat limited: a thinner `skin' outflow with 30<θ_ s<50 behaves like a polar scatterer in Figure <ref> with Q<0 everywhere across the emission line, and a thinner equatorial outflow with 60<θ_ s<90 has Q>0. The key observable in SDSS J1652 which contradicts the model in Figure <ref> is the orientation of the net polarization: the position angle of the projected axis of SDSS J1652 is known from direct imaging and integral-field observations, and it is well-measured that the polarization of the continuum and of the red wing of the line are orthogonal to that axis (negative Q' values in Figure <ref>), whereas the model in Figure <ref> unsurprisingly predicts that the net polarization should be aligned with the projected axis (positive q_ c values and Q>0 on the red wing). Our last class of geometries shown in Figure <ref> is the `skin' equatorial outflow viewed close to or through the outflow. This model qualitatively matches all of the features of line polarization we highlighted in SDSS J1652 in Section <ref>. It reproduces the `swing' of the polarization position angle with the signs of Q in agreement with those seen in SDSS J1652 (going from positive on the blue side to negative on the red side and in the continuum). The projected geometry responsible for these orientations as viewed in the observer's plane is illustrated in Figure <ref>c. Mixing Q<0 continuum values with Q>0 blue wing values results in a low polarization fraction on the blue wing of the line, as observed. The polarized flux is redshifted by about v_ s relative to the scattered flux. All these qualitative features are retained within some range of assumed parameters (e.g., 50<θ_ s<70 and 20<θ_ s<30 models show all these features as well, as long as the observer is within the θ_ s range), with quantitative changes in the net level of polarization (lower for smaller θ_ s) and velocity structure (smaller velocity range for smaller θ_ s). Thus, the `skin' outflow models viewed within the outflow are our primary models of interest for SDSS J1652. §.§ Effects of the phase function and of the polarization function The phase function g_ dust(ψ) and the polarization fraction p_ dust(ψ) for dust are sensitive to the dust size distribution <cit.>. Qualitatively, dust is strongly forward-scattering, with cross-section for scattering being over an order of magnitude higher at ψ=0 than at 180. At a fixed dust size distribution, scattering cross-section decreases as the wavelength increases. For the Magellanic Clouds and for the Milky Way dust size distributions, the polarization curve is qualitatively similar to p_ TR(ψ) at wavelengths 6000Å and 1400Å, though peaking at 20-90% depending on the wavelength instead of at 100% for ψ=90 as is the case for Thomson scattering. At intermediate wavelengths, p_ dust(ψ) is very sensitive to the size distribution of the particles, it becomes dissimilar from p_ TR(ψ) and peaks at values <20%. Another distinct feature of dust scattering in <cit.> is that at large scattering angles ψ∼ 150 the polarization position angle can be in the scattering plane, resulting in p_ dust<0, which is in contrast to Rayleigh, Thomson and resonant scattering which always results in polarization perpendicular to the scattering plane. This polarization reversal has been studied in the context of back-scattered light from comets and debris disks, but usually with dust agglomerates ( and references therein). Nonetheless, the reversal is present in Mie theory when purely spherical dust grains are considered, due to interference effects that arise when the particle size is comparable to the wavelength of light <cit.>, so we explore whether this reversal can cause any qualitative changes in the observed spectropolarimetry of quasar outflows. We use the Small Magellanic Cloud (SMC) phase function g_ dust(ψ) and polarization fraction p_ dust(ψ) curves at 4685Å from <cit.>. While this is not the correct wavelength for our particular observation, we keep in mind that the dust size distribution in quasar outflows is unknown and can be quite distinct from those available in <cit.>. These particular curves were chosen because they have a relatively high polarization fraction peak at 25%, while simultaneously showing the polarization sign reversal described above. In Figure <ref>, we show the results for dust scattering by the outflow with the same geometry as our best `skin outflow' model shown in Figure <ref> with electron scattering. Our main feature of interest – the swing of the polarization angle – is still apparent, since it is largely due to the geometry of the projected scattering regions producing different parts of the emission lines (Figure <ref>c). The biggest issue with dust scattering models is that the net polarization is significantly lower than that in electron scattering models, to the point that they are in tension with observed values which reach 20% in SDSS J1652. This is especially true for the continuum: the scattered continuum is produced primarily by gas moving closest to the line of sight to the observer since the scattering efficiency is much higher for forward scattering, yet the polarization fraction is very small at these angles. We have tried a variety of plausible dust scattering curves and geometries, and in none of the combinations does the polarization fraction exceed 10%, and in most it is below 5%. Another related trend is that dust-scattered emission lines tend to be more blueshifted relative to the intrinsic profiles and narrower than electron-scattered ones because of the low efficiency of scattering from the parts of the outflow receding from the observer. The polarization fraction sign reversal for backward scattering can produce a swing in the polarization position angle. Indeed, for a polar emitter 0<θ<80 and equatorial scatterer 80<θ_ s<90 viewed edge-on (θ_ obs=90), the electron-scattering model predicts purely positive Q values across the entire line profile. In contrast, the dust-scattering model shows negative Q values at v_ LOS 2v_ s, i.e., on the reddest part of the line profile which is produced by the part of the outflow receding from the observer and scattering at obtuse angles. Therefore, in principle this peculiarity of the dust polarization curves can result in the swing of the polarization position angle across the emission lines. But the effect is quite weak because backward scattering by dust is inefficient compared to the forward scattering, so the back of the polar outflow is barely visible – and in practice would be even less so because of the intervening extinction. The polarization angle swing is not seen in the full line + continuum profile due to the addition of the positive Stokes parameters for the continuum. The polarization position angle of the continuum is matched on the blueshifted side of the velocity profile and swings on the redshifted part, whereas the opposite is observed in SDSS J1652, so this particular model for the swing is not a good match for SDSS J1652. Finally, the allowed range of geometries for this effect is very narrow: thicker equatorial outflows result in geometric polarization position angle swings as shown in Figure <ref> and Figure <ref>b. In conclusion, although the polarization position angle swing due to peculiarities of the dust polarization curve is a potentially interesting feature, it does not provide a plausible explanation for the observed polarization of the “swinging” objects due to the low scattering efficiency at the relevant obtuse scattering angles. Instead of relying on peculiarities of scattering and polarization curves to reproduce the swing, we need to rely on the geometry of the outflow. Furthermore, the low net polarization of the dust-scattered lines and continuum is in appreciable tension with the values observed in SDSS J1652. This feature of dust scattering is difficult to eliminate by adjusting the particle size distribution due to the forward-scattering nature of dust: it is the forward-scattered light that dominates the scattered signal, but the fractional polarization is at its lowest for these angles. § DISCUSSION §.§ Geometric unification of high accretion rate sources Polarization position angle swings <cit.> – appear quite common in type 1 quasars. In these sources, spectropolarimetry helps reveal the kinematic structures within the broad-line region: the scatterer appears to be equatorial and may be rotationally supported <cit.>. Here we present a different kind of models for polarization swings – models in which the scatterer is dominated by the outflow activity, without appreciable rotation. Thanks to the high levels of polarization and exquisite data, we can place stronger geometric constraint in the case of SDSS J1652 specifically. In particular, the polarization position angle within the emission line and the orientation of the large-scale scattered-light nebula (which directly gives us the orientation of the projected symmetry axis) can only be reconciled if the observer's line of sight is within the outflow and if the outflowing material is distributed within a relatively narrow range of polar angles (Figure <ref>). This leads us to propose a unification model of high accretion rate sources – such as Mrk 231 and SDSS J1652 – shown in Figure <ref>. Eddington ratios for these sources are not well known due to the difficulty of obtaining dynamical masses or applying standard scaling relationships to sources with strong outflows, but estimates range between 0.5-5 <cit.>, although at black hole masses that may differ by up to two orders of magnitude (from a few× 10^7M_⊙ in the case of Mrk 231 to a few× 10^9M_⊙ in ERQs). Multiple lines of evidence, including spectropolarimetry, suggest that both Mrk 231 and SDSS J1652 are viewed through outflowing material, but rest-frame optical spectra and the overal spectral energy distributions of ERQs are significantly different from those of Mrk 231: ERQs have higher infrared-to-optical ratios <cit.> and rarely show evidence for circumnuclear emission of FeII <cit.> characteristic of Mrk 231. This leads us to suggest that the line of sight in ERQs lies closer to the equatorial plane than it does in Mrk 231 (Figure <ref>), making them on average more dust-obscured. The column densities measured from X-ray observations in ERQs in general <cit.> and in SDSS J1652 specifically <cit.> are in the Compton-thick regime (N_H∼ 10^24 cm^-2), which at face value would correspond to a visual extinction of A_V=45 mag <cit.> and prevent us from seeing any UV/optical emission, but this column density can be reconciled with our UV/optical data and Figure <ref> if the emission-line region is co-spatial with the outflow <cit.> and is distributed on much larger spatial scales than the compact X-ray emitting region. Another intriguing and closely related population are hot dust-obscured galaxies (HotDOGs). These objects are selected to be near-infrared dropouts <cit.>, and as a result they have even more extreme infrared-to-optical ratios than ERQs at comparable bolometric luminosities <cit.> and at implied near- or super-Eddington accretion rates <cit.>. Despite the extremely high levels of obscuration suggested by these spectral energy distributions <cit.>, some rest-frame UV emission is detected well in excess of any expected direct emission from the nucleus. It has now been confirmed by direct imaging and polarimetric observations that some of the light escapes from the nuclear regions along the polar opening in the obscuring material, scatters off and reaches the observer <cit.>. The multi-wavelength properties of these sources are therefore well explained by a geometry similar to that suggested for ERQs, but with lines of sight that are closer to edge-on (Figure <ref>). <cit.> and <cit.> proposed that a model similar to that shown in Figure <ref> can explain some of the phenomenology of the broad absorption line (BAL) quasars, if the BAL features arise due to passage through the outflowing `skin' of the dusty torus. What is new in the data and observations presented here is that the same region can now be identified with the scattering region in SDSS J1652, and therefore the kinematic structure of the polarized emission lines can be used to probe the geometry and physical conditions of this region. It has long been known from analytical calculations that high accretion rates onto black holes are associated with geometrically thick disks (also known as `slim' disks, ). This is now firmly established by many numerical simulations which take into account the metric near the black hole and the dynamical effects of radiation <cit.>. These models explicitly predict `skin' outflows with velocities ∼ 0.1c <cit.>. The high quality of spectropolarimetric data in SDSS J1652 allows us to estimate the opening angle of the torus (∼ 20-30) and the outflow velocity in the `skin' of the torus: in the data, the polarized peak is offset from the overall intensity peak by 450-850 km s^-1, whereas in the models this offset is ∼ v_ s. This angles agree well with the estimate of <cit.> who constrain the line of sight in Mrk 231 to be within ∼ 10-26 of the polar axis. If the opening angle were much higher, then we cannot obtain an agreement between the position angle of polarization on the red wing of the line and that inferred from the orientation of the large-scale scattered light nebula <cit.>. If the opening angle is smaller, then the geometry changes from equatorial to polar scattering with a relatively round projected scatterer and it becomes hard to produce a pronounced polarization angle swing or high levels of polarization. These estimates can now be compared from the results of theoretical models for near- and super-Eddington accretion – although only in a qualitative sense, because the existing simulations tend to be focused on hyper-Eddington accretion. Nonetheless, the small opening angle of the torus, the low-density polar region and a higher density `skin' outflow clearly inferred from our data for SDSS J1652 are in excellent qualitative agreement with theoretical expectations for the geometry of super-Eddington accretion. Our inferred outflow velocities are slower than those in simulations by about an order of magnitude likely because we are observing the outflow at distances of several tens of pc, as opposed to to the outflows seen in numerical simulations on scales of several tens of R_g, or ∼ 0.01 pc. The outflow decelerates both due to its ballistic motion out of the black hole potential well and due to the entrainment of extra mass. §.§ Physical conditions in the scattering region We have presented electron scattering models as our primary models for comparison with observations. In Sec. <ref>, we ruled out resonant scattering as the dominant scattering mechanism on the basis of the high polarization seen in the continuum and in emission lines which cannot resonantly scatter. This is somewhat surprising, given the high efficiency of the resonant scattering <cit.> and the likely presence of the relevant ions in the scattering region. A possible explanation is geometric self-shielding: if the scattering medium is clumpy, with partially ionized clouds producing UV emission lines, then both line production and resonant scattering may be happening on the sides of the clouds facing the nucleus and not the observer. We further find in Sec. <ref> that the high levels of polarization seen in SDSS J1652 are in some tension with dust scattering and therefore electron scattering is preferred. Again, it is somewhat surprising: even in a fully ionized medium (as long as the conditions are not too harsh to destroy the dust), the efficiency of dust scattering is two orders of magnitude higher than the efficiency of Thomson scattering <cit.>. It is possible that the polarization efficiency of the dust models can be increased somewhat by adjusting the size distribution, but any model with dominant forward-scattering will yield relatively low net polarization. We therefore discuss what kinds of physical conditions in the scattering outflow may make electron scattering possible. In observations of SDSS J1652, the fractional polarization of emission lines maxes out at the same level (∼ 20%) as the continuum polarization; this equality is achieved on the red wings of the lines. The net polarization of the blue wings of the lines is lower, likely due to the mixing of the continuum and line scattering with opposite Stokes values, as discussed in Sec. <ref>. In contrast, in electron-scattering models the peak fractional polarization is significantly higher in the lines than in the continuum, reaching 40% in the best geometric model in Figure <ref>. One possible real-world complication which would suppress the line polarization is the finite size of the emission region. In our models we assumed a point-like emission source which allows a major simplification of the calculations in that it is only necessary to integrate over the solid angles and not distances (Sec. <ref>). In practice, as described in Sec. <ref>, the outflowing skin of the torus is likely acting as both the emitter and the scatterer, so that the size of the scattering region is not that much larger than that of the emission region. Under these conditions, a well-known geometric cancellation lowers the average polarization level of the emission lines <cit.>. The models of SDSS J1652 in which the observer's line of sight is within the forward-scattering outflowing material naturally explain the net blueshift of the UV emission lines (Figure <ref>, top) compared to their wavelengths expected from the redshift determined by the optical forbidden emission lines in Gemini <cit.> and JWST <cit.> data. The fact the polarized line flux is then redshifted by comparison with the total line flux is an indication that the scatterer is outflowing relative to the emitter. Thus our best-fitting models include only the hemisphere pointed toward the observer, which means we have implicitly assumed that the second hemisphere is highly obscured by circumnuclear dust. This is likely a safe assumption: even on much larger scales affected only by the obscuration of the host galaxy, the scattering and ionization counter-cone is much fainter than the cone directed toward the observer <cit.>. But in principle the presence of the backward-pointing hemisphere can be incorporated into the models by computing Stokes parameters for θ, θ_ s<90 and θ_ obs>90 models and adding them to the corresponding θ_ obs<90 calculation. One can even include an adjustable extinction factor to mimic the partial obscuration of the back-facing flow. Based on their extremely high luminosity L_ bol=10^47-10^48 erg/sec, we suspect that extremely red quasars, of which SDSS J1652 is an example, are near-Eddington objects. The necessary mass accretion rate to produce an Eddington luminosity at radiative efficiency ε would be Ṁ_ Edd=4π GM_ BHm_p/ε c σ_T= 22M_⊙/ year×(M_ BH/10^9M_⊙)(ε/0.1)^-1. In near-Eddington or super-Eddington sources we expect that the total kinetic energy of the outflow may constitute a significant fraction of the quasar's bolometric output and that that a non-negligible fraction of the mass fails to accrete and is ejected, with typical velocities ∼ 0.1c <cit.>. Furthermore, the radiative efficiency of near-Eddington flows could be ≪ 0.1. Therefore, eq. (<ref>) likely provides a lower limit on the mass accretion rate. We now compare this accretion rate with the minimal mass ejection rate necessary to explain spectropolarimetric observations. We estimate the scattering efficiency – the fraction of the photons originating near the nucleus that are then scatterer by the outflowing scattering material – at f_ scat=0.01 based on the ratio of the UV flux observed in ERQs to that in an unobscured quasar of similar luminosity <cit.>. This efficiency is proportional to the number density of particles and to the solid angle spanned by of the scatterer ΔΩ (eq. 2 in ), but so is the outflowing mass Ṁ_ out=ΔΩ n_H m_p r_0^2 v_ s/X, where X=0.7 is the hydrogen fraction by mass, m_p is the proton mass and r_0 is the typical distance of the outflowing material from the source. Refurbishing this as a function of scattering efficiency, we find Ṁ_ out=8π m_p r_0 v_ s f_ scat/(1+X)σ_ Tg_ TR(ψ). Here we explicitly assumed that scattering is by electrons and are using Thomson scattering cross-section σ_ T. For dust scattering, the combination σ_ Tg_ TR(1+X)/(8π X) would need to be replaced by dC_ scat/ dΩ(ψ), the differential cross-section for dust scattering per hydrogen atom at the dominant scattering angle ψ. In <cit.>, we argued that r_0 should be around 10-30 pc based on the typical lengthscales of obscuration and dust sublimation. The velocity of the scatterer v_ s is ∼ 800 km s^-1 from comparing the results of our models with the observed offset between the peaks of the scattered and polarized emission; it is also just above the escape velocity from a 10^9M_⊙ black hole at 30 pc. With these values at ψ=30, we obtain Ṁ_ out=34M_⊙/ year(f_ scat/0.01)(r_0/30 pc)(v_ s/800^-1). This outflow rate is comparable to that expected near the black hole, where momentum conservation dictates Ṁ_ wind0.1c≃ L/c <cit.> and therefore the outflowing mass is comparable to the Eddington rate. In practice the outflow at 10-30 pc scales could be expected to be much more massive than near the nucleus due to entrainment of material and additional acceleration of winds by radiation pressure on the dust. We also expect these intermediate scale outflows we are probing with polarimetry to be clumpy, with the dense clouds dominating the mass budget but not necessarily dominating the scattering. The velocity dispersion σ_v of scattered lines ∼ 1000 km s^-1 constrains the temperature of the scattering electrons to be m_e σ_v^2/(2 k_B)≃ 3× 10^4 K <cit.>, which is unexpectedly cool for volume-filling, dust-free gas. If the scattering is due to dust after all, then depending on the size distribution of particles, the required mass is at least an order of magnitude smaller than that given by eq. (<ref>). § CONCLUSIONS In this paper we present spectropolarimetric observations for an extremely red quasar SDSS J1652 at z=3. The object is likely a near- or super-Eddington source, and it has a known powerful galaxy-wide quasar-driven ionized gas wind <cit.>. It is highly polarized in the rest-frame UV (∼ 20%) and shows several intriguing features in the kinematic distribution of polarization within UV emission line. In particular, the polarization polarization angle changes dramatically within the lines, `swinging' from being aligned with the projected axis on the outflow on the blue side to a position angle that's perpendicular to the outflow axis on the red side and in the continuum. As near-Eddington sources in general and SDSS J1652 in particular are known for their outflow activity on a variety of spatial scales, we develop a theoretical model of scattering which can explain the salient features of our observations with radial motions alone. We discuss key features of the observed polarized line profiles and their relationship with outflow geometry. Unsurprisingly, a polar outflow produces polarization with position angle perpendicular to the projected axis of the outflow, and this is not affected by the orientation of the observer. The velocity offset between the total scattered profile and the polarized profile can be used to probe the kinematics of the scatterer. In contrast, an equatorial outflow or a `torus skin' outflow can produce polarization either parallel or perpendicular to the projected axis, depending on the geometry of the outflow and orientation of the observer. The key spectropolarimetric properties of SDSS J1652 – the polarization position angle swing, the geometric relationship between the position angles of the UV polarization and the large-scale illuminated nebula and the kinematic offset between the polarized and the scattered line – are well explained by a `skin outflow' model for the scatterer, with the observer's line of sight within the outflow. The typical inner opening angle of the outflow suggested by the models is 20-30, which is comparable to constraints for another super-Eddington source Mrk 231 based on other types of data <cit.>. Skin winds from thick disks are in qualitative agreement with models of near- and super-Eddington accretion, although the observations probe them on much larger spatial scales (a few to a few tens of pc) than those that are accessible to numerical simulations (0.01 pc). The scattering mechanism remains difficult to pin down. Line resonant scattering does not appear to dominate as it cannot explain the high polarization of the continuum and of one particular line blend. Electron scattering is consistent with high observed values of polarization, but requires a relatively large scattering mass and dust-free scattering material at temperatures ≪ 10^5K (otherwise the lines would be thermally broadened). Dust scattering requires only modest mass, but it is in tension with observed high values of polarization. The exact values depend on the particle size distribution, which is unknown, but any model of dust which results in strong forward scattering is likely to also result in low polarization, so it may be difficult to reach ∼ 20% polarization values in an outflow pointed directly toward the observer by adjusting the dust size distribution. Because most scattering results in polarization perpendicular to the scattering event plane, the qualitative kinematic features of the geometric model are independent of the exact scattering mechanism. § ACKNOWLEDGEMENTS NLZ is grateful to R. Antonucci, B.T. Draine, J.E. Greene, J.F. Hennawi, J.H. Krolik, J.M. Stone and R.A. Sunyaev for useful discussions, to the Institute for Advanced Study, Princeton, NJ for hospitality during the sabbatical when much of this work was done. The authors are grateful to S.Veilleux for permission to adapt his Mrk 231 cartoon. Support for this work was provided in part by the National Aeronautics and Space Administration (NASA) through Chandra Award Number GO6-17100X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. RMA was supported in part by NASA Jet Propulsion Laboratory subcontract 1520456 associated with the NASA Keck time allocation. NLZ was supported by the Catalyst Award of the Johns Hopkins University and by the Deborah Lunder and Alan Ezekowitz Founders' Circle Membership at the Institute for Advanced Study. The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and NASA. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. § DATA AVAILABILITY The raw data for SDSS J1652 used in this article is publicly available on the Keck archive. The numerical calculations and example models are available at <https://github.com/zakamska/polarized_outflows>. mnras
http://arxiv.org/abs/2306.01890v1
20230602195148
Kernel Metric Learning for Clustering Mixed-type Data
[ "Jesse S. Ghashti", "John R. J. Thompson" ]
cs.LG
[ "cs.LG", "stat.CO", "stat.ME", "stat.OT", "62G07, 65D10", "I.5.3; G.3; I.6.6" ]
DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System Abanob Soliman^0000-0003-4956-8580, Fabien Bonardi^0000-0002-3555-7306, Désiré Sidibé^0000-0002-5843-7139, and Samia Bouchafa^0000-0002-2860-8128 All authors are with Université Paris-Saclay, Univ Evry, IBISC Laboratory, 91020, Evry, France. Correspondence: [email protected] July 31, 2023 ========================================================================================================================================================================================================================================================================================================== § ABSTRACT Distance-based clustering and classification are widely used in various fields to group mixed numeric and categorical data. A predefined distance measurement is used to cluster data points based on their dissimilarity. While there exist numerous distance-based measures for data with pure numerical attributes and several ordered and unordered categorical metrics, an optimal distance for mixed-type data is an open problem. Many metrics convert numerical attributes to categorical ones or vice versa. They handle the data points as a single attribute type or calculate a distance between each attribute separately and add them up. We propose a metric that uses mixed kernels to measure dissimilarity, with cross-validated optimal kernel bandwidths. Our approach improves clustering accuracy when utilized for existing distance-based clustering algorithms on simulated and real-world datasets containing pure continuous, categorical, and mixed-type data. Keywords: Mixed-type data, metric learning, clustering, smoothing, kernel distance, similarity measure § INTRODUCTION Datasets comprising continuous, ordered, and unordered categorical data are known as mixed-type data and are prevalent across various disciplines, and the availability of such heterogeneous data types continues to increase. Although several approaches have been employed to calculate the distance for mixed-type data points, there is no broadly accepted definition. The challenge of quantifying distance is balancing the contributions of each variable–particularly between discrete and continuous–to the overall difference between data entries. In this paper, we develop a data-driven distance method that estimates the importance of discrete and continuous variables to the difference between entries. Many existing distances homogenize mixed-type data to single-type by projecting all data to either discrete or continuous, through methods such as discretization or dummy coding before calculating distance (see, for example, Guha et al., 2000; Dougherty et al., 1995). While these distances are computationally efficient and well-known, they can inaccurately calculate the meaningful differences between data points and overweight variables in continuous or discrete domains. This overweighting can severely affect the accuracy of any methodology that requires distances through a significant loss of information on the homogenized data types. Clustering is a fundamental technique in data analysis that involves grouping similar data points together based on distance or similarity. When clustering mixed-type data, choosing an appropriate distance metric that can handle the heterogeneity of the data types and scales is crucial. The metric should be able to capture the unique characteristics of each data type and accurately reflect the distances between data points meaningfully. The choice of metric can have a significant impact on the accuracy, reliability, and interpretability of clustering results; thus, it is essential to carefully consider the metric used in clustering mixed-type data and to evaluate its performance using the standard clustering metrics of clustering accuracy (CA) and Adjusted Rand Index (ARI) (Hubert & Arabie, 1985). In this paper, we propose a novel kernel distance for mixed-type data, that is an effective way to handle mixed-type data in clustering applications. We estimate the distribution of the data using kernel density estimation with optimal bandwidth selection and then calculate the kernel distance between each data point. The advantage of this method is that the importance of each variable to the difference between is determined through data-driven cross-validation when selecting the bandwidth. We apply kernel distance to agglomerative hierarchical clustering and demonstrate the utility of our Kernel distance metric (KDSUM) through agglomerative hierarchical clustering, using both simulated and real-world datasets. We find that using a kernel distance almost unilaterally improves clustering performance compared to other common mixed-type distances, such as Gower's distance. A kernel distance provides researchers and practitioners with a unified, robust, effective, and efficient distance for mixed-type data, aiding in informed decision-making from the more accurately characterized clusters. The paper is structured as follows: Section <ref> discusses existing homogenized and non-homogenized approaches for mixed-type distances. Section <ref> outlines the methodology for the KDSUM metric, and Section <ref> describes the simulated and real-life data. In Section <ref>, we present the results and discussion of the clustering process based on the KDSUM kernel distance applied to agglomerative hierarchical clustering. Finally, Section <ref> offers a conclusion and insights for future work. § LITERATURE REVIEW Consider a n × p mixed-type data matrix X consisting of n observations with p many variables that are a combination of continuous, unordered and ordered categorical variables. Assume that the p variables are arranged such that the first continuous variables p_c are first, followed by unordered categorical variables p_u, and then the ordered categorical variables p_o such that p = p_c+p_u+p_o. §.§ Mixed Distances Typical methodologies for calculating mixed-type distance require the user to homogenize data to purely numerical or categorical type, and then calculate differences. Discretization and dummy-coding are two common methods of data homogenization in statistical analysis. Discretization is a process of converting continuous variables into discrete categories or intervals such as binning–dividing the range of a continuous variable into intervals and assigning each observation to the corresponding interval (Dougherty et al., 1995). For example, a person's age can be binned into discrete ordered categories such as "0-18", "19-30", "31-50", and so on. Let 𝐱_i and 𝐱_j be two observations with mixed-type variables. To calculate their distance, we first homogenize the data using discretization or dummy-coding. For discretizating the k^ continuous variable, we divide 𝐱_i and 𝐱_j into c_k ordered categories 𝒵_1, 𝒵_2, …, 𝒵_c_k, and replace each value x_i,k and x_j,k with their corresponding category label vectors 𝐳_i,k and 𝐳_j,k of length c_k. The elements of 𝐳_i,k are 1 if x_i,k falls within the corresponding interval or 0 if it does not. The distance d(𝐱_i, 𝐱_j) can then be calculated using any distance metric designed for categorical data. Discretization is often useful in cases where the data is highly skewed. However, it leads to a loss of information, and choosing an optimal interval width can be challenging and may affect the analysis results. Dummy-coding involves representing categorical variables as binary (0 or 1) variables. Let 𝐱_i and 𝐱_j be two data points with categorical variables. For dummy-coding, we create a separate binary variable for each category of each categorical variable. Let 𝒵_h = {z_h,1, z_h,2, …, z_h,k} be the set of categories for categorical variable h, where k is the number of categories. For each category z_h,m∈𝒵_h, create a binary variable x_i,h,m and x_j,h,m, where the binary variable assumes value 1 if 𝐱_i and 𝐱_j are both in category c_h,m, and 0 otherwise. The distance d(𝐱_i, 𝐱_j) between the dummy-coded data points is then calculated using any binary distance metric (for more information, see, e.g., Choi et al., 2010). Dummy-coding has the advantage of preserving all the information in the categorical variable and ease of interpretation. However, such an approach can dramatically increase the dimensions of the feature space. In addition to discretization, data needs to be scaled, and the choice of scaling also affects clustering performance. Hennig et al. (2015) noted that distance-based clustering methods are not invariant to affine transformations, and Foss et al. (2016) showed that the choice of scaling can affect clustering performance. Foss et al. (2019) illustrated the inadequacy of dummy coding, noting that the expectation of the interval scale variable is always greater than 1, while the expectation from the categorical is always less than 1. This means that the choice of coding can lead to different interpretations of the data and may affect the analysis results. Various mixed distance metrics do not require the homogenization or scaling of the data. The quadratic distance proposed by Lindsay et al. (2008) extends the chi-squared measures of distance between two distributions and requires the choice of a nonnegative definite kernel. De Leon & Carriere (2005) use a general mixed-data model to define the distance between two populations of mixed unordered categorical, ordered categorical, and interval scale data. Krzanowski (1983) proposes a distance based on Matusita's distance, as mixtures of these location models and generalizations are not identifiable without further conditions on some of the parameters. Recently, van de Welden et al., (2023) introduced a framework that allows for an implementation of distances between observations that can be extended to many existing distances for categorical variables. For mixed distances, consider that the rows of X are observation vectors 𝐱_j, and the dissimilarity or distance between any two observations 𝐱_i and 𝐱_j is denoted d(𝐱_i,𝐱_j), whereas the similarity between the observations is denoted s(𝐱_i,𝐱_j). For any arbitrary variable l, denote w_i,j,l = 0 if variable l has missing data, otherwise w_i,j,l = 1. Denote δ_i,j,l≡δ_c(x_i,l - x_j,l) for categorical variables, where δ_i,j,l = 1 if the two observations for the lth variables are the same, and 0 otherwise. Modha and Spangler (2003) propose a method similar to k-prototypes that includes estimating a suitable weight that scales the relative contribution of the interval and categorical variables. However, the brute-force search to cluster repeatedly for a range of values for the weight that minimizes its objective function is computationally exhaustive. The metrics in Table <ref> will be used as benchmarks for the analysis of metrics herein. Gower's distance (Gower, 1971) is a common hybrid distance function that calculates the distance between two vectors of the same length. It uses a weighted combination of interval and categorical distances, where the categorical distance is based on whether the categories match or not, and the interval distance is scaled based on the range of the variable. The user-specified weights for each variable may lead to intractable solutions and varying results based on the data. k-prototypes (Huang, 1998) is another hybrid distance technique that uses a similar approach to Gower's distance, except the squared Euclidean distance is used for the interval scale variables. Unlike Gower's distance, it does not require variable-specific weights, but rather a single weight used for the entire categorical contribution of the distance function. The Podani distance metric (1999) extends Gower's general coefficient of similarity to ordinal variables, while the Wishart (2003) metric is similar to the Podani metric, except it makes use of the sample standard deviation for continuous variables, rather than the range of the continuous variables. The latter three distances can be calculated in R using the kmed package (Budiaji, 2022), while Gower's distance is calculated using the daisy function in the package cluster (Maechler et al., 2022) §.§ Kernel Density Estimation and Bandwidth Selection Procedures Kernel functions are weighting functions that can be used to map data points from a high-dimensional sample space to a low-dimensional space. This paper uses kernel functions to calculate the similarities between observations within a mixed-type dataset. The kernel function can be used to define a similarity function between data points, and can be used to define a distance metric for various data types. We denote ℒ: ℝ^p ×ℝ^p →ℝ as an arbitrary similarity function with the following two properties: for any observation x_i, ℒ(x_i,x_i) = 1, and as the difference between two observations x_i and x_j increases, ℒ(x_i,x_j) decreases. The similarity ℒ can be cast as any symmetric kernel function satisfying these properties. We denote the kernel functions specific to datatypes as K, L, and ℓ for continuous, unordered and ordered categorical variables, respectively. For each kernel, we denote bandwidths associated with the kernel functions as λ≡{λ^c, λ^u, λ^o} where λ^c ≡{λ_i}_i=1^p_c, λ^u ≡{λ_i}_i=p_c+1^p_c+p_u, and λ^o ≡{λ_i}_i=p_c+p_u+1^p. Common kernel functions used in the smoothing literature are in Table <ref>. However, as shown, fewer unordered and categorical kernel functions are typically used. The Aitchison & Aitken kernel is commonly used for unordered categorical data, while the Wang & Van Ryzin kernel is often used for ordered data. In this paper, we select a Gaussian kernel for continuous variables, an Aitken kernel for unordered categorical variables, and a Wang & Van Ryzin kernel for ordered categorical. A mixed-type joint kernel function between a random vector x_j ≡{x_j^c,x_j^u,x_j^o} and an arbitrary point x_i,j is written as ℒ_λ(x_i,x_i,j)= ∏_k=1^p_c1/λ_kK(x_k^c-x_i,k^c/λ_k)∏_k=1^p_uL(x_i,k^u,x_k^u,λ^u_k)∏_k=1^p_oℓ(x_i,k^o,x_k^o,λ_k^o). Optimal bandwidth selection methods are designed to preserve estimator convergence while having several other desirable properties, including smoothing out irrelevant variables (Loader, 1999). There is a wide range of methods for optimal bandwidth selection, including Akaike Information Criterion (Hurvich et al., 1998), Least Squares Cross-Validation (e.g., Sain et al., 1994), Rule of Thumb (Silverman, 1986), and Maximum-Likelihood Cross-Validation (MLCV) (e.g., Hall 1981). In this paper, we use MLCV through the R package np for our bandwidth selection criterion (Hayfield & Racine, 2008), where the MLCV objective function to be minimized is CV(λ) = ∑_i=1^nln(1/(n-1)∑_j=1, j i^nℒ_λ(x_i,k,x_j,k)) = ∑_i=1^nln(ℒ̂_-i(x_i)), where ℒ̂_-i(x_i) is the leave-one-out estimator of ℒ_λ(·) in Equation (<ref>). § MIXED KERNEL DISTANCES This concept of distance is fundamental in many fields, including machine learning, data mining, and computer vision. Consider the set of observations X. A real-valued function ℒ(𝐱_1, 𝐱_2) on the Cartesian product X × X is a similarity function if, for any points 𝐱_1, 𝐱_2, 𝐱_3 ∈ X, it satisfies four conditions (Chen et al., 2009): (S1) Symmetry: ℒ(𝐱_1,𝐱_2) = ℒ(𝐱_2,𝐱_1), (S2) Indiscernible: ℒ(𝐱_1,𝐱_2) = ℒ(𝐱_1,𝐱_1) = ℒ(𝐱_2,𝐱_2) 𝐱_1 = 𝐱_2, (S3) Nonnegative self-similarity: ℒ(𝐱_1,𝐱_2) ≥ℒ(𝐱_1,𝐱_1) ≥ 0, (S4) Similarity triangle inequality: ℒ(𝐱_1,𝐱_2)+ℒ(𝐱_2,𝐱_3) ≤ℒ(𝐱_1,𝐱_3) + ℒ(𝐱_2,𝐱_2). A metric for mixed-type data can be constructed using kernel functions based on the underlying density of the individual variables. To transform kernel similarities into distances, we extend the metric described in Phillips and Venkatasubramanian (2011) to the multivariate setting, which uses a well-defined kernel function to measure similarity between points 𝐱_1 and 𝐱_2. The distance between 𝐱_1 and 𝐱_2 is then defined as d(𝐱_1, 𝐱_2) = ℒ(𝐱_1, 𝐱_1) + ℒ(𝐱_2, 𝐱_2) - ℒ(𝐱_1, 𝐱_2) - ℒ(𝐱_2, 𝐱_1). If a symmetric kernel function is elected, the formula reduces to d(𝐱_1, 𝐱_2) = ℒ(𝐱_1, 𝐱_1) + ℒ(𝐱_2, 𝐱_2) - 2ℒ(𝐱_1, 𝐱_2). Equation (<ref>) represents the difference between the self-similarities of the two points and their cross-similarity. The multiplicative factor of two ensures that the distance between an object and itself equals zero and satisfies the identity of indiscernibles. Theorem # 1: (<ref>) is a well-defined distance metric, i.e, it satisfies the following properties (Chen et al., 2009): Proof: Nonnegativity (d(x_1,x_2) ≥ 0): note by (S3) that ℒ(𝐱_1,𝐱_1) - ℒ(𝐱_1,𝐱_2) ≥ 0 and ℒ(𝐱_2,𝐱_2) - ℒ(𝐱_2,𝐱_1) ≥ 0. Adding yields ℒ(𝐱_1,𝐱_1) + ℒ(𝐱_2,𝐱_2) - ℒ(𝐱_1,𝐱_2) - ℒ(𝐱_2,𝐱_1) ≥ 0 and thus, d(𝐱_1,𝐱_2) ≥ 0. Symmetry (d(x_1,x_2) = d(x_2,x_1)): note d(𝐱_1,𝐱_2) = ℒ(𝐱_1,𝐱_1) + ℒ(𝐱_2,𝐱_2) - ℒ(𝐱_1,𝐱_2) - ℒ(𝐱_2,𝐱_1) = ℒ(𝐱_2,𝐱_2) + ℒ(𝐱_1,𝐱_1) - ℒ(𝐱_2,𝐱_1) - ℒ(𝐱_1,𝐱_2) = d(𝐱_2,𝐱_1) Identity of indescernibles (d(x_1, x_2) = 0 x_1=x_2): suppose that d(𝐱_1,𝐱_2) = 0, thus ℒ(𝐱_1,𝐱_1) + ℒ(𝐱_2,𝐱_2) - ℒ(𝐱_1,𝐱_2) - ℒ(𝐱_2,𝐱_1) = 0, implying ℒ(𝐱_1,𝐱_1) + ℒ(𝐱_2,𝐱_2) = ℒ(𝐱_1,𝐱_2) + ℒ(𝐱_2,𝐱_1) which is true if and only if 𝐱_1 = 𝐱_2 or 𝐱_2=𝐱_1 by (S5). Conversely, suppose 𝐱_2=𝐱_1, then d(𝐱_1,𝐱_1) = ℒ(𝐱_1,𝐱_1) + ℒ(𝐱_1,𝐱_1) - ℒ(𝐱_1,𝐱_1) - ℒ(𝐱_1,𝐱_1) = 2ℒ(𝐱_1,𝐱_1) - 2ℒ(𝐱_1,𝐱_1) = 0 Triangle inequality (d(x_1, x_3) ≤ d(x_1, x_2) + d(x_2, x_3)): note by (S4) that ℒ(𝐱_1,𝐱_2) + ℒ(𝐱_2,𝐱_3) ≤ℒ(𝐱_1,𝐱_3) + ℒ(𝐱_2,𝐱_2), and ℒ(𝐱_3,𝐱_2) + ℒ(𝐱_2,𝐱_1) ≤ℒ(𝐱_3,𝐱_1) + ℒ(𝐱_2,𝐱_2). Then, d(𝐱_1,𝐱_3) = ℒ(𝐱_1,𝐱_1) + ℒ(𝐱_3,𝐱_3) - ℒ(𝐱_1,𝐱_3) - ℒ(𝐱_3,𝐱_1) ≤ℒ(𝐱_1,𝐱_1) + ℒ(𝐱_3,𝐱_3) - ℒ(𝐱_1,𝐱_2) - ℒ(𝐱_2,𝐱_3) + ℒ(𝐱_2,𝐱_2) - ℒ(𝐱_2,𝐱_1) - ℒ(𝐱_3,𝐱_2) + ℒ(𝐱_2,𝐱_2) = d(𝐱_1,𝐱_2) + d(𝐱_2,𝐱_3) □ §.§ KDSUM: Dissimilarity metric for Mixed-Type Data The pairwise similarity between two observations x_i and x_j is ψ(x_i, x_j|λ) = ∏_k=1^p_c1/λ_k K( x_i,k - x_j,k/λ_k) + ∑_k=p_c+1^p_u L(x_i,k,x_j,k,λ_k) + ∑_k = p_c+p_u+1^p ℓ(x_i,k,x_j,k,λ_k). By the definition of the kernel functions in Section <ref>, ψ(·) satisfies the similarity properties (S1)-(S4) and is a similarity function, thus we can use ℒ(·):=ψ(·). Combining the similarity properties (S1)-(S4), and adapting the kernel distance described by Phillips and Venkatasubramanian (2011) to the multivariate setting, we define the distance between any two data points x_i, x_j of the dataset X as d(x_i,x_j|λ) = ψ(x_i,x_i|λ) + ψ(x_j,x_j|λ) - 2ψ(x_i,x_j|λ). Consider the following simulated mixed-type data matrix: X = cccc p_c_1 p_u_1 p_o_1 c(ccc)x_1 1.5 1 3 x_2 1.5 1 3 x_3 1.5 0 0 x_4 0 1 0 x_5 0 0 3 , where p_c_1=p_u_1=p_o_1=1. The distance between vectors 𝐱_1 𝐱_2 should be assigned 0, while 𝐱_3, 𝐱_4, and 𝐱_5 each contain one variable value in common with both 𝐱_1 and 𝐱_2 but the rest of the observations are 0, and the variables in common between 𝐱_3, 𝐱_4, and 𝐱_5 and 𝐱_1 and 𝐱_2 are different for each vector. We begin by testing λ_1 = [0.01, 0, 0] in anticipation that the distances approach their upper bounds. We also test λ_2 = [10, 1, 1] in anticipation that the distances will be minimal as the bandwidths reach their upper bounds. Case 1 and 2 below confirm this. Finally, we test λ chosen using maximum likelihood-cross validation with the given kernel functions, which yields λ_3 = [1.29, 1.000, 1.83×10^-8]. We observe that the variable p_u_1 has nearly reached its upper bounds and will contribute little to the overall distance based on the data, while p_c_1 and p_o_1 will contribute more heavily. The results are shown in case 3. case 1: d(X | λ_1) cccccc x_1 x_2 x_3 x_4 x_5 c(ccccc)x_1 0 0 4.000 81.788 81.788 x_2 0 4.000 81.788 81.788 x_3 0 81.788 81.788 x_4 0 4.000 x_5 0 case 2: d(X | λ_2) cccccc x_1 x_2 x_3 x_4 x_5 c(ccccc)x_1 0 0 0 0.001 0.001 x_2 0 0 0.001 0.001 x_3 0 0.001 0.001 x_4 0 0 x_5 0 case 3: d(X | λ_3) cccccc x_1 x_2 x_3 x_4 x_5 c(ccccc)x_1 0 0 2.000 2.306 0.306 x_2 0 2.000 2.306 0.306 x_3 0 0.306 2.306 x_4 0 2.000 x_5 0 §.§ KDSUM Algorithm The metric KDSUM in Equation (<ref>) requires a user-defined kernel distance metric for each variable type within a given dataset. § STUDY DESCRIPTION To evaluate the performance of the KDSUM metric in comparison to established metrics for mixed-type data distance-based clustering, we analyzed simulated and real datasets of continuous, categorical, and mixed-type attributes using agglomerative hierarchical clustering techniques. We establish the performance of the KDSUM metric relative to existing metrics for handling mixed-type data and to demonstrate the potential of the KDSUM metric to enhance clustering accuracy. By comparing the KDSUM metric to these advanced clustering techniques, we demonstrate the flexibility of the KDSUM metric for usage in distance-based clustering of various mixed datasets. §.§ Clustering algorithms Mixed-type approaches offer a solution to the challenge of clustering datasets that contain both continuous and categorical variables. One approach involves selecting a distance metric that can handle both types of variables, and then clustering the data using methods that depend on the distance function. §.§.§ Agglomerative Hierarchical Clustering A kernel distance metric can be utilized in any clustering algorithm that accepts a dissimilarity metric. Additionally, this metric can be adapted to centroid, medoid, or prototype-based methods. To test the KDSUM metric for clustering, we follow previous literature that uses agglomerative hierarchical clustering algorithms designed to cluster based on dissimilarity metrics (see, for example, Day & Edelsbrunner, 1984; Murtagh & Contreras, 2012; Bouguettaya et al., 2015; Sasirekha & Baby, 2013; Nielson, 2016). Single-linkage calculates the distance between two clusters as the shortest distance between any two points in the two clusters. Similarly, Complete-linkage (e.g., Macnaughton-Smith, 1965) calculates the distance between two clusters as the maximum distance between any two points in the two clusters. Average-linkage (e.g., Lance & Williams, 1967), on the other hand, considers the average distance between all pairs of points in the two clusters. Ward's method (Ward, 1963) seeks to minimize the total variance within each cluster as the criterion for merging clusters. Median linkage employs the median distance between all pairs of points in the two clusters, while centroid linkage (e.g., Sokal & Michener, 1958) relies on the distance between the centroids of the two clusters. §.§.§ Evaluation Metrics When evaluating and comparing the effectiveness and accuracy of clustering and classification techniques, we use the two commonly used metrics of classification accuracy (CA) and the Adjusted Rand Index (ARI). The ARI is a statistic that quantifies the similarity between the true classification of the data and the classification obtained by a given method (Rand, 1971). The ARI is defined as ARI = ∑_ijn_ij2 - [ ∑_ia_i2∑_jb_j2 ] / n2/1/2 [ ∑_i a_i2 + ∑_jb_j2 ] - [ ∑_ia_i2∑_jb_j2 ] / n2, where n_ij is the diagonal sum of the clustering contingency table, and a_i, b_j correspond to the row sums and column sums of the contingency table, respectively. The contingency table is a visual depiction that summarizes agreeance and disagreeance between the true class labels and the classification class labels. The index considers the number of pairs of data points that are labelled identically in both sets and labelled differently in both sets. The ARI then adjusts for a chance agreement based on the expected agreement between the two sets under a null model. The resulting ARI value ranges from 0 to 1, where 0 indicates complete randomness and 1 indicates perfect agreement in classification. The ARI is calculated using in R using the package mclust (Scrucca et al., 2022) Clustering accuracy is also used to measure the percentage of data points correctly assigned to their corresponding clusters. It is calculated by comparing the true classification labels with those generated by the clustering algorithm, defined as CA(y,ŷ) = ∑_i=1^n1(ŷ_i = y_i)/n, where the indicator function 1(·) = 1 if the class label y for the ith observation matches the predicted class label ŷ_i, and 0 otherwise. The CA ranges from 0 to 1, where 0 indicates that none of the data points are assigned to the correct clusters, and 1 indicates that all data points are assigned to the correct clusters. §.§ Simulated Data The first four continuous datasets were simulated to evaluate the ability of KDSUM to effectively handle data that exhibits a diverse range of distributions, adapted from Morbieu (2018). In all instances, the simulated data comprised two variables and two known classes. Specifically, the first set of data consisted of 1000 observations that were simulated using a highly-correlated bimodal Gaussian distribution with low variance. Each cluster consisted of 500 observations, and the cluster separation was distinct. The second set contained 2050 observations that were simulated using a well-defined large cluster of 2000 observations with low variance, and a small cluster of 50 observations with high variance. Compared to the first simulation, the boundary between the clusters was blurred. The third set of data consisted of 200 observations that were simulated with one dense spherical cluster that was contained inside a sparse spherical cluster, with both clusters having equal observations. Lastly, the fourth set of data consisted of two equally-sized clusters that were spiralled within each other. A visualization of the four simulated continuous datasets is presented in Figure <ref>. Two additional simulated datasets were constructed with categorical and mixed variables, primarily for bandwidth selection analysis. Sim 6 was constructed with 100 observations and 6 variables, where 4 of them were continuous variables drawn from a uniform distribution to serve as noise terms with no interpretable meaning. The two categorical variables were unordered and consisted of two distinct groupings, with values ranging from 0 to 10 for class 1 and 12 to 20 for class 2. In contrast, Sim 5 consisted of 200 observations, where 5 unordered categorical variables were employed. Two of these variables were random binary noise variables with no interpretable meaning. The three categorical variables had three distinct clusters of equal sizes. The three unordered categorical variables were randomly selected integers in the range of 0 - 30, and the first class consisted only of values between 0 - 10 for each of the three variables, while classes two and three consisted of values of 10-20 and 20-30, respectively. From Figure <ref>, it is evident that there are some overlap in class assignment in these intervals since the value of 10 and 20 belongs to more than one class. §.§ Real Data This study utilized a diverse range of data, including continuous, categorical, and mixed-type datasets, to evaluate the KDSUM metric for clustering algorithms. All data sets used in the study are publicly available through the UCI Machine Learning Repository (Dua & Graff, 2017). The study included the Iris dataset, which has 150 observations with four continuous variables and a classification column containing three distinct classes, the Wine dataset with 178 observations of 13 continuous variables and three classes, the Zoo dataset with 101 observations consisting of 12 binary variables and one ordered categorical variable, and the Breast Cancer dataset with nine ordered categorical variables and two classes. The Soybean dataset was also included, which contains 307 observations and a mix of 35 ordered and unordered categorical variables and 18 classes. The Australian Credit and Auto MPG dataset were also utilized, which have a mix of continuous and categorical variables and two classes each. For the Auto MPG dataset, the predicted class was a continuous variable (miles per gallon), and was partitioned into 2 distinct classes with an approximately even dispersion of observations into the two classes, namely, miles per gallon < 22 and ≥ 22. A summary of all the datasets can be found in Table <ref>. § RESULTS §.§ Comparison to Mixed-type Distance Metrics Prior to the comparative study of clustering methods, an experiment was performed to assess the effectiveness of the suggested metric compared to other established mixed-type metrics when employed in agglomerative hierarchical clustering. Furthermore, an analysis of the bandwidths obtained via MLCV was conducted. Four common mixed-type metrics were evaluated and compared to the KDSUM metric. The dissimilarity matrices resulting from each of the tested metrics were subsequently clustered using Average-Linkage agglomerative hierarchical clustering. The data used for this study was Zoo, Auto, Sim 5, and Sim 6. Our objective with the simulated data was to examine the influence of perfect clustering results by analyzing the bandwidths. For Sim 6, the bandwidths of the continuous noise terms were determined to be (λ_1^c,λ_2^c,λ_3^c,λ_4^c) = (0.475, 0.310, 0.448,0.469), whereas the meaningful categorical variables had bandwidths of (λ_5^u,λ_6^u) = (0.108, 0.070). As continuous variable bandwidths range from 0 < λ^c < ∞, and unordered categorical variables have bandwidths in the range 0 ≤λ^u ≤ 1, it is evident that all MLCV bandwidths for Sim 6 were small. This implies that all the variables in the data were given higher weighting, resulting in sharp, narrow kernels for both variable types. The observed bandwidths for Sim 5 were (λ_1^u,λ_2^u,λ_3^u,λ_4^u,λ_5^u) = (0.775, 0.741, 0.017, 0.029, 0.037), with (λ_3^u,λ_4^u, λ_5^u) being the bandwidths associated with variables of importance. By analyzing the bandwidth outcomes, we observe that the noise terms acquired bandwidths (λ_1^u,λ_2^u) that essentially attained their maximum bandwidth values. As a result, they were effectively smoothed out from the data, leading to a reduced influence on the overall kernel distance calculation. Employing cross-validation techniques for kernel smoothing in the distance metric offers two-fold benefits. First, it effectively smooths out variables considered irrelevant from the distance calculation, thereby leaving only significant variables in the overall calculation of the distance function. Second, this approach results in a more accurate representation of distance between observations. Table <ref> presents the Adjusted Rand Index (ARI) scores for four mixed datasets. The results demonstrate that the KDSUM metric outperforms the other tested mixed-type metrics in terms of clustering accuracy and ability to cluster data against known class labels for each dataset. It is notable that the KDSUM method achieves exceptionally high clustering results, while other metrics exhibit poor performance with respect to hierarchical clustering methods. §.§ Comparison to Common Clustering Algorithms To cluster the 6 simulated and 8 real datasets, the KDSUM metric was utilized in conjunction with standard agglomerative hierarchical clustering techniques provided in the R package stats (R Core Team, 2022). The clustering method with the highest clustering accuracy (CA) was employed for the resulting analysis. The performance of the KDSUM metric was compared to Gower's distance using a partitioning around medoids (PAM) model (Kaufman, 1990). For purely continuous or categorical data, Gower's distance employs the Euclidean distance or Simple Matching Coefficient, respectively (Gower, 1971). Results for continuous datasets are provided for k-Means (Hartigan & Wong, 1979), for categorical data, results are provided for k-Modes (Huang, 1998), and for mixed-type data, results are provided for k-Prototypes (Huang, 1998). The clustering accuracy and ARI are summarized in Table <ref>. The results of the experiments demonstrate the significant advantages of the KDSUM metric in terms of clustering accuracy. For each of the six simulated datasets, the KDSUM metric in combination with hierarchical clustering achieved clustering accuracy that was 11% to 40.6% higher than that of the next best clustering accuracy for the respective models, along with significantly higher ARI scores. For the Wine dataset, the clustering accuracy achieved by the KDSUM metric was more than 20% higher than that of Gower's distance with PAM, and k-Means. Similarly, for the Zoo, Breast Cancer, Soybean (lg), Credit, and Auto datasets, the KDSUM metric obtained an increase in clustering accuracy of 12%, 1.1%, 9.9%, 2.3%, 8.4%, respectively, over the second highest performing method. All methods performed well for the Soybean (sm) dataset. It should be noted that the only dataset in which the KDSUM method did not perform as well as the competing methods was Iris, where both competing methods achieved clustering accuracy that was 1.3% higher. It is worth mentioning that if we only consider one column (petal width), all three methods achieved the same clustering accuracy and ARI of 0.960 and 0.886, respectively, which is the highest value obtained from any combination of variables in this dataset. While the obvious cluster of the setosa species class label was correctly identified, the overlapping nature of the remaining two species, setosa and versicolor, led the KDSUM method to incorrectly classify two observations more than the other two methods. Improving the effectiveness of the KDSUM metric for handling overlapping clusters is an active area of consideration. § CONCLUSION In this study, we proposed a novel kernel distance metric for effectively handling mixed-type data. Specifically, we developed a metric based on the kernel function, which is a widely used tool in machine learning and data analysis for transforming data into a higher dimensional feature space. To ensure the viability of our KDSUM metric, we rigorously proved that it satisfies all necessary properties of a distance metric, including non-negativity, symmetry, the triangle inequality, and the identity of indiscernibles. In doing so, we established the theoretical foundation for our KDSUM metric and demonstrated its potential for accurately capturing the distances between mixed-type data points. We conducted extensive experiments on both simulated and empirical data to evaluate the effectiveness of our KDSUM metric compared to existing mixed-type data metrics and state-of-the-art clustering algorithms designed to handle mixed-type data. Using the same agglomerative hierarchical clustering techniques, we assessed the performance of our KDSUM metric in terms of clustering accuracy and the Adjusted Rand Index. The experimental results showed that our KDSUM metric outperformed existing mixed-type data metrics and achieved competitive results compared to state-of-the-art clustering algorithms. Although most existing metrics employ an additive structure for each variable type, which is similar to the KDSUM method, none of the methods analyzed utilize kernels or kernel smoothing techniques to eliminate irrelevant variables for clustering. Instead, they rely on parametric approaches that require either data transformations through importance weighting of categorical variables that can be controlled by the user directly or estimated using optimization techniques. This paper demonstrates the first steps towards a generalized distance for mixed-type data. Some improvements to the methodology are possible. We have calculated distances orthogonally to the clustering algorithm, and we note that calculating optimal bandwidths for density estimation may not necessarily lead to sub-optimal bandwidths for distance and clustering. An investigation of optimal bandwidth selection procedures for various distance-based clustering algorithms is also future work. While agglomerative hierarchical clustering was preferred for this study for ease of demonstration, a new or existing algorithm may further enhance the classification and clustering of mixed data with a kernel distance metric. A detailed analysis of clustering algorithms that require dissimilarity matrices as input and determine the optimal clustering algorithm that pairs with kernel distance metrics is also future work. Moreover, we identified several promising directions for future research, including applying kernel metrics to fuzzy clustering algorithms. By exploring these research directions, we can further explore the applicability and effectiveness of our KDSUM method for clustering mixed-type data. § DATA AVAILABILITY The datasets analyzed during the current study are publicly available in the UCI Learning Repository. § CODE AVAILABILITY All code is available upon request from the contact author. § CONFLICT OF INTEREST The authors declare they have no conflict of interest. *
http://arxiv.org/abs/2306.02895v1
20230605140453
Evading Black-box Classifiers Without Breaking Eggs
[ "Edoardo Debenedetti", "Nicholas Carlini", "Florian Tramèr" ]
cs.CR
[ "cs.CR", "cs.LG", "stat.ML" ]
Prebiosignature Molecules Can Be Detected in Temperate Exoplanet Atmospheres with JWST O. Shorttle July 31, 2023 ====================================================================================== Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples. Prior work measures the cost of such attacks by the total number of queries made to the classifier. We argue this metric is flawed. Most security-critical machine learning systems aim to weed out “bad” data (e.g., malware, harmful content, etc). Queries to such systems carry a fundamentally asymmetric cost: queries detected as “bad” come at a higher cost because they trigger additional security filters, e.g., usage throttling or account suspension. Yet, we find that existing decision-based attacks issue a large number of “bad” queries, which likely renders them ineffective against security-critical systems. We then design new attacks that reduce the number of bad queries by 1.5–7.3×, but often at a significant increase in total (non-bad) queries. We thus pose it as an open problem to build black-box attacks that are more effective under realistic cost metrics[Code to reproduce our experiments: <https://github.com/ethz-privsec/realistic-adv-examples/>]. § INTRODUCTION Adversarial examples <cit.> are a security risk for machine learning (ML) models that interact with malicious actors. For example, an attacker could use adversarial examples to post undesired content to the Web while bypassing ML filtering mechanisms <cit.>. In such security-critical uses of ML, the attacker often only has black-box access to the ML model's decisions. Decision-based attacks <cit.> generate adversarial examples in black-box settings by repeatedly querying the model and observing only the output decision on perturbed inputs. The original of <cit.> required over 100,000 model queries to reliably find small adversarial perturbations. Subsequent work <cit.> has optimized for this metric of “total number of model queries”, and reduced it by 1–3 orders of magnitude. We argue this metric fails to reflect the true cost of querying a security-critical ML system. Such systems typically aim to detect “bad” data, such as malware, harmful content or malicious traffic. Queries with benign data (e.g., a selfie uploaded to social media) carry little cost; in contrast, bad data flagged by the system (e.g., offensive content) triggers additional security measures that carry a high cost for the attacker—up to account termination. Thus, we argue that black-box attacks should strive to be stealthy, by minimizing the number of “bad” queries that are flagged by the ML system. We find that existing attacks are not stealthy: over 50% of the queries they make are bad. We then show how to drastically reduce the number of bad queries for a class of attacks that measure distances to the model's boundary along random directions (e.g.,  <cit.>,  <cit.> and  <cit.>). Inspired by the famous “egg-dropping problem” <cit.> , we design variants of these attacks that trade-off bad queries for benign ones. We evaluate our attacks on three classification tasks: ImageNet, dog vs. not-dog in ImageNet, and NSFW content <cit.>. Our stealthy attacks reduce the number of bad queries of the original attacks by 1.5–7.3×. Notably, on ImageNet, our stealthy variant of the attack outperforms and in terms of bad queries, despite the two latter attacks issuing fewer queries in total. Yet, our most stealthy ℓ_2 attacks incur a large increase in benign queries (350–1,400×). The tradeoff is better for ℓ_∞ attacks: our stealthy variant of the attack reduces bad queries by 2.1–2.5× over and 6–17× over , while making 2.1–3.4× more benign queries than . We use the stealthy attack to evade a commercial black-box NSFW image detector, with 2.2× fewer bad queries than the original attack. Overall, our results suggest that many decision-based attacks are far from stealthy, and that stealthier attacks are often only viable if the cost of bad queries far outweighs that of good queries (especially for ℓ_2 attacks). We thus recommend that future decision-based attacks account for asymmetric query costs, to better reflect the true cost of deploying such attacks against real security-critical systems. § DECISION-BASED ATTACKS Given a classifier f: [0,1]^d →𝒴 and input (x,y), an (untargeted) adversarial example x̂ is an input close to x that is misclassified, i.e., f(x̂) ≠ y and x̂- x_p ≤ϵ for some ℓ_p norm and threshold ϵ. A decision-based attack gets oracle access to the model f. The attacker can query the model on arbitrary inputs x ∈ [0,1]^d to obtain the class label y f(x). Existing decision-based attacks aim to minimize the total number of queries made to the model f before the attack succeeds. Applications.   Decision-based attacks <cit.> were designed for black-box ML systems that only return model decisions (e.g., an ML model that filters social media content). Such attacks are also applicable when an attacker has physical access to a model guarded by hardware protections, e.g, a phone's authentication mechanism, or a self-driving system. Decision-based attacks are also commonly used to evaluate the robustness of white-box models, when computing gradients is hard <cit.>. In this paper we are interested in the first two scenarios, where decision-based attacks are used against black-box ML security systems. In particular, we assume that these security systems monitor and log user queries, and can throttle or disable an attacker's access to the system. § ASYMMETRIC QUERY COSTS Existing decision-based attacks optimize for the total number of model queries. This is reasonable if the attacker's primary cost is incurred by queries to the model, and this cost is uniform across queries (e.g., if the attacker has to pay a fixed service fee for each query). But we argue that query costs are rarely uniform in practical security-critical systems. This is because in such systems, the goal of a ML model is usually to detect “bad” data (e.g., malware, harmful content, malicious traffic, etc). The costs incurred by querying such a model are highly asymmetric. Querying the model with “good” data is expected, comes with no additional overhead, and is thus cheap. Whereas querying the model with “bad” data is unexpected, triggers additional security measures and filters, and thus places a much higher cost on the attacker. As an example, consider an attacker who tries to upload inappropriate content to a social media website. Every uploaded image passes through a ML model that flags inappropriate content. Benign content is very rarely flagged and thus carries little cost. But if a query is flagged as inappropriate, the system blocks the contents and may take further costly actions (e.g., account throttling or suspension). We now formalize this asymmetry. Assume one or more of the classifier's output classes 𝒴 are “bad”, denoted as 𝒴_bad. The attacker is given an input (x,y) where y ∈𝒴_bad (e.g., x is a NSFW image) and wants to find an adversarial example x̂ where f(x̂) ∉𝒴_bad. All queries to the model f carry a base cost c_0, due to data processing, network bandwidth, or disk storage, or throttling if the attacker makes too many queries. This base cost is typically very low: e.g., Facebook users can upload 1,000 images at once in an album <cit.>. However, for queries x that are flagged as inappropriate (i.e., f(x) ∈𝒴_bad), the cost c_bad incurred by the attacker is much larger. Their account could be suspended or banned, their IP blacklisted, etc. While these restrictions can be circumvented (e.g., by buying multiple accounts <cit.>), this places a significantly higher cost on queries flagged as bad, i.e., c_bad≫ c_0. We thus argue that decision-based attacks should strive to minimize the following cost: minimize Q_total· c_0 + Q_bad· c_bad , where Q_bad is the number of bad model queries (f(x) ∈𝒴_bad), Q_total is the total number of queries—including bad ones—and c_bad≫ c_0. We call attacks that minimize this asymmetric cost stealthy. Existing attacks are not stealthy. No existing black-box attack considers such asymmetric query costs. As a result, these attacks issue a large number of bad queries. We illustrate this with an untargeted attack on ImageNet.[ImageNet is not a security-critical task, and thus most content is not “bad”. We use ImageNet here because prior attacks were designed to work well on it. To mimic a security-critical evasion attempt, we set the class to be evaded as “bad” and all other classes as “good”. That is, for an input (x, y) we set 𝒴_bad = {y} and the attacker's goal is to find an adversarial example x̂ such that f(x̂) ≠ y, while avoiding making queries labeled as y.] In <Ref>, we show the number of total queries Q_total and bad queries Q_bad made by various ℓ_2 and ℓ_∞ decision-based attacks on a ResNet-50 classifier. In all cases, half or more of the attacker's queries are “bad” (i.e., they get the class label that was to be evaded). Despite differences in the fraction of bad queries for each attack, attacks that make fewer total queries also make fewer bad queries. But this begs the question of whether we could design attacks that issue far fewer bad queries in total. The remainder of this paper answers this question. Selecting the values of c_0 and c_bad. The true cost of a query (whether good or bad) may be hard to estimate, and can vary between applications. As a result, we recommend that black-box attack evaluations report both the value of Q_total and Q_bad, so that the attack cost can be calculated for any domain-specific values of c_0 and c_bad. In this paper, we often make the simplifying assumption that c_0=0, c_bad=1, a special case that approximates the attack cost when c_bad≫ c_0. In this special case, the attacker solely aims to minimize bad queries, possibly at the expense of a large increase in total queries. We will however also consider less extreme trade-offs between these two values. § DESIGNING STEALTHY DECISION-BASED ATTACKS To begin, we explore the design space of stealthy decision-based attacks, which minimize the total number of bad queries made to the model. One possibility is simply to design a better decision-based attack, that makes fewer total queries. As we see from <Ref>, this is how prior work has implicitly minimized asymmetric attack costs so far. We take a different approach, and design attacks that explicitly trade-off bad queries for good ones. §.§ How do Decision-based Attacks Work? Most decision-based attacks follow the same blueprint <cit.>. For an input (x, y), the attacks first pick an adversarial direction θ∈ [0, 1]^d and find the ℓ_p distance to the model's decision boundary from x along the direction θ. They then iteratively perturb θ to minimize the boundary distance along the new direction. In each iteration, the attacks compute an update direction δ and step-size α and make an update step θ' θ + α·δ, and then compute the new boundary distance from x along θ'. Decision-based attacks use two fundamental subroutines: * (x, θ, p) →^+: this routine computes the distance (in ℓ_p norm) from x to the decision boundary along the direction θ. Most attacks do this by performing a binary search between x and a misclassified point x̂ in the direction θ, up to some numerical tolerance η. * (x, θ', , y) →{-1, 1}: this routine uses a single query to check if the point at distance in direction θ' is misclassified, i.e., it returns 1 if f(x + ·θ'θ'_p) ≠ y. Different attacks combine these two subroutines in different ways, as described below. As we will see, how an attack balances these two routines largely impacts how stealthy the attack can be made. An overview of existing attacks. We briefly review how different attacks make use of and routines. A more detailed explanation is in <Ref>.  <cit.> is an ℓ_2 attack that only calls the routine. It starts with a direction θ from the input x to a misclassified point x̂. In each iteration, it samples an update direction δ and checks whether it reduces the boundary distance (with one call to ). The attack estimates the new distance by calling (x, θ+r_i, , y) for n random directions r_i. is a greedy ℓ_∞ attack. It starts with a signed adversarial direction θ=[1, …, 1]. In each iteration, it flips the sign of some entries in θ, and calls to check if this still yields an adversarial point at distance . If so, it calls to compute the improved boundary distance. and find the current adversarial distance by calling . Then, they call (x, θ+r_i, , y) for n random directions r_i to approximate a gradient δ (n is an attack hyper-parameter). Both attacks compute the step-size α with a geometric search, which can be implemented by calling once (for ), or m≥ 1 times (for ). is the same as , with a more precise gradient estimation. It estimates a gradient by computing distances from x to the decision boundary along directions θ+r_i for n random vectors r_i. Compared to and , thus issues n calls to instead of to obtain a more precise estimate of the gradient at the expense of more model queries. <Ref> in the appendix summarizes the calls made to and by each attack. In <Ref>, we show how many bad queries and total queries are used for both routines in an untargeted attack for a standard ResNet-50 on ImageNet (where we view the class to be evaded as “bad”). §.§ Maximizing Information per Bad Query To design stealthy decision-based attacks, we first introduce the entropy-per-bad-query metric. This is the information (measured in bits) that the attacker learns for every bad query made to the model. Consider an attack that calls (x, θ+r_i, , y) for many random r_i. , and do this to estimate the shape of the decision boundary. For a locally linear boundary, we expect 50% of such queries to be bad. The attacker thus learns two bits of information per bad query. To increase the entropy-per-bad-query, we would need to sample the r_i so that fewer queries are bad. But this requires a prior on the boundary's geometry, which is what these queries aim to learn. It thus seems hard to make this procedure stealthier. For calls to , a standard binary search requires log1η queries (half of which are bad) to estimate the boundary distance up to tolerance η. A call to thus gives log1η bits of information. So the attacker also learns an average of two bits per bad query. However, here there is a simple way to trade-off bad queries for good ones, which lets the attacker learn the same log1η bits of information with as little as one bad query. All that is required is a tall building, and some eggs! Measuring distances with one bad query. In the famous “egg-dropping problem”, there is a building of N floors, and you need to find the highest floor n ∈ [1, N] from which an egg can be dropped without breaking. The egg breaks if and only if dropped from above some unknown floor n. In the simplest version of the problem, you have a single egg and must compute the value of n. The solution is to drop the egg from each floor consecutively starting from the first, until it breaks. We note that finding the decision boundary between x and x̂, while minimizing bad queries, is exactly the egg-dropping problem! Assuming x-x̂_p = 1, a search tolerance of η yields a “building” of N=1η “floors” of length η from x̂ to x. The first n floors (up to the boundary) are good queries, i.e., no broken egg. All floors above n are bad queries on the wrong boundary side, i.e., a broken egg. While a binary search minimizes the total number of queries for finding the boundary, a line-search—which moves from x̂ to x until the boundary is hit—is optimal for minimizing bad queries. Many attacks use a small search tolerance η (on the order of 10^-3), so a full line-search incurs a large cost of good queries (1η). We thus consider finer-grained methods to trade-off bad and good queries. r0.4 [width=0.38]figures/fig_linesearch Line-search strategies to find the boundary (in gray) between a benign input (green) and the original bad input (brown). Red crosses are bad queries. Trading good and bad queries. In the general version of the egg-dropping problem, you are given k≥ 1 eggs to find the safe height n with a minimal number of egg drops. Asymptotically, you need Θ(N^1k) egg drops given k eggs, as we now show for k=2 eggs: first, divide the N floors into √(N) groups of √(N) floors and do a coarse-grained line-search by dropping from floors 1, 1+√(N), 1+2√(N), … until the first egg breaks. You now know the solution is in the previous group of √(N) floors, so you do a fine-grained line-search in this group one floor at a time. This requires at most 2√(N) egg drops. For our boundary finding problem, we can thus divide the interval between x and x̂ into 1η intervals, and do two line searches with step-sizes respectively √(η) and η. This will incur two bad queries, and 2√(1η) total queries, compared to one bad query and 1η total queries as above. A further optimization: early stopping. Greedy attacks such as repeatedly check whether a new search direction θ' θ + δ improves upon the current adversarial distance , and only if so issue a call to to compute the new distance ' <. For these attacks to progress, it may not be necessary to compute ' exactly. Instead, knowing that ' ≪ may be sufficient to know that the new direction θ' is “good” and the attack can proceed with it. We could thus stop a line-search early when ' ≤γ·—for some γ < 1. In many cases, this lets us call while incurring no bad query at all, at the expense of a less accurate distance computation. §.§ Stealthy Variants of Decision-based Attacks We now design stealthy variants of prior decision-based attacks, by applying the toolkit of stealthy search procedures outlined above, and illustrated in <Ref>. Stealthy distance computations. The most obvious way to make existing attacks more stealthy is to instantiate every call to with a (k-stage) line-search instead of a binary search. In contrast, calls to on arbitrary directions θ' are hard to make more stealthy. This change applies to the boundary distance computation in , to the gradient-estimation queries in , and to the step-size searches and boundary projections in , and . Since only calls , it cannot easily be made more stealthy. Stealthy gradients. Attacks like , and use most of their queries for estimating gradients. The main difference is that instead of calling , uses more expensive calls to to get a better estimation. Prior work shows that this tradeoff is suboptimal in terms of total queries. However, the extra precision comes for free when we consider the cost in bad queries! Recall that yields two bits of information per bad query, while with a line-search yields log1η bits. Thus, 's gradient estimator is strictly better if we consider bad queries. In <Ref> we formally prove that (under mild conditions) 's gradient estimator gives quadratically better convergence rates (in terms of bad queries) than the gradient estimators of and . We can leverage this insight to design stealthy “hybrid” attacks that combine 's stealthy gradient estimator with efficient components of other, newer attacks. Stealthy hyper-parameters. Prior attacks were designed with the goal of minimizing the total number of queries. As a result, their hyper-parameters were also tuned for this metric. When considering our asymmetric query cost, existing hyper-parameters might thus no longer be optimal. Our attacks. We combine the above principles to design stealthy variants of existing attacks. * : As in the original attack, in each iteration we first greedily check if a new search direction improves the boundary distance and then replace the binary search for the new distance by a (k-stage) line-search, optionally with early-stopping (see <Ref>). * : The attack is perfectly amenable to stealth as it only calls . We replace the original binary search by a (k-stage) line-search in each of these distance computations. When computing distances in random directions for estimating gradients, we need to select a safe starting point for the line search. If the current boundary distance is , we start the search at the point at distance (1+γ)· along θ', for γ > 0. If this point is not misclassified (i.e., the query is bad), we return (1+2·γ)· as an approximate distance. If the point is misclassified (i.e., safe), we perform a line-search with tolerance 1η. We use γ=1% in all experiments. * : In each iteration, we use line searches to compute the current boundary distance, and the update step-size. We replace the original coarse-grained gradient estimator (which calls n times) with 's estimator (with n20 calls). * : We make the same changes as for , except that we retain the original coarse-grained gradient estimator (otherwise this would be the same as ). To better balance the number of bad queries used in different attack phases, we reduce the number n of queries used to estimate gradients. This change is sub-optimal if we care about the attack's total number of queries, but is beneficial in terms of bad queries as the attack now spends a larger fraction of work on queries that can be made more stealthy. § EVALUATION We evaluate our stealthy decision-based attacks on a variety of benchmarks, in order to show that our attacks can drastically reduce the number of bad model queries compared to the original attacks. §.§ Setup Datasets and models.   We consider four benchmarks: * We begin with standard untargeted attacks on ImageNet against a ResNet-50 classifier. We mark a query as bad if it is classified into the class of the original input. * To capture more realistic security-critical scenarios, we consider a variety of binary classification tasks that aim to separate “good” from “bad” data. As a toy benchmark, we use a binary labeling of ImageNet (hereafter ImageNet-Dogs), with all dog breeds grouped as the “bad” class. The classifier is also a ResNet-50, with a binary head finetuned over the ImageNet training set. * We then consider a NSFW classification task with a CLIP classifier that was used to sanitize the LAION dataset <cit.>. To avoid collecting a new NSFW dataset, we use a subset of ImageNet (hereafter “ImageNet-NSFW”) that this classifier labels as NSFW with high confidence.[We do not collect a new NSFW dataset due to the ethical hazards that arise from curating such sensitive data. By using a subset of ImageNet—the most popular image dataset in machine learning research—we mitigate, but do not completely eliminate <cit.>, the potential harms of constructing a NSFW dataset.] * Finally, we evaluate a black-box commercial NSFW detector, using our ImageNet-NSFW dataset. The detector returns a score from 1 to 5, denoting that the input is “highly unlikely” to “highly likely” to contain adult content or nudity. We consider a query to be bad if it gets a score of 4 or 5. Attacks.   We evaluate , , and for ℓ_2 attacks, and and for ℓ_∞ attacks. We adapt each attack's official code to enable counting of bad queries. We use each attack's default hyper-parameters, except for some optimizations by <cit.> (see Appendix <ref>). We further evaluate our stealthy versions of , , and . For , , and we split search intervals into 10,000 sub-intervals, and perform either a full line-search or a two-stage line-search with 100 coarse-grained and fine-grained steps. For efficiency sake, we perform two-stage line-searches in all our experiments and use the results to infer the number of queries incurred by a full line-search. For , we further trade-off the query budgets for computing gradients and step-sizes by reducing the attack's default number of gradient queries n by a factor k∈{1.5, 2.0, 2.5}. For , we replace each binary search with a line-search of step-size η=10^-3 (the default binary-search tolerance for ) and implement early-stopping with γ=0.9. Metrics. As in prior work, we report the median ℓ_p norm of adversarial examples after N attack queries (except we only count bad queries). For each task, we run the attacks on 500 samples from the corresponding test set (for ImageNet-Dogs, we only attack images of dogs). For the attacks on the commercial NSFW detector, use use 200 samples from ImageNet-NSFW. Our motivation for counting bad queries is to assess whether black-box attacks are viable for attacking real security systems. We thus focus on a “low” query regime: each attack can make at most 1,000 bad queries per sample. Prior work has considered much larger query budgets, which we disregard here as such budgets are likely not viable against systems that implement any query monitoring. §.§ Results The main results of our evaluation appear in <Ref>. We also provide a full ablation over different attack variants and optimizations in <Ref>. For all benchmarks, our stealthy attacks (with 1-stage line searches) issue significantly fewer bad queries than the corresponding original attack. ℓ_2 attacks. Remarkably, while is one of the earliest and least efficient decision-based attacks, our variant is stealthier than the newer and attacks. To reach a median ℓ_2 perturbation of 10 on ImageNet, needs 686 bad queries, a saving of 7.3× over the original , and of 1.4× compared to . Our hybrid attack is the stealthiest attack overall. On all three benchmarks, it requires 1.47–1.82× fewer bad queries than to reach a median perturbation of 10. This shows that we can even improve the stealthiness of attacks that do not make use of many distance queries. Our techniques are thus likely also applicable to other decision-based attacks that follow 's blueprint. <Ref> in the appendix shows the total number of queries made by our stealthy attacks. As expected, our stealthy attacks issue many more queries in total than attacks that optimize for this quantity. To reach a median perturbation of ϵ=10, our attacks make 350–1420× more total queries than the original non-stealthy attack. This large increase is only warranted if benign queries are significantly cheaper than bad queries. This may be the case in some applications, e.g., uploading 1,000 benign images is permitted on platforms like Facebook <cit.>, and thus likely less suspicious than a single bad query. However, for less extreme asymmetries in query costs (e.g., c_bad = 10 · c_0), a less strict tradeoff between bad and good queries is warranted. We will explore this in <Ref>. In <Ref>, we further show the total of our attacks for various configurations of the query costs c_0 and c_bad. A different attack variant is optimal depending on the cost overhead of bad queries. ℓ_∞ attacks. The cost-effectiveness of stealthy ℓ_∞ attacks is better. Our attack reduces bad queries compared to the original , which is itself more efficient than . To reach a median norm of ϵ=8255, needs 103–181 bad queries for the three benchmarks, 2.1–2.4× less than , and 7–17× less than . As issues only 2.1–3.4× more queries than (see <Ref>), it is clearly cost effective if c_bad≫ c_0. §.§ Trading off Good and Bad Queries Our stealthy attacks in <Ref> use full line-searches, which use a single bad query (and many good queries). In <Ref> and <Ref> we consider alternative tradeoffs. We provide a full ablation over different attack variants and optimizations in <Ref>. For ℓ_∞ attacks, with a two-stage line-search and early stopping provides a nice tradeoff: for a median perturbation of ϵ=8255, the attack makes 1.37× more bad queries than a full line-search, but 3.7× fewer total queries. This attack is actually strictly better than the original (thanks to early stopping): our attack makes 1.77× fewer bad queries, and 8% fewer good queries! For ℓ_2 attacks, with a two-stage line-search shows a nice tradeoff over the original : for a median perturbation of ϵ=10, our attack makes 4× fewer bad queries, at the expense of 5× more good queries (see <Ref>). Unfortunately, none of our stealthy attacks with two-stage line searches beat the original in terms of bad queries. Thus, attaining state-of-the-art stealthiness with our techniques does appear to come at the expense of a large overhead in good queries. As a result, improving the total cost of existing ℓ_2 decision-based attacks may be hard, and thus attacking real security-critical systems with these attacks may simply not be cost-effective. §.§ Attacking a Black-box NSFW Detector We now turn to a much more realistic attack scenario where we target a commercial black-box detector of NSFW images. The few attacks that have been evaluated against commercial systems (e.g., the , or Qeba <cit.>) used a limited number of attack samples (3 to 5) due to the high query cost—and thus monetary cost—of evaluating these attacks against a commercial API. To enable a more rigorous evaluation, we focus here on —the only attack we evaluated that reliably finds small adversarial perturbations on a limited query budget (500 queries). Since real black-box systems expect 8-bit RGB images as input, we set 's threshold η for a binary search or line-search to 1255, the smallest distance between two distinct RGB images. This is much coarser than the default threshold of η=10^-3, and the attack thus finds larger perturbations. Other decision-based attacks face similar quantization issues when applied to real black-box systems. We evaluate and on 200 images from ImageNet-NSFW. <Ref> shows the results. Evading this commercial detector is much harder than the prior models we attacked, presumably due to the discretization constraint described above. Our attack outperforms by 2.2× (we reach a median distance of 32255 with 79 bad queries, while needs 172 bad queries). These perturbations are noticeable, but preserve the images' NSFW nature. § RELATED WORK Threat models for ML evasion attacks.   Modeling realistic ML evasion attacks is challenging <cit.>. Our work contributes to this goal by introducing the more realistic asymmetric query cost metric, and evaluating the feasibility of stealthy decision-based attacks. Prior work has attacked real security-critical ML systems such as malware detectors <cit.>, copyright systems <cit.>, or online content blockers <cit.>. These works either assume white-box model access, or use black-box transfer attacks. The latter are perfectly stealthy (they make no bad queries) but have limited success rates. Detecting decision-based attacks.   <cit.> and <cit.> detect decision-based attacks by monitoring sequences of user queries. We aim to evade a more fundamental form of monitoring that any security-critical system likely uses: flagging and banning users who issue many “bad” queries. Stealthy score-based attacks.   Score-based attacks, which query a model's confidence scores <cit.>, also issue many bad queries. Designing stealthy score-based attacks is similar to the problem of “safe black-box optimization” in reinforcement learning <cit.>. § CONCLUSION Our paper initiates the study of stealthy decision-based attacks, which minimize costly bad queries that are flagged by a ML system. Our “first-order” exploration of the design space for stealthy attacks shows how to equip existing attacks with stealthy search procedures, at a cost of a larger number of benign queries. Decision-based attacks may be made even stealthier by designing them from scratch with stealth as a primary criterion. We leave this is an open problem we hope future work can address. We hope our paper will pave the way towards more refined analyses of the cost of evasion attacks against real ML systems. In particular, our paper suggests a new possible defense metric for defenses designed to resist black-box attacks: the number of bad queries before an attack is effective. plainnat § DETAILS ON EXPERIMENTAL SETUP §.§ Datasets and Models ImageNet. We run the attacks against a ResNet-50 <cit.> classifier trained on ImageNet <cit.>. We use the model weights provided as part of the torchvision library <cit.>, which reach 76.13% validation accuracy. When running the attacks, we use ImageNet's validation set and we skip the samples that are already classified incorrectly by the model. ImageNet-Dogs. We create a binary classification task from ImageNet by considering as “bad” the images belonging to classes of dog breeds (i.e., the classes with indices included in the range [151, 268]) and as “good” the images belonging to all the other classes. We create training and validation sets in this way from the respective splits of ImageNet. Then, we take the ResNet-50 provided by torchvision, change the last linear layer to a layer with one output, and fine-tune this model for one epoch on the training set, using Adam <cit.> with learning rate 10^-3. Training the model takes around 1 hour using an Nvidia RTX A6000. The final model has 96.96% accuracy, 87.14% precision, and 87.10% recall on the validation set. Since we are interested in creating adversarial examples for the “bad” images, we only attack the images in the validation set that are correctly classified as “bad” (i.e., as dogs) by the fine-tuned model. ImageNet-NSFW. As mentioned in <ref>, we also evaluate the attacks on the NSFW content detector shared by <cit.>. This classifier takes as input CLIP <cit.> embeddings of images and outputs a confidence in [0, 1]. We use the CLIP implementation provided by the HuggingFace Transformers library <cit.> to extract the CLIP embeddings from the input images. To create an evaluation set of NSFW images, we select the subset of 1,000 images in the ImageNet validation set that the NSFW content detector classifies as NSFW with highest confidence (it is well known that ImageNet contains NSFW content <cit.>). When attacking the model, we consider an attack to be successful if the confidence of the detector drops below 0.5. §.§ Attack Hyper-parameters . We use the official implementation[<https://github.com/bethgelab/foolbox/blob/1c55ee/foolbox/attacks/boundary_attack.py>], which is part of from Foolbox <cit.>, with default hyper-parameters on all tasks. . We use the official implementation.[<https://github.com/Jianbo-Lab/HSJA/blob/daecd5/hsja.py>] Following <cit.>, we set = 10,000 (this hyper-parameter is used to determine the binary search threshold), as this gives better results. and . We use the official implementation.[<https://github.com/uclaml/RayS/blob/29bc17/RayS.py>] The attack has no hyper-parameters. The default binary search tolerance is η=10^-3. For the line-search in we use the same step-size of 10^-3 and perform either a full line-search, or a two-stage search by first dividing the N search intervals into coarse groups of size √(N). For attacking the commercial black-box NSFW classifier in <Ref>, we set the binary search tolerance and line-search step-size to η=1255 and perform a full line-search. For the early-stopping optimization, we end a line search if ' < 0.9 ·. In <Ref>, <Ref> and <Ref>, the attack is the version with a full line-search and early-stopping. and . We use the official implementation.[<https://github.com/cmhcbb/attackbox/blob/65a82f/attack/OPT_attack.py>] Following <cit.>, we set β = 10^-2 (this hyper-parameter is used to determine the binary search threshold). For , we do line-searches for gradient estimation in the interval [0.99·, 1.01·], where is the current adversarial distance. For computing step-sizes, we do a line-search in the interval [0.99·, ], since we only care about the new distance if it improves upon the current one. We split this interval into N=10,000 sub-intervals and perform a 2-stage line-search with 100 coarse-grained steps and 100 fine-grained steps. For efficiency sake, we batch the line-search by calling the model on two batches of size 100, one for all coarse-grained steps, and one for all fine-grained steps. To count the number of bad queries and total queries, we assume that the line-search queries were performed one-by-one. If the first query in a line-search is not safe (i.e., the boundary distance is larger than 1.01·, we approximate the distance by ' ≈ 2·. In <Ref> and <Ref>, the attack is the version with a full line-search. and . We use the official implementation.[<https://github.com/cmhcbb/attackbox/blob/65a82f/attack/Sign_OPT.py>] Following <cit.>, we set β = 10^-2 (this hyper-parameter is used to determine the binary search threshold). For , we do the same line-search procedure as for computing step-sizes. We change the default number of gradient estimation queries per iteration from n=200 to n/k for k ∈{1.5, 2, 2.5, 3}, i.e., n ∈{67, 80, 100, 133}. In <Ref> and <Ref>, the attack uses a full line-search, and k=2.5. §.§ Compute and code We run every attack on one Nvidia RTX 3090, and the time to run the attacks on 500 samples ranges from twelve hours, for the attacks ran with binary search, to more than three days for the slowest attacks (e.g. OPT) ran with line search. We wrap all the attack implementations in a common set-up for which we use PyTorch <cit.>. The code can be found at the following URL: <https://github.com/ethz-privsec/realistic-adv-examples/>. The checkpoints of the model we trained, the NSFW classifier we ported from Keras to PyTorch, and the outputs of this model on the ImageNet train and validation datasets can be found at the following URL: <https://github.com/ethz-privsec/realistic-adv-examples/releases/tag/v0.1>. § DETAILS ON DECISION-BASED ATTACKS In this section, we provide some additional detail on how existing decision-based attacks work, and how they spend their bad queries. As explained in <Ref>, existing decision-based attacks optimize over some adversarial direction θ∈ [0, 1]^d by repeatedly: (1) computing the boundary distance from x along θ; (2) computing an update direction δ; and (3) picking a step-size α, in order to perform an update step θθ + α·δ. We can thus split each attack iteration into three phases: * : given the original input (x, y) and a search direction θ, this phase finds a point x_b that lies on the model's decision boundary along the line x + α·θθ., and returns the ℓ_p distance between x and x_b, i.e., x - _p. * : This phase searches for an update direction δ to be applied to the search direction θ. * : This phase selects a step-size α for an update to the search direction θ. We now describe how different attacks instantiate these generic phases and how they use the and routines in each phase. . The original decision-based attack of <cit.> is a greedy attack. In contrast to other attacks, it only performs a heuristic, approximate projection to the model's boundary in each step. : Given a misclassified point x_b along the direction θ (originally a natural sample from a different class than x), the attack samples random points around x_b and checks on which side of the boundary they fall. From this, the attack estimates a step-size to project x_b onto the boundary, and then computes the distance between x_b and x. This requires n calls to . : The attack is greedy and simply picks a small update direction δ at random. : The attack checks whether the distance to the boundary along the new direction θ + δ is smaller than the current distance, . If not, the update is discarded. Note that this test can be performed with a single query to the model, with a call to . . This is a greedy attack similar to , tailored to the ℓ_∞ norm. Its search direction θ∈{-1, +1}^d is always a signed vector. : find the current distance to the decision boundary using a binary search, by calling . : The attack picks a new search direction by flipping the signs of a all pixels in a rectangular region of θ. : The attack greedily checks whether the new direction improves the current distance to the boundary, by issuing a call to . If the distance is not reduced, the update is discarded. . This attack first proposed a gradient-estimation approach to decision-based attacks. : The attack starts by measuring the distance to the boundary, with a call to . Specifically, it performs a binary search between x and some point of a different class along the direction θ. : The attack estimates the gradient of the distance to the boundary along the search direction θ. To this end, it samples random directions r_1, …, r_n and computes the distance to the boundary along θ + r_i, denoted as d_i ∈^+, for each. The estimated gradient is then: δ1/n∑_i=1^n ( - d_i)· r_i . The attack uses n calls to to compute the boundary distance along each random direction. : computes the step-size α with a geometric search: starting from a small step size, double it as long as this decreases the distance to the decision boundary along the new direction θ + α·δ. Thus, each step of the geometric search involves a call to and . These attacks are very similar, and improve over by using a more query-efficient gradient-estimation procedure. : In , this step is viewed as a boundary “projection” step which returns the point x_b on the boundary, while computes the distance from x to the boundary along θ. But the two views, and their implementations, are equivalent. Both attacks use a binary-search to find a point x_b on the boundary, as in , with a call to . : Both attacks also sample n random search directions r_1, …, r_n. But instead of computing the distance to the boundary along each updated direction as in , and simply check whether each update decreases the current distance to the decision boundary or not. The update direction is computed as δ1/n∑_i=1^n z_i · r_i , where z_i ∈{-1, +1} is one if and only if the point at distance along the direction θ+r_i is misclassified. differs slightly in that the random directions r_i are applied to the current point on the boundary , and we check whether +r_i is misclassified or not. Compared to , these attacks thus only issue n calls to (instead of n calls to ), but the gradient estimate they compute has higher variance. : uses the exact same geometric step-size search as . is slightly different from the generic algorithm described above, in that it applies the update δ to the current point on the boundary . The attack starts from a large step size and halves it until + α·δ is misclassified. This amounts to finding the distance to the boundary from along the direction δ, albeit with a geometric backtracking search instead of a binary search. Summary of attacks. <Ref> summarizes how different attacks implement the three generic attack phases , and in each attack iteration. We distinguish here between the two routines called by the attacks, and defined in <Ref>. § CONVERGENCE RATES OF STEALTHY ATTACKS Prior work has analyzed the convergence rate of SGD with the zero-order gradient estimation schemes used in and <cit.>. We can use these results to prove that the gradient estimation of our attack is asymptotically more efficient (in terms of bad queries) than the non-stealthy gradient estimation used by and . Let g(θ) be the distance to the boundary along the direction θ, starting from some example x (this is the function that and explicitly minimize). Suppose we optimize g with black-box gradient descent, using the following two gradient estimators: * : 1/Q∑_i=1^Q(g(θ + r_i) - g(θ)) · r_i for Q random Gaussian directions r_i. * : 1/Q'∑_i=1^Q'(g(θ + r_i) - g(θ)) · r_i for Q' random Gaussian directions r_i. We can then show the following results: Assume g has gradients that are L-Lipshitz and bounded by C (assume L and C are constants for simplicity). Let d be the data dimensionality. Optimizing g with T iterations of gradient descent, using 's gradient estimator, yields a convergence rate of [∇ g(x)_2^2] = 𝒪(d/T), with 𝒪(T^2/d) bad queries. Assume g is L-Lipschitz and has gradients bounded by C (assume L and C are constants for simplicity). Let d be the data dimensionality. Optimizing g with T iterations of gradient descent, using 's gradient estimator, yields a convergence rate of [∇ g(x)_2] = 𝒪(√(d / T)), with 𝒪(T^2d) bad queries. The convergence rate of is thus at least as good as that of , [Note that <cit.> provide a bound on the gradient norm, while <cit.> provide a bound on the squared gradient norm. Applying Jensen's inequality to the result of <Ref>, we know that for we have [∇ g(x)_2^2] ≥ ([∇ g(x)_2])^2 = 𝒪(d / T).] but 's gradient estimator with line searches requires a factor d^2 fewer bad queries. The same asymptotic result as for holds for the similar estimator used by . <cit.> show that 's gradient estimator yields a convergence rate of [∇ g(x)_2^2] = 𝒪(d/T) + 𝒪(1/Q) (see Theorem 2 in <cit.>). To balance the two convergence terms, we set Q = T/d. To perform Q evaluations of g(θ + r_i) - g(θ), we need Q+1 calls to . Each call makes multiple queries to the model f, but only one bad query if we use a line search. This yields the number of bad queries in the theorem (T iterations with T/d bad queries per iteration). <cit.> show that 's gradient estimator yields a convergence rate of [∇ g(x)_2] = 𝒪(√(d/T)) + 𝒪(d/√(Q')) (see Theorem 3.1 in <cit.>). To balance the two convergence terms, we set Q' = Td. To perform Q' evaluations of (g(θ + r_i) - g(θ)), one call to and Q' calls to are required. Each call makes a single query to the model f, i.e., 1/2 bad queries on average. This yields the number of bad queries in the theorem (T iterations with Td/2 bad queries per iteration). § ADDITIONAL FIGURES
http://arxiv.org/abs/2306.04078v1
20230607003953
Parametrically driven pure-Kerr temporal solitons in a chip-integrated microcavity
[ "Grégory Moille", "Miriam Leonhardt", "David Paligora", "Nicolas Englebert", "François Leo", "Julien Fatome", "Kartik Srinivasan", "Miro Erkintalo" ]
physics.optics
[ "physics.optics" ]
[email protected] ^1Joint Quantum Institute, NIST/University of Maryland, College Park, USA ^2Microsystems and Nanotechnology Division, National Institute of Standards and Technology, Gaithersburg, USA ^3Department of Physics, University of Auckland, Auckland 1010, New Zealand ^4The Dodd-Walls Centre for Photonic and Quantum Technologies, New Zealand ^5Service OPERA-Photonique, Université libre de Bruxelles (U.L.B.), 50 Avenue F. D. Roosevelt, CP 194/5, B-1050 Brussels, Belgium ^6Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS Université de Bourgogne, Dijon, France The discovery that externally-driven nonlinear optical resonators can sustain ultrashort pulses corresponding to coherent optical frequency combs has enabled landmark advances in applications from telecommunications to sensing. The main research focus has hitherto been on resonators with purely cubic (Kerr-type) nonlinearity that are externally-driven with a monochromatic continuous wave laser – in such systems, the solitons manifest themselves as unique attractors whose carrier frequency coincides with that of the external driving field. Recent experiments have, however, shown that a qualitatively different type of temporal soliton can arise via parametric down-conversion in resonators with simultaneous quadratic and cubic nonlinearity. In contrast to conventional solitons in pure-Kerr resonators, these parametrically driven solitons come in two different flavours with opposite phases, and they are spectrally centred at half of the frequency of the driving field. Here, we theoretically predict and experimentally demonstrate that parametrically driven solitons can also arise in resonators with pure Kerr nonlinearity under conditions of bichromatic driving. In this case, the solitons arise through four-wave mixing mediated phase-sensitive amplification, come with two distinct phases, and have a carrier frequency in between the two external driving fields. Our experiments are performed in an integrated silicon nitride microcavity, and we observe frequency comb spectra in good agreement with theoretical predictions. In addition to representing a fundamental discovery of a new type of temporal dissipative soliton, our results constitute the first unequivocal realisation of parametrically driven soliton frequency combs in a microcavity platform compatible with foundry-ready mass fabrication. Parametrically driven pure-Kerr temporal solitons in a chip-integrated microcavity Miro Erkintalo^3,4 July 31, 2023 ================================================================================== § INTRODUCTION The injection of monochromatic continuous wave (CW) laser light into dispersive optical resonators with purely Kerr-type χ^(3) nonlinearity can lead to the generation of localized structures known as dissipative Kerr cavity solitons (CSs) <cit.>. These CSs correspond to ultrashort pulses of light that can persist within the resonator [Fig. <ref>(a)], indefinitely maintaining constant shape and energy <cit.>. While first observed in macroscopic optical fiber ring resonators <cit.>, CSs have attracted particular attention in the context of monolithic Kerr microcavities <cit.>, where they underpin the generation of coherent and broadband optical frequency combs <cit.>. By offering a route to coherent frequency comb generation in chip-integrated, foundry-ready platforms, CSs have enabled ground breaking advances in applications including telecommunications <cit.>, artificial intelligence <cit.>, astronomy <cit.>, frequency synthesis <cit.>, microwave generation <cit.>, and distance measurements <cit.>. The conventional CSs that manifest themselves in resonators with pure Kerr nonlinearity sit atop a CW background, and they gain their energy through four-wave-mixing (FWM) interactions with that background <cit.>. In the frequency domain, the solitons are (to first order) centred around the frequency of the external CW laser that drives the resonator [Fig. <ref>(a)]. They are (barring some special exceptions <cit.>) unique attracting states: except for trivial time translations, all the CSs that exist for given system parameters are identical. These features can be disadvantageous or altogether prohibitive for selected applications: noise on the external CW laser can degrade the coherence of nearby comb lines, removal of the CW background may require careful spectral filtering, whilst applications that require coexistence of distinguishable binary elements <cit.> are fundamentally beyond reach. Interestingly, recent experiments reveal that qualitatively different types of CSs can exist in resonators that display a quadratic χ^(2) in addition to a cubic χ^(3) nonlinearity [Fig. <ref>(b)]; in particular, degenerate optical parametric oscillators driven at 2ω_0 can support CSs at ω_0 <cit.>. In this configuration, the solitons are parametrically driven through the quadratic down-conversion of the externally-injected field, which endows them with fundamental differences compared to the conventional CSs emerging in monochromatically-driven, pure-Kerr resonators. Specifically, parametrically driven cavity solitons (PDCSs) are spectrally separated from the driving frequency (e.g. ω_0 versus 2ω_0), and they come in two binary forms with opposite phase. These traits render PDCSs of interest for an altogether new range of applications. Optical PDCSs have so far been generated only via the quadratic χ^(2) nonlinearity, which is not intrinsically available in integrated (foundry-ready) resonator platforms, such as silicon <cit.> or silicon nitride <cit.>. However, it is well-known that phase-sensitive amplification analogous to χ^(2) parametric down-conversion can also be realised in pure Kerr resonators when driven with two lasers with different carrier frequencies <cit.>, allowing e.g. for novel random number generators <cit.> and coherent optical Ising machines <cit.>. A natural question that arises is: is it possible to generate PDCSs in foundry-ready, pure-Kerr resonators with bichromatic driving? Whilst a related question has been theoretically explored in the context of diffractive Kerr-only resonators <cit.>, the presence of dispersion substantially changes the physics of the problem. The impact of bichromatic driving in the dynamics of conventional Kerr CSs has also been considered <cit.>, but the possibility of using the scheme to generate temporal PDCSs remains unexplored. Here, we theoretically predict and experimentally demonstrate that a dispersive resonator with pure Kerr nonlinearity can support PDCSs in the presence of bichromatic driving [Fig. <ref>(c)]. We reveal that, under appropriate conditions, a signal field with carrier frequency in between two spectrally-separated driving fields obeys the damped, parametrically driven nonlinear Schrödinger equation (PDNLSE) that admits PDCS solutions, and we unveil the system requirements for the practical excitation of such solutions. Our experiments are performed in a 23 μm-radius, chip-integrated silicon nitride microring resonator whose dispersion is judiciously engineered to facilitate PDCS generation at 253 THz (1185 nm) when bichromatically pumping at 314 THz (955 nm) and 192 THz (1560 nm). We observe PDCS frequency comb spectra that are in good agreement with numerical simulations, as well as clear signatures of the anticipated ℤ_2 symmetry, i.e., coexistence of two PDCSs with opposite phase. By revealing a fundamentally new pathway for the generation of coherent PDCS frequency combs far from any pump frequency, in a platform that has direct compatibility with foundry-ready fabrication, our work paves the way for integrated, low-noise frequency comb generation in new spectral regions, as well as photonic integration of applications requiring combs with a binary degree of freedom. § RESULTS We first summarise the main points that lead to the prediction of PDCSs in bichromatically-driven Kerr resonators [for full details, see Methods]. To this end, we consider a resonator made out of a dispersive, χ^(3) nonlinear waveguide that is driven with two coherent CW fields with angular frequencies ω_± [see Fig. <ref>(c)]. The dispersion of the resonator is described by the integrated dispersion <cit.> at the cavity resonance ω'_0 (apostrophes highlight resonance frequencies throughout the article) closest to the frequency : D_int(μ) = ω'_μ - ω'_0 - μ D_1 = ∑_k≥ 2D_k/k!μ^k. Here, μ is a relative mode number with respect to the resonance ω_0' and D_1/(2π) is the cavity free-spectral range (FSR) at ω'_0. The terms D_k with k>1 account for deviations of the resonance frequencies ω'_μ from an equidistant grid defined by ω'_0 + μ D_1. Under particular conditions [see Methods], the evolution of the slowly-varying electric field envelope centred at ω_0 can be shown to be (approximately) governed by the PDNLSE, with the parametric driving ensuing from non-degenerate FWM driven by the intracavity fields at the pump frequencies [ω_+ + ω_- →ω_μ + ω_-μ, see Fig. <ref>(d)]. (Note: in stark contrast to standard Kerr CSs, for which only one comb line is externally driven, all of the components of a PDCS frequency comb are separately driven via non-degenerate FWM.) Because the PDNLSE is well-known to admit PDCS solutions <cit.>, it follows that the system may support such solitons with a carrier frequency ω_0 in between the two driving frequencies, provided however that the system parameters – particularly resonator dispersion – are conducive for soliton existence. The resonator dispersion must meet three key conditions for PDCS excitation to be viable [Methods]. First, for solitons to exist, the dispersion around the degenerate FWM frequency ω_0 must be anomalous, i.e., D_2 > 0 in Eq. (<ref>). Second, the effective detuning [see Methods] between the degenerate FWM frequency (ω_0) and the closest cavity resonance (ω'_0) must be within the range of soliton existence, essentially requiring that the degenerate FWM process ω_+ + ω_- → 2ω_0 (approximately) satisfies linear phase-matching [Fig. <ref>(e)]. This second condition can be written as , where ± p correspond to the modes excited by the driving lasers at ω_±. Given that D_2 > 0, this requires at least one higher-even-order dispersion coefficient (e.g. D_4) to be negative. Third, the intracavity field amplitudes at the driving frequencies, |E_±|, must remain (approximately) homogeneous and stationary to ensure a constant parametric driving strength for the PDCS field E_0 centred at ω_0 [Fig. <ref>(f)]. This final condition can be met by ensuring dispersion at the driving frequencies is (i) normal (or driving amplitudes small), such that the corresponding intracavity fields do not undergo pattern forming (modulation) instabilities <cit.>, and (ii) such that the temporal walk-off between the driving frequencies ω_± and the signal frequency ω_0 is sufficiently large so as to mitigate pump depletion in the vicinity of the soliton that would otherwise break the homogeneity of the fields at ω_± [Fig. <ref>(f)]. As will be demonstrated below, all of these conditions can be met through judicious dispersion engineering that is within the reach of contemporary microphotonic fabrication. 5pt Simulations. Before discussing our experiments, we present results from numerical simulations that illustrate the salient physics. Our simulations are based upon a full iterative “Ikeda” map of the system without any approximations [Methods], and they consider a toy resonator with 25 GHz FSR and minimal dispersion necessary for PDCS existence [see Fig. <ref>(a)]. Specifically, we assume a quartic dispersion with D_2 = 2π× 4.1 kHz and D_4 = -2π× 33 mHz, yielding D_int(p) + D_int(-p)≈ 0 for pump frequency shift Ω_p = 2π× 30.4 THz (corresponding to mode number p = 1217). We assume for simplicity that the two driving fields are coincident on their respective linear cavity resonances (zero detuning), and both carry CW laser power of about 140 mW [see Methods for other parameters]. Because the group-velocity dispersion at the pump frequencies is normal, modulational instabilities are suppressed and the intracavity fields converge to stable homogeneous states with equal circulating CW power of about 43 W, thus yielding an effective parametric driving strength and detuning within the regime of PDCS existence [see Methods]. Figure <ref>(b) shows the evolution of the numerically simulated intracavity intensity profile with an initial condition consisting of two hyperbolic secant pulses with opposite phases. As can be seen, after a short transient, the field reaches a steady-state that is indicative of two pulses circulating around the resonator. The pulses sit atop a rapidly oscillating background that is due to the beating between the quasi-homogeneous fields at the pump frequencies [Fig. <ref>(c)]. Correspondingly, the spectrum of the simulation output [Fig. <ref>(d)] shows clearly the presence of a hyperbolic secant-shaped feature that sits in between the strong quasi-monochromatic components at the pump frequencies. In accordance with PDCS theory [see Methods], there is no significant CW peak at the parametric signal frequency ω_0 at which the solitons are spectrally centred. To highlight the phase disparity of the steady-state pulses, we apply a numerical filter to remove the quasi-monochromatic intracavity components around the pump frequencies, and plot in Fig. <ref>(e) the real part of the complex intracavity electric field envelope. The simulation results in Fig. <ref>(e) are compared against the real parts of the exact, analytical PDCS solutions [Methods], and we clearly observe excellent agreement. The results in Fig. <ref>(a)–(e) corroborate the fundamental viability of our scheme. However, they were obtained assuming a completely symmetric dispersion profile with no odd-order terms, which may be difficult to realise even with state-of-the-art microphotonic fabrication (including the resonators considered in our experiments). We find, however, that PDCSs can exist even in the presence of odd-order-dispersion, albeit in a perturbed form. This point is highlighted in Figs. <ref>(f)–(h), which show results from simulations with all parameters as in Fig. <ref>(a)–(e) except an additional non-zero third-order dispersion term D_3 = -2π× 58 Hz. As for conventional (externally-driven) Kerr CSs <cit.>, we find that third-order dispersion causes the solitons to emit dispersive radiation at a spectral position determined by the phase-matching condition D_int(μ_DW)≈ (ω_0-ω_0') [Fig. <ref>(g)]. This emission results in the solitons experiencing constant drift in the temporal domain, and endows them with oscillatory tails [Fig. <ref>(h)]. Yet, as can clearly be seen, the PDCSs continue to exist in two distinct forms with near-opposite phase. It is worth noting that, for the parameters considered in Fig. <ref>(f)–(h), the low-frequency driving field experiences anomalous group-velocity dispersion; however, the intracavity intensity at that frequency is below the modulation instability threshold <cit.>, thus allowing the corresponding field to remain quasi-homogeneous (the modulation on the total intensity profile arises solely from the linear beating between the different fields). Experiments. For experimental demonstration [see Fig. <ref>(a) and Methods], we use a microring resonator made from a 690 nm-thick, 850 nm-wide silicon nitride layer embedded in fused silica, fabricated in a commercial foundry. The ring exhibits a radius of 23 μm, thus yielding a free-spectral range of about 1 THz. We use two external cavity diode lasers to drive the resonator: one tunable in the telecommunications C-band (from 186 THz to 198 THz, i.e., from 1613 nm to 1515 nm) and the other tunable from 306 THz to 330 THz (980 nm to 910 nm). Both driving fields are optically amplified and combined using a wavelength-division multiplexer (WDM) before being coupled into the resonator via a pulley scheme that ensures efficient coupling at all the relevant frequencies <cit.>. At the output of the resonator, 90% of the signal is routed to an optical spectrum analyzer for analysis. The remaining 10% is passed through a bandpass filter to remove spectral components around the driving frequencies, thus allowing to isolate the parametrically-generated signal field for characterisation. The orange curve in Fig. <ref>(b) depicts an estimate of the resonator's integrated dispersion around a cavity mode at 253 THz, obtained through a combination of finite-element-modelling and fitting to our experimental observations [see Methods]. This data is consistent with experimentally measured resonance frequencies (blue circles), yet we caution that our inability to probe the resonances around 253 THz prevents unequivocal evaluation of the dispersion at that frequency. The estimated dispersion can be seen to be such that the requisite phase-matching for generating a PDCS at 253 THz (δω≈ 0) can be satisfied, provided that the pump lasers are configured to drive cavity modes at 314 THz and 192 THz [Fig. <ref>(c)]. In our experiments, we set the on-chip driving power for both driving fields to be about 150 mW and tune the high-frequency pump to the cavity mode at 314 THz. We then progressively tune the low-frequency pump to the cavity mode at 192 THz (from blue to red), maintaining the high-frequency pump at a fixed frequency. As the low-frequency pump tunes into resonance, we initially observe non-degenerate parametric oscillation characterised by the generation of two CW components symmetrically detuned about 253 THz. These CW components progressively shift closer to each other as the pump tunes into the resonance, concomitant with the formation of a frequency comb around the degenerate FWM frequency ω_0 [see Fig. <ref>(c)]. To characterise the comb noise, we performed a heterodyne beat measurement using a helper laser at 230 THz within the vicinity of a single comb line. Initially, no beat note is observed, which is characteristic of an unstable, non-solitonic state within the resonator. Remarkably, as the 192 THz driving field is tuned further into resonance, we observe that the parametric signals reach degeneracy, concomitant with the emergence of a broadband comb state with smooth spectral envelope [Fig. <ref>(e)] and a heterodyne beat note (comparable with the helper laser linewidth of 250 kHz) that is considerably narrower than the 300 MHz microcavity linewidth [Fig. <ref>(f) and Supplementary Figure 1]. The emergence of the smooth comb state [Fig. <ref>(e)] is associated with an abrupt drop in the photodetector signal recorded around 253 THz, giving rise to a noticeable step-like feature [Fig. <ref>(f)]. Similar steps are well-known signatures of conventional CSs in monochromatically-driven Kerr resonators <cit.>. Moreover, as shown in Fig. <ref>(e), the smooth spectral envelope observed in the step-region is in very good agreement with the spectrum of a 24 fs (full-width at half-maximum) PDCS derived from numerical modelling that use estimated experimental parameters [see Supplementary Figure 2]. The simulations faithfully reproduce the main features of the experimentally observed spectrum, including a strong dispersive wave peak at about 210 THz. We note that the prominent dip at about 275 THz arises due to the frequency-dependence of the pulley coupler <cit.>, which was taken into account ad hoc when estimating the spectrum of the out-coupled PDCS [shown as blue curve in Fig. <ref> – see also Methods and Supplementary Figure 2]. It is interesting to note that, in addition to the frequency comb around the degenerate FWM frequency 253 THz, frequency combs arise also around both of the pump frequencies. These combs originate from FWM interactions between the pump fields and the comb lines around 253 THz, in a manner similar to spectral extension <cit.> and two-dimensional frequency comb <cit.> schemes studied in the context of conventional Kerr CSs. The combs around the pump frequencies share the line spacing with the comb around 253 THz, but there is a constant offset between the pump and PDCS combs. In our experiments, this comb offset is directly observable in the optical spectrum [inset of Fig. <ref>(e)] and found to be about 50 GHz ± 2 GHz (uncertainty defined by the optical spectrum analyzer resolution), which is in good agreement with the value of 49 GHz predicted by our modelling [see Methods]. All in all, given the considerable uncertainties in key experimental parameters (particularly dispersion and detunings), we find the level of agreement between the simulations and experiments remarkable. The results shown in Fig. <ref> are strongly indicative of PDCS generation in our experiments. Further confirmation is provided by observations of low-noise combs with complex spectral structures that afford a straightforward interpretation in terms of multi-PDCS states [Fig. <ref>]. Specifically, whilst a single PDCS circulating in the resonator is expected to yield a smooth spectral envelope, the presence of two (or more) PDCSs results in a spectral interference pattern whose details depend upon the soliton's relative temporal delay and – importantly – phase. Figures <ref>(a) and (b) show selected examples of multi-soliton comb spectra measured in our experiments. Also shown as solid curves are spectral envelopes corresponding to fields with two linearly superposed, temporally delayed PDCSs [Figs. <ref>(c) and (d) and Methods]. We draw particular attention to the fact that, in the measured data shown in Fig. <ref>(b), the comb component at the degenerate FWM frequency ω_0 is suppressed by about 40 dB compared to neighbouring lines, which is in stark contrast with results in Fig. <ref>(a), in which the degenerate FWM component is dominant. This suppression is indicative of a relative phase shift of 0.992π between the two solitons [Fig. <ref>(d)] – a clear signature of PDCSs. § DISCUSSION We have shown theoretically, numerically, and experimentally that dispersive resonators with a purely Kerr-type χ^(3) nonlinearity can support parametrically driven cavity solitons under conditions of bichromatic driving. Our theoretical analysis has revealed the salient conditions that the system dispersion must meet to allow for PDCS persistence, with approximation-free numerical simulations confirming the fundamental viability of the scheme. Experimentally, we realise suitable dispersion conditions in a chip-integrated silicon nitride microresonator, observing low-noise frequency comb states that evidence PDCS generation. Significantly, our measurements show spectral interference patterns that indicate the co-existence of two localized structures with opposite phase – a defining feature of PDCSs. Our work fundamentally predicts and demonstrates that dispersive Kerr resonators can support a new type of dissipative structure – the PDCS – in addition to conventional Kerr CSs. We envisage that studying the rich nonlinear dynamics <cit.>, interactions <cit.>, and characteristics (including quantum <cit.>) of pure-Kerr PDCSs will draw substantial future research interest, echoing the extensive exploration of conventional Kerr CS dynamics over the past decade <cit.>. In this context, to the best of our knowledge, the results reported in our work represent the first prediction and observation of dispersive wave emission by PDCSs in any physical system. From a practical vantage, our scheme offers a route to generate PDCS frequency combs in foundry-ready, chip-integrated platforms with characteristics that are fundamentally different from those associated with conventional Kerr CSs. For example, forming between the two input frequencies, PDCSs could permit comb generation at spectral regions where direct pump lasers may not be available. Moreover, the lack of a dominating CW component at the PDCS carrier frequency alleviates the need for careful spectral shaping, and could result in fundamental advantages to noise characteristics. We emphasise that PDCSs are underpinned by phase-sensitive amplification <cit.>, which can theoretically offer a sub-quantum-limited (squeezed) noise figure <cit.>. Finally, the fact that PDCSs come in two forms with opposite phase opens the doors to a new range of applications that require a binary degree of freedom, including all-optical random number generation and realisations of coherent optical Ising machines. Whilst the potential of PDCSs for such applications has been noted earlier <cit.>, our work provides for the first time a route for chip-integrated realisations with potential to CMOS-compatible mass manufacturing. § METHODS -10pt Simulation models. We first describe the theoretical models that describe the dynamics of bichromatically driven Kerr resonators and that underpin simulation results in our work. Our starting point is a polychromatic Ikeda-like map, which we will use to derive an extended mean-field Lugiato-Lefever equation that has been used in previous studies <cit.>. To this end, we consider a Kerr resonator made out of a dispersive waveguide [with length L and propagation constant β(ω)] that is driven with two coherent fields with angular frequencies ω_± [see Fig. <ref>(c)]. The evolution of the electric field envelope [referenced against the degenerate FWM frequency ω_0 = (ω_++ω_-)/2] during the mth transit around the resonator is governed by the generalized nonlinear Schrödinger equation: ∂ E^(m)(z,τ)/∂ z = iβ̂_S(i∂/∂τ)E^(m) + iγ |E^(m)|E^(m). Here z is a coordinate along the waveguide that forms the resonator, τ is time in a reference frame that moves with the group-velocity of light at ω_0, γ is the Kerr nonlinearity coefficient and the dispersion operator β̂_S(i∂/∂τ) = ∑_k≥ 2β_k/k!(i∂/∂τ)^k, with β_k = dβ/dω|_ω_0 the Taylor series expansion coefficients of β(ω) around ω_0. Note that the single electric field envelope E^(m)(τ,z) contains all the frequency components pertinent to the nonlinear interactions, including the fields at the pump frequencies ω_± and the signal frequency at ω_0. Note also that the Taylor series expansion coefficients β_k are linked to the resonance frequency expansion coefficients in Eq. (<ref>) as D_k ≈ -D_1^k+1 L β_k/(2π) <cit.>, such that D_int(μ) ≈ -D_1 L/2πβ̂_S(μ D_1). The Ikeda map consists of Eq. (<ref>) together with a boundary equation that describes the coupling of light into the resonator. Considering bichromatic driving, the boundary equation reads [see also Supplementary Note 1]: E^(m+1)(0,τ) = √(1-2α) E^(m)(L,τ)e^-iδ_0 + √(θ_+)E_in,+ e^-iΩ_pτ + imb_+ + √(θ_-)E_in,- e^iΩ_pτ + imb_-. Here α is half of the fraction of power dissipated by the intracavity field over one round trip, δ_0 = 2π k - β(ω_0) L is the linear phase detuning of the reference frequency ω_0 from the closest cavity resonance (with order k), E_in,± are the complex amplitudes of the driving fields at ω_±, respectively, Ω_p = pD_1 with p a positive integer represents the angular frequency shifts of the pumps from the reference frequency ω_0, and θ_± are the power transmission coefficients that describe the coupling of the driving fields into the resonator. The coefficients b_± allow us to introduce the phase detunings δ_± that describe the detunings of the pump frequencies from the cavity resonances closest to them (thus accounting for the fact that the frequency shift ω_0-ω_± may not be an exact integer multiple of D_1): b_± = δ_± -δ_0 +β̂_S(±Ω_p)L. Note that the phase detunings δ described above are related to the frequency detunings of the corresponding carrier frequency ω from the closest cavity resonances at ω' as δ≈ 2π (ω'-ω)/D_1. Before proceeding, we note that, in our specific configuration, only two out of the three detuning terms introduced above (δ_0 and δ_±) are independent. This is because the degenerate FWM frequency is completely determined by the pump frequencies viz. ω_0 = (ω_+ + ω_-)/2; therefore, the signal detuning δ_0 can be written in terms of the pump detunings δ_± as [see Supplementary Note 2]: δ_0 = δ_+ + δ_- + L[β̂_S(Ω_p)+β̂_S(-Ω_p)]/2. Substituting this expression for δ_0 into Eq. (<ref>) yields b_± = ± b, where b = δ_+ - δ_- + L[β̂_S(Ω_p)-β̂_S(-Ω_p)]/2. It can be shown [see Supplementary Note 3] that this coefficient describes the offset, Δ f, between the frequency combs forming around ω_0 and ω_± viz. Δ f = |b|D_1/(2π)^2. -10pt PDCS theory. All of the simulations presented in our work use the full Ikeda-like map defined by Eqs. (<ref>) and (<ref>). However, the system's ability to sustain PDCSs can be inferred more readily from the mean-field limit, obtained under the assumption that the intracavity envelope E^(m)(z,τ) evolves slowly over a single round trip (i.e., the cavity has a high finesse, and the linear and nonlinear phase shifts are all small). In this case, the Ikeda-like map described above can be averaged into the generalized Lugiato-Lefever mean-field equation similar to the one used, e.g., in refs. <cit.>. We write the equation in normalized form as [see Supplementary Note 4]: ∂ E(t,τ)/∂ t = [-1 + i(|E|^2 -Δ_0)+ iβ̂(i∂/∂τ)]E + S_+ e^-iΩ_pτ + iat + S_- e^iΩ_pτ - iat. Here t is a slow time variable that describes the evolution of the intracavity field over consecutive round trips (and is thus directly related to the index m of the Ikeda-like map), are the normalized strengths of the driving fields, Δ_0 = δ_0/α is the normalized detuning of the signal field, and the normalized dispersion operator β̂ is defined as Eq. (<ref>) but with normalized Taylor series coefficients β_k→ d_k = [2α/(|β_2|L)]^k/2β_kL/α. Finally, the coefficient a = b/α = Δ_+ - Δ_- + [β̂(Ω_p)-β̂(-Ω_p)]/2, where Δ_± = δ_±/α are the normalized detunings of the external driving fields. To avoid notational clutter, we use the symbol Ω_p to represent pump frequency shifts both in our dimensional and normalized equations. We now make the assumption that the intracavity fields E_± at the pump frequencies are homogeneous and stationary. (Note: this assumption is not used in any of our simulations.) To this end, we substitute the ansatz E(t,τ) = E_0(t,τ) + E_+e^-iΩ_pτ + ia t + E_-e^iΩ_pτ - ia t into Eq. (<ref>). We then assume further that the (soliton) spectrum around the degenerate FWM frequency (the Fourier transform of E_0(t,τ)) does not exhibit significant overlap with the pump frequencies. This allows us to separate terms that oscillate with different frequencies, yielding the following equation for the signal field: ∂ E_0(t,τ)/∂ t =[-1 + i(|E_0|^2 -Δ_eff) + iβ̂(i∂/∂τ) ]E_0 + 2iE_+E_- E_0^∗, where the effective detuning Δ_eff = Δ_0 - 2(Y_+ + Y_-) with Y_±=|E_±|^2 includes both linear and nonlinear (cross-phase modulation) phase shifts. Equation (<ref>) has the precise form of the parametrically-driven nonlinear Schrödinger equation <cit.> with effective detuning Δ_eff and parametric driving coefficient ν = 2iE_+E_-. Accordingly, assuming that the resonator group-velocity dispersion is anomalous at the signal frequency (β_2 < 0), the equation admits exact (parametrically-driven) soliton solutions of the form <cit.>: E_0(τ) = √(2)ζsech(ζτ)e^i(ϕ + θ), where cos(2ϕ) = 1/|ν|, ζ = √(Δ_eff+|ν|sin(2ϕ)), and . It should be clear from the last term of Eq. (<ref>) that all of the frequency components of E_0 are parametrically driven. This is particularly evident when expanding the field as a Fourier series, E_0(t,τ)=∑_n c_n(t) e^-inD_1τ: the equation of motion for each modal amplitude c_n will include a parametric driving term 2iE_+E_-c_-n^∗. Of course, the viability of sustaining the PDCS solution described by Eq. (<ref>) in an actual bichromatically-driven Kerr resonator system is contingent on the applicability of the assumptions outlined above. As described in the main text, the assumption that the intracavity fields E_± at the pump frequencies are homogeneous and stationary lead to the requirements of dispersive walk-off and suppression of modulation instabilities. The requirement for phase-matching of the degenerate FWM process ensues from the fact that stable PDCS solutions generically exist only if the effective detuning Δ_eff is sufficiently small <cit.>. Indeed, recalling Eq. (<ref>), we have Δ_eff = Δ_+ + Δ_- + β̂(Ω_p) + β̂(-Ω_p)/2 - 2(Y_+ + Y_-). Considering typical parameters, Δ_eff and |ν| = 2√(Y_+Y_-) are of the order of unity for stable solitons to exist <cit.>, while the detunings Δ_± can be assumed small to ensure that sufficient intracavity powers Y_± can be attained without excessive driving powers X_± = |S_±|^2. This implies, then, that the pump frequency shift Ω_p must satisfy [β̂(Ω_p) + β̂(-Ω_p)]≈ 0. Unpeeling the normalization, and converting to the integrated dispersion defined as Eq. (<ref>) of the main text, shows that this condition is equivalent with the linear phase-matching of degenerate FWM: D_int(p) + D_int(-p)≈ 0. -10pt Resonator used in experiments. The chip-integrated microring resonator used in our experiments was fabricated in a commercially-available foundry service. The resonators are made of a 690 nm-thick layer of silicon nitride that is fully embedded in fused silica. The ring has a width of 850 nm and a radius of 23 μm, thus yielding a round trip length L = 144.5 μ m. Light is coupled into the ring via a 460 nm-wide integrated bus waveguide, with a 32 μm-long pulley-coupler ensuring good coupling at all the different frequencies of interest (ω_0, ω_±). The resonator has intrinsic and loaded Q-factors of 1.5×10^6 and 0.75×10^6, respectively, corresponding to a finesse of ℱ≈ 3000 and a resonance linewidth of . The chip has an input-to-output insertion loss of about 5.6 dB at 980 nm and 8.4 dB at 1550 nm. -10pt Resonator dispersion and thermal nonlinearity. The theoretically estimated resonator dispersion [orange curve shown in Fig. <ref>(b)] was obtained in two steps. We first calculated the theoretical resonance frequencies using finite-element modelling, and then slightly modified that data [see Supplementary Figure 3 for a comparison of the two integrated dispersion curves] to match the PDCS simulations to experimentally obtained spectra. Experimentally, we characterized the dispersion at various spectral regions by measuring the resonance frequencies using a set of widely tunable lasers and a high-resolution wavemeter. Unfortunately, the unavailability of a suitable laser around the degenerate FWM frequency (253 THz) prevented us from directly probing the dispersion at that frequency. Because we are not able to probe the dispersion around 253 THz, it is not possible to unequivocally compare experimentally measured dispersion with our theoretical estimate. This is because the integrated dispersion D_int depends upon the precise resonance frequency ω'_0 and the free-spectral range [D_1/(2π)] at ω'_0, which we are unable to probe experimentally. To nonetheless show that our measurements at different spectral regions are consistent with our theoretical estimate, we can use nonlinear least-squares to fit our experimental data to the theoretical data, and in doing so obtain experimental estimates for ω'_0 and D_1, which then allows us to compute the integrated dispersion. The blue dots in Fig. <ref>(a) were obtained using this procedure. The fitting also provides the one-standard-deviation errors for the parameter estimates, Δω'_0 and Δ D_1, which then allows us to compute the fitting errors for Δ D_int(μ) and Δδω(μ). We find that the maximum (across relative mode order μ) error in the estimated D_int is max[Δ D_int(μ)/(2π)] ≈ 0.50 GHz, yielding max[Δδω(μ)/(2π)] ≈ 0.35 GHz. These errors are smaller than the markers used in Figs. <ref>(b) and (c), which is why errorbars are not shown. Due to the resonator's small size, it exhibits a strong thermal nonlinearity <cit.>. We leverage this effect to achieve self-stabilization, such that the input lasers can remain free-running but still maintain near-constant detunings. In addition, the thermal nonlinearity causes the resonance frequencies to shift over several GHz as the pump laser(s) are tuned into resonance [see e.g. Fig. <ref>(f)], which we suspect is key to achieving phase-matched operation (and thus PDCS generation). We also note that the thermal nonlinearity may influence the resonator dispersion directly <cit.>; whilst this effect is generally weak (and under-examined), it is possible that it also influences the precise phase-matching conditions, thus playing a role in our experiments. A detailed study on the impact of the thermal nonlinearity to PDCS generation is beyond the scope of our present work. -10pt Simulation parameters. The simulations in Fig. 2 assume a critically-coupled (α = θ) resonator with a round trip length L ≈ 8.3 mm, nonlinearity coefficient γ = 1.2 W^-1km^-1, and finesse ℱ = π/α = 5000. The driving fields are positioned at an angular frequency shift ±Ω_p = 2π× 30.4 THz with respect to the degenerate FWM frequency, corresponding to relative mode number p = 1217. The dispersion coefficients are β_2 = -5 ps^2/km, β_3 = 0.45 ps^3/km and β_4 = 1.6× 10^-3ps^4/km, corresponding to D_2/(2π) = 4.06 kHz, D_3/(2π) = -57.90 Hz and D_4 = -0.03 Hz. The above parameters yield an effective (normalised) driving strength |ν| = 1.37 and detuning Δ_eff = 1.2 which are known to be in the regime of soliton existence <cit.>. As a matter of fact, the above parameters were found by looking for the driving powers and frequency shifts that yield these particular values for the driving strength and detuning. The simulations in Fig. <ref> and <ref> use experimental values quoted in the main text or in the resonator description above, with the addition that the nonlinearity coefficient was set to γ = 1 W^-1m^-1. The pump detunings were chosen such that, in Fig. <ref>, the effective driving strength |ν| = 1.28 and Δ_eff = 6, and in Fig. <ref>, |ν| = 1.15 and Δ_eff = 5. The effective detunings were coarsely tuned so as to match the simulations to the experimentally measured spectra. The simulation outcomes are not sensitive to the particular values of the driving strength ν used. With the parameters used to obtain the simulation results in Fig. <ref>, the coefficient b defined in Eq. (<ref>) was b=-0.307, yielding a comb frequency offset of Δ f = 49 GHz from Eq. (<ref>). -10pt Frequency-dependent coupling. All the simulations reported in our manuscript have been obtained using the model defined by Eqs. (<ref>) and (<ref>). However, as explained in the main text [see also Supplementary Figure 2], when comparing against experimentally measured spectra [Figs. <ref> and <ref>], the simulation outputs were post-processed to account for the frequency-dependent coupling, thus providing an estimate for the out-coupled spectrum. This was achieved by multiplying the simulated intracavity spectra with the frequency-dependent coupling coefficient [Supplementary Figure 2] obtained from rigorous coupled-mode simulations <cit.>. These coupled-mode simulations assumed the coupler length to be 31.25 μm, which was found to provide a better agreement with our experiments compared to the design value of 32 μm. This discrepancy is reasonable in terms of fabrication tolerances given the high sensitivity to the phase mismatch between the ring and waveguide modes and that any small discrepancy in the side-wall angle or waveguide width could cause a smaller effective pulley. However we note that the obtained length is well within fabrication tolerance of deep-UV stepper fabrication. Note that the frequency-dependent coupling was not included explicitly in our numerical simulation model for the sake of simplicity. -10pt Multi-soliton states. Because of pump depletion and finite dispersive walk-off, the PDCSs carve a depletion region onto the intracavity fields at the pump frequencies [see Fig. <ref>(f)]. These depletion regions are the time-domain manifestations of the frequency combs that form around the pump frequencies, and they give rise to long-range soliton interactions. Compounded by the system's periodic boundary conditions, stable multi-soliton states only exist at selected relative delays (or not at all) in our simulations. On the other hand, it is well-known (from studies of conventional Kerr CSs) that experimental systems exhibit imperfections (e.g. avoided mode crossings) which, along with oscillatory tails from dispersive waves, force multi-soliton states to only manifest themselves at some prescribed relative delays <cit.>. Because the PDCSs in our simulations exhibit long-range coupling, it is not possible to obtain a simulation of a multi-soliton state with the same relative delays as in our experiments, unless one has access to full details of the experimental system (including dispersion that captures possible avoided mode crossings), which we do not have. Because of the above, the theoretical PDCS fields in Fig. <ref>(c) and (d) were created from a single steady-state PDCS – obtained via simulations of Eqs. (<ref>) and (<ref>). Specifically, the two-soliton fields were obtained by linearly adding together two replicas of the single steady-state PDCS state, with the relative delay (Δτ) and phase (Δϕ) between the replicas inferred from nonlinear least squares fitting to the experimentally observed spectral interference pattern. For both in- and out-of-phase states, our fitting algorithm yields two possible configurations (Δτ, Δϕ) that identically minimise the sum of the squared residuals. For the in-phase configuration, these are (533 fs,1×10^-3π) and (467 fs,3×10^-4π), and for the out-of-phase configuration we have (525 fs, 0.99π) and (475 fs, 1.01π). In Figs. <ref>(c) and (d), we plot the configurations associated with the larger delay. The one-standard-deviation errors for the fits are all smaller than (0.4 fs, 0.01π). § ACKNOWLEDGEMENTS G. M. and K. S. acknowledge support from the NIST-on-a-chip program. J. F. acknowledges the CNRS (IRP WALL-IN project). § AUTHOR CONTRIBUTIONS G. M. performed all the experiments and assisted in the interpretation of the results. M. L. and D. P contributed to the theoretical development of the scheme and performed initial simulations to confirm the fundamental viability of the scheme. N. E. and F. L. provided guidance on parametrically-driven soliton theory. J. F. assisted in the interpretation of Kerr cavity physics. K. S. supervised and obtained funding for the experiments. M. E. developed the theory, performed the simulations, and wrote the manuscript with input from all the authors. § DATA AVAILABILITY The data that support the plots within this paper and other findings of this study are available from M.E. upon reasonable request. § COMPETING FINANCIAL INTERESTS The authors declare no competing financial interests. bibstyle2nonotes 10 leo_temporal_2010 F. Leo, S. Coen, P. Kockaert, S.-P. Gorza, P. Emplit, and M. Haelterman, Temporal cavity solitons in one-dimensional Kerr media as bits in an all-optical buffer, Nature Photon 4, 471–476 (2010). kippenberg_dissipative_2018 T. J. Kippenberg, A. L. Gaeta, M. Lipson, and M. L. Gorodetsky, Dissipative Kerr solitons in optical microresonators, Science 361 (2018). wabnitz_suppression_1993 S. Wabnitz, Suppression of interactions in a phase-locked soliton optical memory, Opt. Lett., OL 18, 601–603 (1993). herr_temporal_2014 T. Herr, V. Brasch, J. D. Jost, C. Y. Wang, N. M. Kondratiev, M. L. Gorodetsky, and T. J. Kippenberg, Temporal solitons in optical microresonators, Nature Photon 8, 145–152 (2014). brasch_photonic_2016 V. Brasch, M. Geiselmann, T. Herr, G. Lihachev, M. H. P. Pfeiffer, M. L. Gorodetsky, and T. J. Kippenberg, Photonic chip–based optical frequency comb using soliton Cherenkov radiation, Science 351, 357–360 (2016). pasquazi_micro-combs_2018 A. Pasquazi, M. Peccianti, L. Razzari, D. J. Moss, S. Coen, M. Erkintalo, Y. K. Chembo, T. Hansson, S. Wabnitz, P. Del’Haye, X. Xue, A. M. Weiner, and R. Morandotti, Micro-combs: A novel generation of optical sources, Physics Reports 729, 1–81 (2018). marin-palomo_microresonator-based_2017 P. Marin-Palomo, J. N. Kemal, M. Karpov, A. Kordts, J. Pfeifle, M. H. P. Pfeiffer, P. Trocha, S. Wolf, V. Brasch, M. H. Anderson, R. Rosenberger, K. Vijayan, W. Freude, T. J. Kippenberg, and C. Koos, Microresonator-based solitons for massively parallel coherent optical communications, Nature 546, 274–279 (2017). corcoran_ultra-dense_2020 B. Corcoran, M. Tan, X. Xu, A. Boes, J. Wu, T. G. Nguyen, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, and D. J. Moss, Ultra-dense optical data transmission over standard fibre with a single chip source, Nature Communications 11, 2568 (2020). xu_11_2021 X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, 11 TOPS photonic convolutional accelerator for optical neural networks, Nature 589, 44–51 (2021). feldmann_parallel_2021 J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, Parallel convolutional processing using an integrated photonic tensor core, Nature 589, 52–58 (2021). suh_searching_2019 M.-G. Suh, X. Yi, Y.-H. Lai, S. Leifer, I. S. Grudinin, G. Vasisht, E. C. Martin, M. P. Fitzgerald, G. Doppmann, J. Wang, D. Mawet, S. B. Papp, S. A. Diddams, C. Beichman, and K. Vahala, Searching for exoplanets using a microresonator astrocomb, Nature Photonics 13, 25–30 (2019). obrzud_microphotonic_2019 E. Obrzud, M. Rainer, A. Harutyunyan, M. H. Anderson, J. Liu, M. Geiselmann, B. Chazelas, S. Kundermann, S. Lecomte, M. Cecconi, A. Ghedina, E. Molinari, F. Pepe, F. Wildi, F. Bouchy, T. J. Kippenberg, and T. Herr, A microphotonic astrocomb, Nature Photonics 13, 31–35 (2019). spencer_optical-frequency_2018 D. T. Spencer, T. Drake, T. C. Briles, J. Stone, L. C. Sinclair, C. Fredrick, Q. Li, D. Westly, B. R. Ilic, A. Bluestone, N. Volet, T. Komljenovic, L. Chang, S. H. Lee, D. Y. Oh, M.-G. Suh, K. Y. Yang, M. H. P. Pfeiffer, T. J. Kippenberg, E. Norberg, L. Theogarajan, K. Vahala, N. R. Newbury, K. Srinivasan, J. E. Bowers, S. A. Diddams, and S. B. Papp, An optical-frequency synthesizer using integrated photonics, Nature 557, 81–85 (2018). lucas_ultralow-noise_2020 E. Lucas, P. Brochard, R. Bouchand, S. Schilt, T. Südmeyer, and T. J. Kippenberg, Ultralow-noise photonic microwave synthesis using a soliton microcomb-based transfer oscillator, Nature Communications 11, 374 (2020). kwon_ultrastable_2022 D. Kwon, D. Jeong, I. Jeon, H. Lee, and J. Kim, Ultrastable microwave and soliton-pulse generation from fibre-photonic-stabilized microcombs, Nature Communications 13, 381 (2022). suh_soliton_2018 M.-G. Suh and K. J. Vahala, Soliton microcomb range measurement, Science 359, 884–887 (2018). riemensberger_massively_2020 J. Riemensberger, A. Lukashchuk, M. Karpov, W. Weng, E. Lucas, J. Liu, and T. J. Kippenberg, Massively parallel coherent laser ranging using a soliton microcomb, Nature 581, 164–170 (2020). nielsen_coexistence_2019 A. U. Nielsen, B. Garbin, S. Coen, S. G. Murdoch, and M. Erkintalo, Coexistence and Interactions between Nonlinear States with Different Polarizations in a Monochromatically Driven Passive Kerr Resonator, Phys. Rev. Lett. 123, 013902 (2019). anderson_coexistence_2017 M. Anderson, Y. Wang, F. Leo, S. Coen, M. Erkintalo, and S. G. Murdoch, Coexistence of Multiple Nonlinear States in a Tristable Passive Kerr Resonator, Phys. Rev. X 7, 031031 (2017). hansson_frequency_2015 T. Hansson and S. Wabnitz, Frequency comb generation beyond the Lugiato&#x2013;Lefever equation: multi-stability and super cavity solitons, J. Opt. Soc. Am. B, JOSAB 32, 1259–1266 (2015). xu_spontaneous_2021 G. Xu, A. U. Nielsen, B. Garbin, L. Hill, G.-L. Oppo, J. Fatome, S. G. Murdoch, S. Coen, and M. Erkintalo, Spontaneous symmetry breaking of dissipative optical solitons in a two-component Kerr resonator, Nat Commun 12, 4023 (2021). lucas_spatial_2018 E. Lucas, G. Lihachev, R. Bouchand, N. G. Pavlov, A. S. Raja, M. Karpov, M. L. Gorodetsky, and T. J. Kippenberg, Spatial multiplexing of soliton microcombs, Nature Photon 12, 699–705 (2018). takesue_10_2016 H. Takesue and T. Inagaki, 10 GHz clock time-multiplexed degenerate optical parametric oscillators for a photonic Ising spin network, Opt. Lett., OL 41, 4273–4276 (2016). okawachi_quantum_2016 Y. Okawachi, M. Yu, K. Luke, D. O. Carvalho, M. Lipson, and A. L. Gaeta, Quantum random number generator using a microresonator-based Kerr oscillator, Opt. Lett., OL 41, 4194–4197 (2016). okawachi_dynamic_2021 Y. Okawachi, B. Y. Kim, Y. Zhao, X. Ji, M. Lipson, M. Lipson, A. L. Gaeta, and A. L. Gaeta, Dynamic control of photon lifetime for quantum random number generation, Optica, OPTICA 8, 1458–1461 (2021). inagaki_Large-scale_2016 T. Inagaki, K. Inaba, R. Hamerly, K. Inoue, Y. Yamamoto, and H. Takesue, Large-scale Ising spin network based on degenerate optical parametric oscillators, Nature Photon 10, 415–419 (2016). mohseni_ising_2022 N. Mohseni, P. L. McMahon, and T. Byrnes, Ising machines as hardware solvers of combinatorial optimization problems, Nat Rev Phys pages 1–17 (2022). englebert_parametrically_2021 N. Englebert, F. De Lucia, P. Parra-Rivas, C. M. Arabí, P.-J. Sazio, S.-P. Gorza, and F. Leo, Parametrically driven Kerr cavity solitons, Nat. Photon. 15, 857–861 (2021). bruch_pockels_2021 A. W. Bruch, X. Liu, Z. Gong, J. B. Surya, M. Li, C.-L. Zou, and H. X. Tang, Pockels soliton microcomb, Nature Photonics 15, 21–27 (2021). thomson_roadmap_2016 D. Thomson, A. Zilkie, J. E. Bowers, T. Komljenovic, G. T. Reed, L. Vivien, D. Marris-Morini, E. Cassan, L. Virot, J.-M. Fédéli, J.-M. Hartmann, J. H. Schmid, D.-X. Xu, F. Boeuf, P. O’Brien, G. Z. Mashanovich, and M. Nedeljkovic, Roadmap on silicon photonics, Journal of Optics 18, 073003 (2016). liu_high-yield_2021 J. Liu, G. Huang, R. N. Wang, J. He, A. S. Raja, T. Liu, N. J. Engelsen, and T. J. Kippenberg, High-yield, wafer-scale fabrication of ultralow-loss, dispersion-engineered silicon nitride photonic circuits, Nature Communications 12, 2236 (2021). luSHG2020 X. Lu, G. Moille, A. Rao, D. A. Westly, and K. Srinivasan, Efficient photoinduced second-harmonic generation in silicon nitride photonics, Nature Photonics pages 1–6 (2020). NitissNat.Photon.2022 E. Nitiss, J. Hu, A. Stroganov, and C.-S. Brès, Optically reconfigurable quasi-phase-matching in silicon nitride microresonators, Nature Photonics 16, 134–141 (2022). mecozzi_long-term_1994 A. Mecozzi, W. L. Kath, P. Kumar, and C. G. Goedde, Long-term storage of a soliton bit stream by use of phase-sensitive amplification, Opt. Lett., OL 19, 2050–2052 (1994). agrawal_nonlinear_nodate G. Agrawal, Nonlinear Fiber Optics - 5th Edition, . radic_two-pump_2003 S. Radic and C. J. McKinstrie, Two-pump fiber parametric amplifiers, Optical Fiber Technology 9, 7–23 (2003). okawachi_dual-pumped_2015 Y. Okawachi, M. Yu, K. Luke, D. O. Carvalho, S. Ramelow, A. Farsi, M. Lipson, and A. L. Gaeta, Dual-pumped degenerate Kerr oscillator in a silicon nitride microresonator, Opt. Lett., OL 40, 5267–5270 (2015). Andrekson_fiber-based_2020 P. A. Andrekson and M. Karlsson, Fiber-based phase-sensitive optical amplifiers and their applications, Adv. Opt. Photon., AOP 12, 367–428 (2020). de_valcarcel_phase-bistable_2013 G. J. de Valcárcel and K. Staliunas, Phase-bistable Kerr cavity solitons and patterns, Phys. Rev. A 87, 043802 (2013). hansson_bichromatically_2014 T. Hansson and S. Wabnitz, Bichromatically pumped microresonator frequency combs, Phys. Rev. A 90, 013811 (2014). ceoldo_multiple_2016 D. Ceoldo, A. Bendahmane, J. Fatome, G. Millot, T. Hansson, D. Modotto, S. Wabnitz, and B. Kibler, Multiple four-wave mixing and Kerr combs in a bichromatically pumped nonlinear fiber ring cavity, Opt. Lett., OL 41, 5462–5465 (2016). zhang_spectral_2020 S. Zhang, J. M. Silver, T. Bi, and P. Del’Haye, Spectral extension and synchronization of microcombs in a single microresonator, Nat Commun 11, 6384 (2020). moille_ultra-broadband_2021 G. Moille, E. F. Perez, J. R. Stone, A. Rao, X. Lu, T. S. Rahman, Y. K. Chembo, and K. Srinivasan, Ultra-broadband Kerr microcomb through soliton spectral translation, Nat Commun 12, 7275 (2021). qureshi_soliton_2021 P. C. Qureshi, V. Ng, F. Azeem, L. S. Trainor, H. G. L. Schwefel, S. Coen, M. Erkintalo, and S. G. Murdoch, Soliton linear-wave scattering in a Kerr microresonator, Communications Physics 5, 123 (2022). taheri_all-optical_2022 H. Taheri, A. B. Matsko, L. Maleki, and K. Sacha, All-optical dissipative discrete time crystals, Nat Commun 13, 848 (2022). miles_parametrically_1984 J. W. Miles, Parametrically excited solitary waves, Journal of Fluid Mechanics 148, 451–460 (1984). barashenkov_stability_1991 I. V. Barashenkov, M. M. Bogdan, and V. I. Korobov, Stability Diagram of the Phase-Locked Solitons in the Parametrically Driven, Damped Nonlinear Schrödinger Equation, Europhysics Letters 15, 113 (1991). coen_universal_2013 S. Coen and M. Erkintalo, Universal scaling laws of Kerr frequency combs, Opt. Lett., OL 38, 1790–1792 (2013). coen_modeling_2013 S. Coen, H. G. Randle, T. Sylvestre, and M. Erkintalo, Modeling of octave-spanning Kerr frequency combs using a generalized mean-field Lugiato–Lefever model, Opt. Lett., OL 38, 37–39 (2013). jang_observation_2014 J. K. Jang, M. Erkintalo, S. G. Murdoch, and S. Coen, Observation of dispersive wave emission by temporal cavity solitons, Opt. Lett., OL 39, 5503–5506 (2014). milian_soliton_2014 C. Milián and D. V. Skryabin, Soliton families and resonant radiation in a micro-ring resonator near zero group-velocity dispersion, Opt. Express, OE 22, 3732–3739 (2014). moille_broadband_2019 G. Moille, Q. Li, T. C. Briles, S.-P. Yu, T. Drake, X. Lu, A. Rao, D. Westly, S. B. Papp, and K. Srinivasan, Broadband resonator-waveguide coupling for efficient extraction of octave-spanning microcombs, Optics Letters 44, 4737–4740 (2019). MoillearXiv2023 G. Moille, C. Li, J. Stone, M. Chojnacky, P. Shandilya, Y. K. Chembo, A. Dutt, C. Menyuk, and K. Srinivasan, Two-Dimensional Nonlinear Mixing Between a Dissipative Kerr Soliton and Continuous Waves for a Higher-Dimension Frequency Comb, arXiv (2023). leo_dynamics_2013 F. Leo, L. Gelens, P. Emplit, M. Haelterman, and S. Coen, Dynamics of one-dimensional Kerr cavity solitons, Optics Express 21, 9180–9191 (2013). yu_breather_2017 M. Yu, J. K. Jang, Y. Okawachi, A. G. Griffith, K. Luke, S. A. Miller, X. Ji, M. Lipson, and A. L. Gaeta, Breather soliton dynamics in microresonators, Nature Communications 8, 14569 (2017). lucas_breathing_2017 E. Lucas, M. Karpov, H. Guo, M. L. Gorodetsky, and T. J. Kippenberg, Breathing dissipative solitons in optical microresonators, Nature Communications 8, 736 (2017). jang_ultraweak_2013 J. K. Jang, M. Erkintalo, S. G. Murdoch, and S. Coen, Ultraweak long-range interactions of solitons observed over astronomical distances, Nature Photon 7, 657–663 (2013). cole_soliton_2017 D. C. Cole, E. S. Lamb, P. Del’Haye, S. A. Diddams, and S. B. Papp, Soliton crystals in Kerr resonators, Nature Photonics 11, 671–676 (2017). wang_universal_2017 Y. Wang, F. Leo, J. Fatome, M. Erkintalo, S. G. Murdoch, and S. Coen, Universal mechanism for the binding of temporal cavity solitons, Optica, OPTICA 4, 855–863 (2017). chembo_quantum_2016 Y. K. Chembo, Quantum dynamics of Kerr optical frequency combs below and above threshold: Spontaneous four-wave mixing, entanglement, and squeezed states of light, Physical Review A 93, 033820 (2016). bao_quantum_2021 C. Bao, M.-G. Suh, B. Shen, K. Şafak, A. Dai, H. Wang, L. Wu, Z. Yuan, Q.-F. Yang, A. B. Matsko, F. X. Kärtner, and K. J. Vahala, Quantum diffusion of microcavity solitons, Nature Physics 17, 462–466 (2021). yang2021squeezed Z. Yang, M. Jahanbozorgi, D. Jeong, S. Sun, O. Pfister, H. Lee, and X. Yi, A squeezed quantum microcomb on a chip, Nature Communications 12, 4781 (2021). guidry_quantum_2022 M. A. Guidry, D. M. Lukin, K. Y. Yang, R. Trivedi, and J. Vučković, Quantum optics of soliton microcombs, Nature Photonics 16, 52–58 (2022). McKinstrie_phase-sensitive_2004 C. J. McKinstrie and S. Radic, Phase-sensitive amplification in a fiber, Optics Express 12, 4973–4979 (2004). slavik_all-optical_2010 R. Slavík, F. Parmigiani, J. Kakande, C. Lundström, M. Sjödin, P. A. Andrekson, R. Weerasuriya, S. Sygletos, A. D. Ellis, L. Grüner-Nielsen, D. Jakobsen, S. Herstrøm, R. Phelan, J. O'Gorman, A. Bogris, D. Syvridis, S. Dasgupta, P. Petropoulos, and D. J. Richardson, All-optical phase and amplitude regenerator for next-generation telecommunications systems, Nature Photonics 4, 690–695 (2010). tong_towards_2011 Z. Tong, C. Lundström, P. A. Andrekson, C. J. McKinstrie, M. Karlsson, D. J. Blessing, E. Tipsuwannakul, B. J. Puttnam, H. Toda, and L. Grüner-Nielsen, Towards ultrasensitive optical links enabled by low-noise phase-sensitive amplifiers, Nature Photonics 5, 430–436 (2011). Zhang2008 J. Zhang, C. Ye, F. Gao, and M. Xiao, Phase-sensitive manipulations of a squeezed vacuum field in an optical parametric amplifier inside an optical cavity, Phys. Rev. Lett. 101, 233602 (2008). Ye2021 Z. Ye, P. Zhao, K. Twayana, M. Karlsson, V. Torres-Company, and P. A. Andrekson, Overcoming the quantum limit of optical amplification in monolithic waveguides, Science Advances 7, eabi8150 (2021). taheri_optical_2017 H. Taheri, A. B. Matsko, and L. Maleki, Optical lattice trap for Kerr solitons, Eur. Phys. J. D 71, 153 (2017). bondila_topography_1995 M. Bondila, I. V. Barashenkov, and M. M. Bogdan, Topography of attractors of the parametrically driven nonlinear Schrödinger equation, Physica D: Nonlinear Phenomena 87, 314–320 (1995). carmon_dynamical_2004 T. Carmon, L. Yang, and K. J. Vahala, Dynamical thermal behavior and thermal self-stability of microcavities, Optics Express 12, 4742–4750 (2004). moille_integrated_2022 G. Moille, D. Westly, E. F. Perez, M. Metzler, G. Simelgor, and K. Srinivasan, Integrated buried heaters for efficient spectral control of air-clad microresonator frequency combs, APL Photonics 7, 126104 (2022). haelterman_dissipative_1992 M. Haelterman, S. Trillo, and S. Wabnitz, Dissipative modulation instability in a nonlinear dispersive ring cavity, Optics Communications 91, 401–407 (1992). § SUPPLEMENTARY NOTE 1: DERIVATION OF THE IKEDA MAP We present here a heuristic derivation of the Ikeda-like map used in our numerical simulations [Eq. (5) of the main manuscript]. Including the rapid temporal oscillations at ω_0 = (ω_+ + ω_-)/2, where ω_± are the input frequencies, the intracavity electric field during the m^th cavity transit is written as E^(m)(z,τ)exp[-iω_0 T], where τ is time in a co-moving reference frame defined as τ = T-z/v_g with T absolute laboratory time, z the coordinate along the waveguide that forms the resonator, v_g the group velocity of light at ω_0, and E^(m)(z,τ) is the slowly-varying electric field envelope that follows the generalized nonlinear Schrödinger equation [Eq. (2) of the main manuscipt]. The boundary equation for the full electric field can then be written as E^(m+1)(0,τ)e^-iω_0 T = √(1-2α) E^(m)(L,τ)e^-iδ_0-iω_0 T + √(θ_+)E_in,+ e^-iω_+T + √(θ_-)E_in,- e^-iω_-T, where δ_0 = 2π k - β(ω_0) L is the linear phase detuning of the reference frequency ω_0 from the closest cavity resonance (with order k), and ω_± are the frequencies of the pump fields. Multiplying each side with exp[iω_0 T] and replacing T = τ + mL/v_g= τ + mt_R where t_R is the round trip time, yields E^(m+1)(0,τ) = √(1-2α) E^(m)(L,τ)e^-iδ_0 + √(θ_+)E_in,+ e^-iΩ_pτ + i(ω_0-ω_+)mt_R + √(θ_-)E_in,- e^iΩ_pτ + i(ω_0-ω_-)mt_R. Note that, in the above formulation, the co-moving time variable τ should be understood as the “fast time” that describes the envelope of the intracavity electric field over a single round trip, i.e., the distribution of the envelope within the resonator <cit.>. As such, the τ variable spans a single round trip time of the resonator, and the intracavity envelope must obey periodic boundaries within that range. These conditions stipulate that the frequency variable Ω_p = 2π p ×FSR, where p is a positive integer and FSR = t_R^-1 is the free-spectral range of the resonator. The fact that the frequency difference (ω_0-ω_±)/(2π) may not, in general, be an integer multiple of the FSR is captured by the additional phase shifts accumulated by the driving fields with respect to the intracavity field from round trip to round trip. To link the frequency differences ω_0-ω_± to the respective phase detunings, we first recall that the phase detuning of a driving field with frequency ω from a cavity resonance at ω' obeys δ≈ (ω'-ω)t_R. We can thus write ω_0-ω_±≈ω'_0-δ_0/t_R - ω'_± + δ_±/t_R, where the frequency variables with (without) apostrophes refer to resonance (pump) frequencies. We next write the resonance frequencies as ω'_q = ω'_0 + q D_1 + D̂_int(q), where D_1 = 2πFSR with FSR = t^-1_R the free-spectral range of the cavity (at ω'_0), q an integer that represents the mode index (with ω_0 corresponding to q = 0), and the integrated dispersion D_int(q) = ∑_k≥ 2D_k/k!q^k, where D_k are the expansion coefficients. Assuming that the resonance frequencies ω'_± are associated with indices ± p (with p>0), respectively, we can write Eq. (<ref>) as ω_0-ω_±≈ -δ_0/t_R∓ p D_1 - D_int(± p) + δ_±/t_R. The second term on the right-hand-side of Eq. (<ref>) can be ignored, as it yields an integer multiple of 2π when used in Eq. (<ref>). Next, we use the fact <cit.> that the coefficients D_k with k≥ 2 can be linked to the Taylor series expansion coefficients of the propagation constant β(ω) viz. D_k ≈ - D_1^kLβ_k/t_R. This allows us to write the integrated dispersion corresponding to the resonance frequencies ω'_± as D_int(± p) ≈∑_k≥ 2 -D_1^kLβ_k/t_Rk!(± p)^k, = -L/t_R∑_k≥ 2β_k/k!(± D_1 p)^k. Using Eq. (<ref>) in Eq. (<ref>) and substituting the latter into Eq. (<ref>) yields the Ikeda-like map described by Eq. (5) of the main manuscript with coefficients b_± as defined in Eq. (6) of the manuscript, and the pump frequency shift Ω_p = D_1 p = 2π p ×FSR. We also note that the map can be straightforwardly extended to include arbitrarily many driving fields following the procedure above. § SUPLEMENTARY NOTE 2: SIGNAL DETUNING To derive the relationship between the parametric signal detuning δ_0 and the pump detunings δ_± (i.e., Eq. (7) of the main manuscript), we write out the parametric signal frequency as ω_0 = ω_+ + ω_-/2 = ω_+' - Δω_+ + ω_-'- Δω_-/2, where ω_±' are the resonance frequencies closest to the pump frequencies and Δω_± are the angular frequency detuning of the pump frequencies from those resonances. Substituting Δω_± = δ_±/t_R and expanding the pump resonance frequencies as ω_±' = ω_0' ± pD_1 + D̂_int(± p) yields ω_0 = ω_0' - δ_+ + δ_-/2t_R + D̂_int(p) + D̂_int(-p)/2. Then using Eq. (<ref>) and rearranging, we obtain δ_0 = (ω_0'-ω_0)t_R = δ_+ + δ_- + L[D̂_S(Ω_p)+D̂_S(-Ω_p)]/2. This is Eq. (7) of the main manuscript. § SUPLEMENTARY NOTE 3: COMB OFFSET The boundary equation of the Ikeda-like map [Eq. (5) of the main manuscript] shows that the driving fields experience an additional relative phase shift per round trip determined by the coefficient b = δ_+ - δ_- + L[β̂_S(Ω_p)-β̂_S(-Ω_p)]/2. Using δ_± = (ω'_± - ω_±)t_R and Lβ̂_S(±Ω_p)=-t_R D_int(± p), we obtain b = t_R[ ω'_+ - ω'_-/2 - ω_+ - ω_-/2 - D_int(p) -D_int(-p)/2]. By then using D_int(p) - D_int(-p) = ω'_+ - ω'_- - 2pD_1, we obtain b = t_R[pD_1 - ω_+-ω_-/2]. Given that ω_0 = (ω_+ + ω_-)/2, we recognise (ω_+-ω_-)/2 as the angular frequency shift between the pump at ω_+ and the degenerate FWM signal at ω_0. Moreover, because the integer p corresponds to mode number of the driven mode ω'_+ relative to ω'_0, we may write b = t_RΔω, where Δω is the angular frequency difference between the pump at ω_+ and the closest component of the frequency comb (with spacing D_1/(2π)) that forms around ω_0. Defining the ordinary comb offset as Δ f = |Δω/(2π)|, and using the fact that the combs that form around ω_0 and ω_± have the same spacing, we obtain Δ f = |b|/2π t_R = |b|D_1/(2π)^2, where we used D_1 = 2π/t_R. § SUPPLEMENTARY NOTE 4: LUGIATO-LEFEVER EQUATION AND NORMALIZATION Under the assumption that the intracavity envelope E^(m)(z,τ) evolves slowly over a single round trip (i.e., the cavity has a high finesse, and the linear and nonlinear phase shifts are all small), the Ikeda-like map described by Eqs. (2) and (5) of the main manuscript can be averaged into a single mean-field equation. The derivation is well-known <cit.>, proceeding by integrating Eq. (2) using a single step of the forward Euler method to obtain E^(m)(L,τ), which is then substituted into Eq. (5). After linearizing with respect to δ_0 and α and introducing the slow time variable t = mt_R (such that the round trip index m = t/t_R), one obtains: t_R∂ E(t,τ)/∂ t = [-α + i(γ L |E|^2 -δ_0)+ iLβ̂_S(i∂/∂τ)]E + √(θ_+)E_in,+ e^-iΩ_pτ + ib_+t/t_R + √(θ_-)E_in,- e^iΩ_pτ + ib_-t/t_R. To obtain the normalized Eq. (10) of the main manuscript, we first introduce the variable transformations τ→τ√(2α/(|β_2|L)), t→α t/t_R, Ω_p→Ω_p√(|β_2|L/(2α)) and E→ E √(γ L/α), yielding ∂ E(t,τ)/∂ t = [-1 + i(|E|^2 -Δ_0)+ iβ̂(i∂/∂τ)]E + S_+ e^-iΩ_pτ + ia_+t + S_- e^iΩ_pτ + ia_-t, where S_± = E_in,±√(γ L θ_±/α^3), and Δ_0 = δ_0/α. The normalized dispersion operator β̂ is defined as β̂(i∂/∂τ) = ∑_k≥ 2d_k/k!(i∂/∂τ)^k, where the normalized dispersion coefficients are given by d_k = β_k L/α(2α/|β_2| L)^k/2. Finally, the coefficients a_± = Δ_± - Δ_0 + β̂(±Ω_p), where Δ_± = δ_±/α are the normalized detunings of the external driving fields. Note that, for the particular configuration considered in our work, where the signal frequency ω_0 is strictly linked to the pump frequencies ω_± via ω_0 = (ω_+ + ω_-)/2, the coefficients a_± = ± a, where a is defined by Eq. (11) of the main manuscript.
http://arxiv.org/abs/2306.01515v1
20230602130835
Variational formulation of active nematics: theory and simulation
[ "Waleed Mirza", "Alejandro Torres-Sánchez", "Guillermo Vilanova", "Marino Arroyo" ]
physics.bio-ph
[ "physics.bio-ph" ]
Variational formulation of active nematics: theory and simulation]Variational formulation of active nematics: theory and simulation ^1 LaCàN, Universitat Politècnica de Catalunya BarcelonaTech, Jordi Girona 1-3 08034 Barcelona, Spain ^2 Barcelona Graduate School of Mathematics (BGSMath), Campus de Bellaterra, Edifici C 08193 Bellaterra Barcelona, Spain ^3 Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology (BIST), Baldiri Reixac 10-12, 08028 Barcelona Spain ^4 Centre Internacional de Mètodes Numèrics en Enginyeria (CIMNE), 08034 Barcelona, Spain [email protected] [email protected] The structure and dynamics of important biological quasi-two-dimensional systems, ranging from cytoskeletal gels to tissues, are controlled by nematic order, defects and activity. Continuum hydrodynamic descriptions combined with numerical simulations have been used to understand such complex systems, but the physical interpretation of different active nematic models and their applicability to specific systems is often unclear. For instance, most works rely on theories for incompressible liquid crystals but important active 2D nematic systems are compressible due to density variations or turnover. Here, we propose a theoretical and computational framework for possibly compressible and density-dependent 2D active nematic systems. This framework is based on Onsager's variational formalism to irreversible thermodynamics, according to which the dynamics result from a competition between free-energy release, dissipation and activity. We particularize this framework to recover a standard incompressible active nematic model and further formulate an alternative model for density-dependent active nemato-hydrodynamics. We show that the variational principle enables a direct and transparent derivation not only of the governing equations, but also of the finite element numerical scheme. We exercise this model in two representative examples of active nematodynamics relevant to the actin cytoskeleton during wound healing and to the dynamics of confined colonies of elongated cells. [ W Mirza^1,2, A Torres-Sánchez^1,3[Present address: Tissue Biology and Disease Modelling Unit, European Molecular Biology Laboratory, Doctor Aiguader 88, Barcelona (08003), Spain.], G Vilanova^1 and Marino Arroyo^1,3,4 Received: date 2021.09 / Accepted: 2022.03 =================================================================================================================================================================================================================================== § INTRODUCTION Important sub- and supra-cellular biological systems and bioinspired materials can be described as active nematic systems <cit.>. Examples include microtubule-kinesin gels <cit.>, acto-myosin gels <cit.>, dense bacterial suspensions <cit.> or dense colonies of elongated cells <cit.>. Active nematic systems are characterized by local nematic order combined with active power input, which couples to nematic order in that contractile or extensile active stresses orient along the nematic direction. Both nematicity and activity can independently induce the self-organization of heterogeneous structures. For instance, passive nematic systems can develop defects, either half-integer such as comets (+1/2) and trefoils (-1/2) or full-integer (+1) such as spirals or asters. Without nematic order, activity can also induce self-organized patterns, such as those resulting from self-reinforcing flows driving density accumulation <cit.>. Acting together, nematicity and activity drive a plethora of dynamical behaviors, where defects generate active flows, and flows nucleate defects, which become motile, leading to active turbulence at high-enough activity <cit.>. Moreover, the interplay between nematic organization, hydrodynamic flows and density accumulation plays a key role in the self-organization of diverse architectures such as dense nematic bundles <cit.>, asters <cit.>, or tactoids <cit.> in the cell cytoskeleton or in actin-based reconstituted systems, as further discussed in <cit.>. The self-organization of active nematic systems has been addressed using both discrete and continuous descriptions. In discrete models <cit.>, active nematic systems are resolved at the scale of individual active units, which are usually modeled as self-propelled Brownian particles. The emergent behavior of the system depends on an interplay of fluctuations, active forces generated by individual units, and alignment and repulsion interactions. If truly describing individual microscopic units, discrete models can rarely access phenomena occurring at large length-scales and over long time-scales such as cell wound healing <cit.>, division <cit.>, motility <cit.> or the spontaneous flow of cells in a colony <cit.>. Alternatively, continuum models in combination with numerical discretization can access the pertinent mesoscales in time (minutes) and space (tens of microns) using fewer degrees of freedom and are further amenable to mathematical analysis. A number of active nematic continuum models have been proposed <cit.>. These models are often based on phenomenological constitutive relations consistent with the framework of irreversible thermodynamics <cit.>, but have also been derived by coarse-graining of microscopic dynamics <cit.>. Active nematic continuum models have been numerically approximated with the Lattice Boltzmann algorithm <cit.>, the Hybrid Boltzmann algorithm <cit.>, and finite element methods <cit.>. Despite previous approaches to explore the dynamics of active nematic systems, the literature has focused on incompressible systems rather than compressible active nematic systems undergoing changes in density, which can be caused by convergent/divergent flows or molecular/cellular turnover. To study active nematic systems with possibly time-dependent density in their full nonlinearity, here we develop a transparent modeling framework for density-dependent active nemato-hydrodynamics based on Onsager's variational formalism of irreversible thermodynamics. We recover standard incompressible active nematic models but focus on compressible 2D systems possibly undergoing turnover. In the model proposed here, nematic ordering can emerge from crowding of elongated elements, as classically assumed, or can be the result of active self-organization <cit.>. We also develop a variational numerical framework to approximate the solution of the proposed model and use it to explore self-organization of nematic architectures in two biologically relevant situations, namely the active self-organization leading to wound healing in large egg cells <cit.>, and defect dynamics in confined populations of elongated cells <cit.>. This paper is organized as follows. In Section <ref>, we provide background on Onsager's variational formalism for irreversible thermodynamics. In Section <ref>, we introduce the the variables characterizing the state of a compressible active nematic system and those describing its rate of change. In Sec. <ref>, we formulate and derive the governing equations of a generic thermodynamically consistent active nematic system following Onsager's variational formalism. In Section <ref>, we particularize the framework to recover the standard equations of an incompressible active liquid crystal. In Sec. <ref>, we develop a compressible active nematic model. In Sec. <ref>, we derive a numerical algorithm to approximate the theoretical model using Onsager's variational formalism. In Sec. <ref>, we apply our model to biologically relevant situations, namely the assembly of the contractile ring leading to wound healing in the actin cytoskeleton and the defect dynamics in a confined colony of contractile cells. Finally, in Sec. <ref>, we summarize our contribution and the outlook of our work. § ONSAGER'S VARIATIONAL FORMALISM IN SOFT AND ACTIVE MATTER §.§ Variational approaches to irreversible thermodynamics The theoretical framework of irreversible thermodynamics <cit.> provides a foundation for continuum models of soft and active matter such as active gels <cit.>. This procedure posits balance laws, identifies power-conjugate pairs of thermodynamic fluxes and generalized forces from the statement of energy conservation, and constrains the form of linear constitutive relations between such fluxes and forces by an entropy production argument, by Onsager's reciprocal relations <cit.>, and by the Curie symmetry principle. The clear thermodynamic foundation of this framework enables a precise notion of activity. Because it considers systems close to thermodynamic equilibrium, irreversible thermodynamics often focuses on the linear response. However, many contexts require dropping linearity. The framework of irreversible thermodynamics can also accommodate fully nonlinear theories that satisfy thermodynamic consistency, frame indifference and material symmetries. For instance, material theories based on continuum irreversible thermodynamics account for the geometric nonlinearity of finite deformations and for the nonlinear relation between fluxes and generalized forces required to model important irreversible processes such as plasticity <cit.>. Complementary to the conventional approach and with a parallel history, irreversible thermodynamics has been framed in terms of variational principles. Building on Rayleigh's principle of least dissipation of energy <cit.>, Onsager's celebrated papers recognize the equivalence between the reciprocal relations and a variational principle minimizing the sum of a quadratic dissipation function and the rate of change of an entropic free energy <cit.>. The variational route has been justified from a statistical mechanics point of view <cit.>, and applied in different contexts over more than seven decades <cit.>. Variational principles of irreversible thermodynamics naturally generalize to problems with non-quadratic and non-smooth dissipation functions <cit.>. They also reveal mathematical structures underlying the governing equations, such as gradient flows <cit.>, and enable the application of tools from calculus of variations such as relaxation or homogenization <cit.>. §.§ Onsager's variational principle in soft and active matter In the context of soft matter, the variational approach to irreversible thermodynamics is often referred to as Onsager's variational principle and provides a convenient modeling framework for a variety of nonlinear phenomena including phase separation, gel dynamics or viscoelasticity <cit.>. In recent years, it has been used to model the reshaping of biological membranes <cit.>, their interaction with adhesion and curved proteins <cit.>, or non-equilibrium phase separation in chemically-responsive polymer solutions <cit.> to name a few. In isothermal conditions, the variational principle defines a Rayleighian functional where changes in free energy, dissipation and power input compete, and the evolution equations are obtained by minimizing the Rayleighian with respect to generalized rates or velocities. When inertial forces are negligible, stationarity of the Rayleighian with respect to (generalized) velocities is equivalent to the principle of virtual work. Hence, (generalized) force balance is a consequence of the variational principle. We sketch next a minimal abstract statement of the principle. We denote the state variables of the system as X(t), the system free energy as ℱ(X), the process variables describing how the systems changes as V, a dissipation potential as 𝒟(V;X), and a potential for the external/active power input as 𝒫(V;X), a linear operator in V taking the abstract form 𝒫(V;X) = -F(X) V where F(X) are external/active generalized forces. By “(V;X)” we emphasize that the main dependence is on V but that there may be a parametric dependence on X. The state and process variables may include chemical and mechanical fields. We also suppose that the process variables are constrained by 0 = ℂ(X) V. We shall assume that all these potentials satisfy frame indifference and material symmetries, and that 𝒟 is nonnegative, satisfies 𝒟(0,X) = 0, is a differentiable and convex function of V, but need not be quadratic <cit.>. The free energy may be nonlinear and non-convex. In general, the process variable V may not be simply ∂_t X. For instance, in a model dependent on density ρ advected by a flow with velocity field v, the continuity equation relates the rate of change of state variable ∂_t ρ with the process variable v. As noted by <cit.>, V often contains redundant information to describe ∂_tX, which is however required to properly model dissipation. Indeed, in the example above ∂_t ρ is a scalar field but v is a vector field. We formalize the relation between ∂_t X and V through a linear process operator ∂_tX=P(X)V. The rate of change of the free energy follows from the chain rule and Eq. (<ref>) as d/dt[ ℱ(X(t)) ] = Dℱ(X) ∂_t X = Dℱ(X) P(X)V, where Dℱ(X) denotes the derivative of the free energy. We form the Rayleighian as ℛ(V;X) = Dℱ(X) P(X)V + 𝒟(V;X) -F(X) V. Onsager's variational principle then states that the system evolves such that V = argmin_W ℛ(W;X), ℂ(X) W=0. The constrained dynamics can be equivalently characterized as stationary points of the Lagrangian ℒ(V,Λ;X) = Dℱ(X) P(X)V + 𝒟(V;X) -F(X) V + Λ·ℂ(X) V, where Λ are the Lagrange multipliers. Once V is obtained from this variational principle, we can then integrate ∂_tX in time recalling Eq. (<ref>). Let us examine the first-order optimality conditions. The stationarity condition 0 = δ_Λℒ simply leads to 0 = ℂ(X)V. The stationarity condition 0 = δ_V ℒ leads to 0 = D ℱ(X) P(X) + D_V𝒟(V;X) -F(X) + Λ·ℂ(X), where D_V𝒟 denotes the derivative of dissipation with respect to its first argument. This equation establishes a balance between thermodynamic driving forces, dissipative forces, external/active forces and constraint forces. If 𝒟 is smooth, then generalized reciprocal relations are simply the statement of symmetry of second derivatives of 𝒟 with respect to different components of V <cit.>. Multiplying Eq. (<ref>) by the actual V along the dynamics, using the fact that ℂ(X)V=0 and rearranging terms, we obtain Dℱ(X) P(X) V_dℱ/dt = -D_V𝒟(V;X) V + F(X)V, which is a statement of energy balance relating the rate of change of the free energy, the power dissipated in irreversible processes, and the external/active power input. For a quadratic dissipation potential, we have D_V𝒟(V;X)V = 2 𝒟(V;X) and hence the dissipated power is twice the dissipation potential. The second-order optimality condition for V to be a minimum of the Rayleighian is the condition that 𝒟 is a convex function of V, which leads to 𝒟(0;X) ≥𝒟(V;X) - D_V𝒟(V;X)V. When supplemented by the natural conditions 𝒟(0;X) = 0 and 𝒟(V;X)≥ 0, we conclude that D_V𝒟(V;X)V ≥ 0. This equation is an entropy production inequality for irreversible processes. Hence, the existence of a non-negative and convex dissipation potential satisfying 𝒟(0;X) = 0 from which dissipative forces derive is the nonlinear generalization of Onsager's relations and the entropy production inequality <cit.>. In the absence of external/active forces, we conclude from Eqs. (<ref>,<ref>) that dℱ/dt = - D_V𝒟(V;X)V ≤ 0, and hence the free energy ℱ is a Lyapunov function of the dynamics. In summary, Onsager's variational principle is thermodynamically consistent by construction; the first-order optimality condition establishes energy conservation, and the second-order optimality condition guarantees non-negative entropy production. §.§ Onsager's variational principle as a modeling tool Although the traditional approach to irreversible thermodynamics and Onsager's variational formalism are fundamentally equivalent, there are practical differences when used as modeling tools, which we discuss below. Simplicity. Onsager's variational formalism summarizes the essential elements of a theory in terms of a scalar functional and an optimization principle, making the derivation of the governing equations direct and systematic irrespective of the number of mechanical or chemical fields involved and the nonlinearity of the model. Concepts such as stress tensors, chemical potentials and other generalized forces and their balance equations, or the various mechano-chemical couplings, naturally arise from the application of the chain rule to derive the optimality conditions of the minimum principle, but are not required to formulate the theory. Activity. The power input functional can accommodate not only externally applied forces but also active phenomena, and hence Onsager's variational principle provides a natural framework to develop theoretical models of active matter <cit.>. Nonlinearity. In the context of soft and active matter, nonlinearity plays a central role; nonlinear chemical networks control signaling pathways <cit.>, softness leads to large deformations and nonlinear kinematics, e.g. during reshaping and morphogenesis <cit.>, cellular compartments exhibit molecular crowding <cit.>, and nonlinear dissipations are required to model stick-slip frictional behaviors <cit.> or chemical reactions <cit.>. Onsager's variational formalism accommodates naturally nonlinearity without compromising thermodynamic consistency, frame indifference or material symmetries. Curvilinear coordinates. Being formulated in terms of a scalar functional, the treatment of problems described in curvilinear or generalized coordinates, such as active hydrodynamics on curved and time-evolving surfaces, is direct and systematic as long as the Rayleighian functional is formulated covariantly <cit.>. Numerical approximation. Variational principles have long been used to obtain approximate solutions. Regarding space, Onsager's variational principle enables a direct variational derivation of the spatially discretized equations by constrained minimization in a finite-dimensional approximation space, which is often much simpler than an approximation scheme that starts from the strong form of the governing equations. Regarding time, Onsager's principle can be framed in terms of a sequence of incremental minimization problems, leading to numerical time-stepping algorithms that inherit discrete and nonlinear counterparts of thermodynamic consistency and stability <cit.>. § DESCRIPTION OF A 2D ACTIVE NEMATIC GEL We consider a planar thin sheet of a nematic fluid. We describe this system with its areal density field ρ(x,t) and with a nematic tensor field q(x,t) =S(x,t) [n(x,t) ⊗n(x,t) - 1/2I] , where I is the 2D identity tensor, n is a unit vector representing the average nematic alignment, ⊗ is the dyadic or outer product, and S=√(2 q_abq_ab) is the nematic order parameter, which measures the strength of the nematic alignment about n. q_ab denote the components of q in a Cartesian basis and we follow Einstein's summation convention for repeated indices. The limiting cases S=0 and S=1 represent isotropically organized and perfectly aligned cases, respectively. The nematic tensor is symmetric and traceless, i.e. q=q^T and trq=0. The density and nematic fields ρ(x,t) and q(x,t) are the state variables, denoted by X(t) in Section <ref>. To describe how the system changes its state, we consider the hydrodynamic velocity field v. We decompose its gradient into a symmetric and an antisymmetric parts ∇v = d + w, where d_ab = 1/2(∇_b v_a +∇_av_b), w_ab = 1/2(∇_b v_a -∇_av_b). The tensor d characterizes the rate of deformation of a differential of volume of the material and it is usually referred to as the rate-of-deformation tensor. This tensor plays an important role in different aspects of the theory. For instance, trd=∇·v measures local area changes and hence controls the rate of accumulation or dilution of density, and viscous dissipation in the fluid is formulated in terms of d as discussed later. While trd describes the isotropic part of the rate of deformation, the deviatoric part is described by the traceless tensor d^ dev = d - trd/2I. The tensor w describes the local rate of rotation induced by the flow, possibly leading to rotation of the nematic alignment, and it is referred to as the spin tensor; note that in 2D it can be represented with a single scalar w=wϵ, where ϵ is the Levi-Civita tensor. The gradient of the spin is given by ζ = ∇ w . We characterize the rate of change of q with the Jaumann derivative <cit.> q= q̇ + q w - w q, or in components as q_ab = q̇_ab + q_ac w_cb - w_ac q_cb. where q̇_ab = ∂_t q_ab + v_c ∇_c q_ab is the total time derivative of the nematic order tensor. The Jaumann derivative can be defined geometrically in terms of Lie derivatives <cit.>. q measures the rate of change of q viewed by an observer that flows and rotates with v; thus, q is zero if q is advected and rotated by the flow without any further rearrangements of the nematic field. The fields v and q define the process variables V of our system. The process operator relating process variables and time-derivatives of the state variables, denoted by ∂_t X = P(X)V in Section <ref>, is specified by Eq. (<ref>) along with the mass conservation equation for ρ given by ρ̇ + ρ tr d = r , where ρ̇(x,t) = ∂_t ρ(x,t) + v(x,t)·∇ρ(x,t) is the material time-derivative of ρ. The second term characterizes the dilution (or compaction) of ρ caused by the rate of change of local area, and r is the rate of change of density not explained by the flow v, which typically results from chemical reactions and diffusion. Although the diffusive fluxes and reaction rates leading to r can be deduced from an extended Onsager variational formalism <cit.>, here we focus on active nemato-hydrodynamics and hence assume r as given. § GOVERNING EQUATIONS OF A GENERIC ACTIVE GEL FROM ONSAGER'S FORMALISM §.§ Free-energy, dissipation and power input functionals We derive next the governing equations of an active nematic gel at low Reynolds numbers. Let A ⊂ℝ^2 be an open set with smooth boundary ∂ A. The unit outward normal to ∂ A is denoted by N. We assume Dirichlet boundary conditions for v, w and q in subsets of the boundary denoted by ∂_D_v A, ∂_D_w A and ∂_D_q A, respectively. At Neumann boundaries, denoted by ∂_N_t A, ∂_N_Γ A and ∂_N_L A, we prescribe the traction vector, t, the antisymmetric torque tensor, Γ, and the generalized force power-conjugate to q represented by a symmetric traceless tensor, L. Dirichlet and Neumann boundaries are pairwise complementary, e.g. ∂_D_v A ∪∂_N_t A = ∂ A and ∂_D_v A ∩∂_N_t A = ∅. See Fig. <ref> for an illustration. To derive the governing equations, we follow Onsager's variational formalism introduced in Section <ref>. We postulate generic forms for the free-energy, dissipation and power input functionals as ℱ[ρ,q] = ∫_A f(q,∇q) ρ dA, 𝒟[v,q; ρ,q] = ∫_A d(v,d,w,ζ,q;ρ,q) ρ dA, 𝒫[v,q; ρ,q] = ∫_A p(v,d,w,ζ,q;ρ,q) ρ dA   - ∫_∂_N_t At·v dl - ∫_∂_N_Γ AΓ:w dl - ∫_∂_N_L AL:q dl, where f, d and p are the free-energy, dissipation, and power input densities per unit mass. The free energy depends on the state variables ρ and q. The dissipation and power inputs depend on process variables v, q, and might also depend parametrically on the state variables. Examining Eq. (<ref>), it is clear that we could have alternatively chosen V = (v, q̇) or V = (v, ∂_tq) as process variables, leading to different forms of the Euler-Lagrange equations. §.§ Rate of change of a frame-indifferent free energy functional To write down the Rayleighian, we compute the rate of change of Eq. (<ref>) applying Reynolds transport theorem as dℱ/dt = ∫_A [∂_tf ρ + f ∂_tρ] dA + ∫_∂ A fρv·N dl = ∫_A [∂_tf ρ + f ∂_tρ + ∇·(fρv) ] dA =∫_A [ḟρ + f ρ̇ + fρ trd] dA. Using the chain rule, the material time derivative of f can be written as ḟ = ∂ f/∂ q_abq̇_ab+ ∂ f/∂∇_c q_abD/Dt(∇_c q_ab), where we have introduced the more explicit notation D/Dt for the material time-derivative when required for clarity. To further elaborate on this expression, we note that, unlike partial time and space differentiation, the material time-derivative and ∇ do not commute. Indeed, the material time derivative of ∇q is given by D/Dt(∇_c q_ab) = ∂_t ∇_c q_ab + v_d ∇_d ∇_c q_ab, whereas the gradient of the material time derivative of q is ∇_c q̇_ab = ∇_c (D/Dtq_ab) = ∇_c ∂_t q_ab + ∇_c v_d ∇_d q_ab + v_d ∇_c ∇_d q_ab = D/Dt(∇_c q_ab) + ∇_c v_d q_ab = D/Dt(∇_c q_ab) + d_dc∇_d q_ab + w_dc∇_d q_ab. To express ḟ in terms of our process variable q, we compute its gradient Eq. (<ref>) as ∇_c q_ab = ∇_c q̇_ab + ∇_c q_ad w_db + q_ad∇_c w_db + ∇_c q_db w_da + q_db∇_c w_da. Combining Eq. (<ref>) and the definition of ζ in Eq. (<ref>), we rewrite this expression as ∇_c q_ab = D/Dt(∇_c q_ab) + d_dc∇_d q_ab + w_dc∇_d q_ab + w_db∇_c q_ad + w_da∇_c q_db + (q_adϵ_db - ϵ_ad q_db)ζ_c. Using this expression, the material time derivative of f in Eq. (<ref>) takes the form ḟ = ∂ f/∂ q_ab(q_ab- q_ad w_db- q_db w_da) + ∂ f/∂∇_c q_ab[ ∇_c q_ab + 2ϵ_ad q_dbζ_c - d_dc∇_d q_ab - w_dc∇_d q_ab - w_db∇_c q_ad - w_da∇_c q_db], where we have used q_ab = q_ba and ϵ_ab = -ϵ_ba to simplify the term involving ζ_c. The free energy density f should be frame indifferent, and hence its material time derivative should vanish for any rigid body motion characterized by d_ab=0, q_ab=0, and uniform but otherwise arbitrary w_ab, and hence ζ_c = 0. Invoking this principle along with Eq. (<ref>), we find that the identity 0 = ∂ f/∂ q_ab( q_ad w_db+ q_db w_da) + ∂ f/∂∇_c q_ab[ w_dc∇_d q_ab + w_db∇_c q_ad + w_da∇_c q_db], should hold for all antisymmetric tensors w and for all fields q. Combining Eqs. (<ref>) and (<ref>), we finally obtain ḟ = ∂ f/∂ q_abq_ab + ∂ f/∂∇_c q_ab( ∇_c q_ab - d_dc∇_d q_ab + 2ϵ_ad q_dbζ_c). Plugging this expression in Eq. (<ref>), we obtain dℱ/dt = ∫_A[ fρ̇/ρ + f trd +∂ f/∂q : q + ∂ f/∂∇_c q_ab( ∇_c q_ab - d_dc∇_d q_ab + 2ϵ_ad q_dbζ_c) ] ρ dA. A more direct and elegant geometric derivation of this result follows from writing down the free energy in a general curvilinear coordinate system and expressing the Jaumann derivative in terms of Lie derivatives <cit.>. Further particularizing the frame indifference condition in Eq. (<ref>) to uniform nematic fields (∇q = 0) and spin tensors of the form w = ϵ, we obtain the condition 0 = ∂ f/∂ q_ac q_cb - ∂ f/∂ q_bc q_ca. Similarly, considering a point where q=0 but its gradient is not, we obtain 0 = ∂ f/∂∇_a q_cd∇_b q_cd - ∂ f/∂∇_b q_cd∇_a q_cd + 2(∂ f/∂∇_d q_ac∇_d q_cb - ∂ f/∂∇_d q_bc∇_d q_ca). Equations (<ref>,<ref>) are thus identities that should hold for all q_ab and for all ∇_c q_ab, and that express frame indifference of the free energy. §.§ System Rayleighian For clarity of our derivation, we consider the fields ρ̇, d, w and ζ as independent variables, and enforce the kinematic and conservation relations relating them through Lagrange multipliers. Thus, the Rayleighian has the form ℛ[ρ̇,v,d,w,ζ,q;ρ,q] = dℱ/dt[ρ̇,d,ζ,q; ρ,q] + 𝒟[v,d,w,ζ,q; q,ρ] + 𝒫[v,d,w,ζ,q; q,ρ], and the governing equations can then be obtained according to Onsager's variational principle by minimizing it with respect to the extended process variables (ρ̇, v,d,w,ζ,q) subject to kinematic and mass conservation constraints expressed by the functional 𝒬[ϱ, σ^ s, σ^ a, m, ρ̇,v,d,w,ζ,q;ρ,q] = ∫_A {ϱ[ρ̇ + ρ tr d- r] . +σ^ s :[d - 1/2(∇v + (∇v)^T)] +σ^ a :[w - 1/2(∇v - (∇v)^T)] +. m·[ζ - ∇ w] }dA, where ϱ is the Lagrange multiplier imposing balance of mass; σ^ s, a symmetric tensor, and σ^ a, an antisymmetric tensor, are the Lagrange multipliers imposing the definitions of the rate-of-deformation and spin tensors; and m, a vector, is the Lagrange multiplier imposing the definition of the gradient of the spin. We thus form the Lagrangian as ℒ[ϱ, σ^ s, σ^ a,m,ρ̇,v,d,w,ζ,q; ρ,q] = ℛ[ρ̇,v,d,w,ζ,q;ρ,q] -𝒬[ϱ, σ^ s, σ^ a,m, ρ̇,v,d,w,ζ,q;ρ,q]. §.§ Balance equations and constitutive equations as optimality conditions We examine next the first order optimality conditions. Making ℒ stationary with respect to q and integrating by parts leads to 0 = ∫_A [ (∂ f/∂q + ∂ d/∂q+ ∂ p/∂q) : δq + ∂ f/∂∇_c q_ab∇_c δq_ab] ρ dA - ∫_∂_N_LAL:δq dl = ∫_A [ρ(∂ f/∂q_ab + ∂ d/∂q_ab+ ∂ p/∂q_ab) - ∇_c (ρ∂ f/∂∇_c q_ab) ] δq_ab dA + ∫_∂_N_LA(ρ∂ f/∂∇_c q_ab N_c -L_ab)δq_ab dl, for all admissible traceless and symmetric variations δq_ab that vanish on ∂_D_q A. Localizing this equation, we obtain ρ(∂ d/∂q_ab+ ∂ p/∂q_ab) - h_ab = 0 in A , ρ∂ f/∂∇_c q_ab N_c = L_ab on ∂_N_L A, where we have introduced the functional derivative of the free-energy with respect to the nematic tensor h_ab = -δℱ/δq_ab = -ρ∂ f/∂q_ab + ∇_c (ρ∂ f/∂∇_c q_ab). Equations (<ref>,<ref>) express balance of generalized forces power-conjugate to q. Variations with respect to ρ̇ lead to ϱ = f. Using this result, stationarity of ℒ with respect to d provides a definition for σ^ s: σ^ s_ab = ρ[-1/2(∂ f/∂∇_b q_dc∇_a q_dc+ ∂ f/∂∇_a q_dc∇_b q_dc) + ∂ d/∂ d_ab+ ∂ p/∂ d_ab]. Variations with respect to ζ leads to m_c =ρ(2 ϵ_ad∂ f/∂∇_c q_ab q_db + ∂ d/∂ζ_c + ∂ p/∂ζ_c). Introducing ω = - ρ(∂ d/∂w + ∂ p/∂w), stationarity of ℒ with respect to w leads to 0 =∫_A [-σ^ a :δw + 1/2m_c ϵ_ab∇_c δ w_ab - ω :δw] dA - ∫_∂_N_LAΓ:δw dl =∫_A -[σ^ a + 1/2(∇·m) ϵ + ω] : δw dA + ∫_∂_N_LA[1/2(m·N) ϵ- Γ]:δw dl, for arbitrary antisymmetric variations δw that vanish on ∂_D_w A. Localization leads to σ^ a + 1/2(∇·m)ϵ + ω = 0 in A, 1/2(m·N)ϵ = Γ on ∂_N_Γ A, which is a statement of balance of angular momentum, with m playing the role of the moment in a Cosserat theory <cit.> and ω of body torques. Equation (<ref>) provides a definition for σ^ a. Combining Eqs. (<ref>), (<ref>) and the definition for m in Eq. (<ref>), we find the following boundary condition 1/2ρ∂ (d+p)/∂ζ_c N_c ϵ_ab = Γ_ab - (L_ae q_be - L_be q_ae) on ∂_N_L A ∩∂_N_Γ A, . If d and p are independent of ζ, this equation shows that Γ and L cannot be chosen independently; a generalized force acting on nematic alignment determines the mechanical torque at the boundary. In this case, it is necessary that ∂_N_ΓA=∂_N_L A and ∂_D_q̂A=∂_D_w A. Introducing σ = σ^ s + σ^ a, the condition of stationarity of ℒ with respect to v leads to 0 = ∫_A [σ:∇δv - δv·f]dA - ∫_∂_N_t At·δv dl =∫_A [-∇·σ - f] ·δv dA + ∫_∂_N_t A(σ·N -t)·δv dl. for arbitrary δv that vanish on ∂_D_v A and f=-ρ∂ (d+p)/∂v. Localizing this equation, we find the statement of balance of linear momentum in the absence of inertia ∇·σ + f = 0 in A, σ·N = t on ∂_N_t A. We can hence identify σ as the Cauchy stress tensor, with σ^ s and σ^ a its symmetric and antisymmetric parts, and f as the body forces of dissipative and active/external origin. Finally, variations with respect to ϱ, σ^ s and σ^ a lead to balance of mass (Eq. (<ref>)) and the definitions of the rate-of-deformation and spin tensors, Eqs. (<ref>) and (<ref>). We end this section by providing a more explicit expression for the total Cauchy stress tensor. From Eqs. (<ref>) and (<ref>), we obtain σ_ab^a = ∇_c [ρ( ∂ f/∂∇_c q_bdq_ad - ∂ f/∂∇_c q_adq_bd) ]-1/2∇_c ( ρ∂ ( d+p)/∂ζ_c)ϵ_ab - ω_ab = ρ( ∂ f/∂∇_c q_bd∇_c q_ad - ∂ f/∂∇_c q_ad∇_c q_bd) + q_ad∇_c( ρ∂ f/∂∇_c q_bd) - q_bd∇_c( ρ∂ f/∂∇_c q_ad) -1/2∇_c ( ρ∂ ( d+p)/∂ζ_c)ϵ_ab - ω_ab = - ρ/2( ∂ f/∂∇_b q_cd∇_a q_cd - ∂ f/∂∇_a q_cd∇_b q_cd) + q_ad h_bd - q_bd h_ad -1/2∇_c ( ρ∂ ( d+p)/∂ζ_c)ϵ_ab - ω_ab where in the last step we have invoked frame-indifference of f as expressed by Eqs. (<ref>) and (<ref>) and the definition of the generalized nematic force h in Eq. (<ref>). Adding Eqs. (<ref>) and (<ref>), we can express the total stress as σ_ab = -ρ∂ f/∂∇_b q_dc∇_a q_dc + q_ad h_bd - q_bd h_ad + ρ∂ (d+p)/∂ d_ab + ρ∂ (d+p)/∂ w_ab - 1/2∇_c (ρ∂ ( d+p)/∂ζ_c) ϵ_ab . § STANDARD INCOMPRESSIBLE ACTIVE NEMATIC GEL Let us consider an incompressible gel (trd = ∇_a v_a =0) with initial uniform density and r=0. As a result, density remains uniform and can be ignored by considering free-energy, dissipation and power input densities per unit area. Because of incompressibility, the rate-of-deformation tensor is traceless and d^ dev = d. We define a quadratic dissipation functional by its density d(v,d,q) = η|d^ dev|^2 +η_rot/2|q|^2+ βd^ dev:q + γ/2|v|^2, where η>0 is the shear viscosity, η_rot>0 a viscosity parameter controlling the dissipative resistance to changes in the nematic order parameter relative to a frame that translates and rotates with the fluid, β captures the reciprocal drag between fluid shear and changes in nematic order, and γ>0 is a friction parameter with a substrate. The entropy production inequality requires non-negativity and convexity of the dissipation potential, which is satisfied whenever 2ηη_rot-β^2≥0. See <ref> for a related derivation of this condition. We note that there is no thermodynamic restriction on the sign of β. Physically, the natural notion that filaments align along the direction of stretching is achieved by β<0 <cit.>. We define power input by p(d,q;q) =λq:d^ dev - λ_q : q, which accounts for an anisotropic active tension along the nematic tensor with activity parameter λ and a generalized active force driving further alignment with activity parameter λ_. The constraint integral must include an additional term accounting for incompressibility of the form 𝒬 = … + ∫_A P ( trd) dA, where P is the 2D pressure (with units of surface tension) acting as a Lagrange multiplier. Particularizing the equations in the previous section, we can write balance of generalized force conjugate to changes in nematic order, i.e. Eq. (<ref>), as η_rotq = h - βd^ dev + λ_q, where here density is not present in the definition of h, h_ab = -δℱ/δq_ab = -∂ f/∂q_ab + ∇_c ∂ f/∂∇_c q_ab. Balance of linear momentum becomes γv = ∇·σ, where the stress tensor now accounts for the pressure resulting from Eq. (<ref>) and takes the form σ_ab = -∂ f/∂∇_b q_dc∇_a q_dc + q_ac h_cb - q_bc h_ca + 2ηd^ dev_ab + βq_ab + λq_ab - P δ_ab. In the liquid crystal literature, the dissipative coupling between strain rate and changes in nematic order appear in the stress tensor as a negative constant times h <cit.>, rather than as our term βq. To recover this expression, we insert the expression for generalized force balance, Eq. (<ref>), into Eq. (<ref>), to obtain σ_ab = -∂ f/∂∇_b q_dc∇_a q_dc + q_ac h_cb - q_bc h_ca + 2ηd^ dev_ab + β h_ab + λq_ab - P δ_ab, with effective shear viscosity η = η - β^2/(2η_rot), which is positive according to Eq. (<ref>), effective dissipative coupling coefficient β= β/η_rot and effective activity parameter λ= λ + λ_β/η_rot. With this manipulation, the balance and constitutive equations embodied in Eqs. (<ref>), (<ref>) and (<ref>) agree term by term with those in <cit.>. We have thus shown that the natural expression of the stress tensor according to our theory, Eq. (<ref>), is equivalent to the the more conventional one for an incompressible model, Eq. (<ref>). From a physical point of view, the interpretation of β h_ab in Eq. (<ref>) as a dissipative term is somewhat indirect because h_ab depends only on the state of the system, and not on its rate-of-change. Instead, the interpretation of the dissipative stress induced by changes of nematic order relative to the fluid motion, βq, is straightforward. Furthermore, Eq. (<ref>) requires a reinterpretation of material parameters, whose physical interpretation is unambiguous in our variational approach. § MODEL FOR A COMPRESSIBLE ACTIVE NEMATIC GEL Having derived the generic equations for a compressible active nematic gel in Section <ref>, here we make specific choices for free-energy, dissipation and power input to derive the governing equations for a density-dependent active nematic gel. For the free-energy density, we assume a Landau expansion f(q,∇q) = 1/2a S^2 + 1/8b S^4 + 1/2 L |∇q|^2, where L>0 is the Frank constant penalizing gradients of orientation and a and b>0 are susceptibility parameters. For a>0, the susceptibility parameters penalize deviations from the isotropic state given by S=0. For a<0, the susceptibility parameters penalize deviations from anisotropic states with S= √(-2a/b). For the dissipation potential, we adapt the form of d in Eq. (<ref>) to a 2D compressible thin layer. To motivate our functional form, we assume that this thin layer of gel is 3D incompressible with uniform and constant volumetric density ρ^3D and thickness h; hence the areal density is ρ = ρ^3D h. The 3D rate of deformation tensor D is block-diagonal with blocks d and the out-of-plane component D_33. Since trD=0, it follows that D_33 = - trd and hence |D|^2 = |d|^2 + (trd)^2 <cit.>. As a result, the dissipation potential in the gel can be expressed as ∫_V η^3D |D|^2 dV = ∫_A η^3D |D|^2 h dA= ∫_A (η^3D/ρ^3D) |D|^2 ρ dA = ∫_A η(|d|^2 + (trd)^2)ρ dA with η =η^3D/ρ^3D. Hence, we consider the dissipation density d(v,d,q) = η(|d|^2 + (trd)^2) +η_rot/2|q|^2+ βd^ dev:q + γ/2|v|^2. Clearly, Eq. (<ref>) implies that 𝒟[0,0]=0. The condition 2ηη_rot - β^2 ≥ 0, further guarantees non-negativity and convexity of the dissipation potential, <ref>. Hence, Eq. (<ref>) ensures non-negative entropy production. We consider the following power input density generated by out-of-equilibrium microscopic processes p(d,q;q) = λd + λ_ anisoq:d - (λ_' + ρλ_) q : q = λ(I + κq) : d - ρλ_q : q, where the first term in the first line is the power of an isotropic active tension, which now makes sense because of compressibility, the second term is the power of an anisotropic active tension along the nematic tensor, and the third term is the power of an active generalized force conjugate to changes in nematic order. In contrast to the previous section, here we expand the corresponding activity parameter up to linear order in density. The constant term λ_' has a formally equivalent effect in the governing equations as the first term in Eq. (<ref>), and hence can be subsumed in susceptibility parameter a. For this reason we consider λ_'= 0 in the second line, where we group active tensions in a single term by defining the tension anisotropy parameter κ=λ_ aniso/λ. For λ_>0, nematic activity tends to further increase alignment. With the free-energy, dissipation and power-input functions in Eqs. (<ref>,<ref>,<ref>), Onsager's variational formalism developed in Section <ref> yields the following generalized force balance equation η_rotq + βd^ dev + (2a + b S^2) q - L (Δq + ∇q·∇ρ/ρ) - ρλ_q = 0. This equation shows that 2a-ρλ_ can be interpreted as an effective susceptibility coefficient, which if negative, triggers spontaneous ordering. Balance of linear momentum takes the form ∇·σ = ργv, where the stress tensor is the sum of its symmetric component σ^ s_ab = ρ[2η (d_ab+d_ccδ_ab) + βq_ab + λ(δ_ab + κ q_ab) -L∇_a q_cd∇_b q_cd], and its antisymmetric component σ^ a_ab = L [∇_c ρ( q_ad∇_c q_db - q_bd∇_c q_da) + ρ(q_aeΔ q_be -q_beΔ q_ae) ]. For the boundary conditions, we either consider examples where t, Γ and L are zero, leading to homogeneous Neumann boundary conditions, or examples where v and q are fixed, leading to Dirichlet boundary conditions. For the right-hand side of Eq. (<ref>), we consider a polymerization rate k_p, a depolymerization rate proportional to ρ and given by -k_d ρ, and Fickian diffusion with diffusivity D, ρ̇ + ρ tr d = k_p - k_d ρ + D Δρ. We end this section by briefly discussing how this model can lead to patterns of nematic order. The conventional mechanisms for nematic ordering are driven by the free energy f, e.g. as a result of the excluded volume effects in the mixing entropy of elongated particles <cit.>. The model presented here can account for this mechanism by considering a<0, as in the numerical study in Section <ref>, possibly in a density-dependent manner. However, as illustrated by the numerical study in Section <ref> and in <cit.>, nematic order can also arise through an active mechanism according to which self-reinforcing convergent flows produce velocity gradients and density accumulation that increase order as a result of terms βd^ dev and - ρλ_q in Eq. <ref>. § FINITE ELEMENT FORMULATION The governing Eqs. (<ref>-<ref>) of the proposed model are non-linear and involve tight couplings between nematic, velocity and density fields. For this reason, a solution of the governing equations in arbitrary geometries and boundary conditions cannot be obtained by analytical means. Here, we develop a finite element computational approach building on Onsager's variational formalism. In this setting, numerical space discretization is straightforward and follows from performing extremization of the Lagrangian in a constrained functional space given by the finite element approximation of the process variable fields. The resulting stationarity conditions represent the discretized weak form of the governing equations. For time discretization, we resort to the implicit Euler method. §.§ Weak form of the governing equations In Eqs. (<ref>) and (<ref>), we have expressed the Rayleghian and the Lagrangian in terms of the independent variables ρ̇,q,v,d,w and ζ to derive the governing equations in the most physically meaningful form. However, to derive an Eulerian finite element method, it is more convenient to consider ∂_tq and v as the sole process variables. From the first expression in Eq. (<ref>), applying the chain rule to compute ∂_t f and using Eq. (<ref>), we can express the rate of change of the free energy as dℱ/dt[∂_tq,v;ρ,q] = ∫_A {ρ∂ f/∂q : ∂_t q + ρ∂ f/∂∇_c q_ab∇_c ∂_t q_ab + f [r-∇·(ρv)] }dA + ∫_∂_NA fρv·N dl. For the dissipation and power potentials, we directly substitute the definitions of w and ζ as a function of gradients of v and use Eq. (<ref>) to write q in terms of ∂_t q and v, formally leading to functionals of the form 𝒟[∂_tq,v;ρ,q] and 𝒫[∂_tq,v;ρ,q]. Combining these functionals, the Rayleighian can be expressed as ℛ[∂_tq,v;ρ,q] = dℱ/dt[∂_tq,v;ρ,q]+ 𝒟[∂_tq,v;ρ,q]+ 𝒫[∂_tq,v;ρ,q]. Onsager's variational principle then provides an alternative form of the governing equations by minimizing this Rayleighian with respect to ∂_t q and v. Minimization with respect to ∂_tq leads to 0 =δ_∂_t qℛ = ∫_A [ (∂ f/∂q + ∂ (d+p)/∂q):p + ∂ f/∂∇_c q_ab∇_c p_ab]ρ dA - ∫_∂_N_LAL:p dl, where p is an arbitrary variation of ∂_t q, and hence a traceless and symmetric second order tensor. Integration by parts of this weak form to obtain the corresponding Euler-Lagrange equations is not required for the finite element discretization. Minimization with respect to v leads to 0=δ_vℛ=  ∫_A {- f/ρ∇·(ρu) + ∂ ( d+ p)/∂v·u. + [∂ ( d+ p)/∂d + ∂ ( d+ p)/∂w] : ∇u + . ϵ_ab/2∂ ( d+ p)/∂ζ_c∇_c ∇_b u_a + ∂ ( d+ p)/∂q : δ_vq}ρ dA + ∫_∂ A fρN·u dl - ∫_∂_N_t At·u dl - ∫_∂_N_Γ AΓ:∇u dl - ∫_∂_N_L AL:δ_vq dl where u is an arbitrary variation of v. The variation of the Jaumann derivative of the nematic order tensor with respect to velocity is given by δ_vq_ab= u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e. We note that, although not written here, non-homogeneous Dirichlet boundary conditions on ∂_D_w A and ∂_D_q A would need to be explicitly enforced in the formulation presented here, e.g. through Lagrange multipliers, because constraints on q and w depend on ∂_t q and v in a non-trivial manner. The equivalence between Eq. (<ref>) and the form of balance of linear momentum found earlier in Eq. (<ref>) is not obvious even if both of these equations encode the same physics. In <ref>, we explicitly show this equivalence. For balance of mass, we consider the weak form of Eq. (<ref>) by multiplying by an arbitrary test function δρ, integrating, and applying the divergence theorem to the diffusive term to obtain ∫_A {( ∂ρ/∂ t + ∇· (ρv) - k_p +ρ k_d) δρ + D∇ρ·∇δρ} dA - ∫_∂ A D ∇ρ·Nδρ dl = 0. The boundary integral is dealt with by either prescribing the diffusive flux across the boundary or by considering δρ=0 in parts of the boundary where ρ is fixed. §.§ Space discretization We discretize fields in space following a typical finite element approximation based on a mesh with N vertices. Each vertex or node I of the mesh has an associated basis function B_I(x). The choice of the type of basis function depends on the required regularity. The weak form in Eq. (<ref>) contains second-order derivatives of u, and hence should require basis functions with at least square-integrable second order derivatives. However, since in our model ∂ ( d+ p) / ∂ζ = 0, only first order derivatives of u (and v) appear, and hence a conventional finite element discretization with C^0 continuity can be used. The density field is discretized as ρ(x,t) = ∑_I=1^N ρ_I(t) B_I(x), where ρ_I(t) is the I-th nodal coefficient at time t. Analogously, we have v(x,t) = ∑_I=1^Nv_I(t) B_I(x), where the nodal degrees of freedom are vectors. We represent the traceless and symmetric nematic tensor field as q(x,t) = ([ q_1(x,t) q_2(x,t); q_2(x,t) -q_1(x,t) ]), and discretize its components as q_1(x,t) = ∑_I=1^N q_1I (t) B_I(x), q_2(x,t) = ∑_I=1^N q_2I (t) B_I(x). Thus, we have 5 degrees of freedom per node, namely ρ_I, v_1I, v_2I, q_1I and q_2I. To discretize Eq. (<ref>), we express variations p as linear combinations of the following traceless symmetric tensors ([ B_I(x) 0; 0 -B_I(x) ]) ([ 0 B_I(x); B_I(x) 0 ]), for all I. Using the space-discretized nematic tensor and the variations defined above, the discretized form of balance of generalized force conjugate to nematic order in Eq. (<ref>) becomes a set of 2N algebraic equations. For the balance of linear momentum, Eq. (<ref>), we consider variations of velocity u to be linear combinations of [B_I(x) 0]^T and [0 B_I(x)]^T to obtain 2N additional algebraic equations. Finally, for the balance of mass, Eq. (<ref>), we consider δρ to be linear combinations of B_I(x) to obtain N equations. We note that when advection dominates in Eq. (<ref>), such a Galerkin approach, in which δρ are discretized with the same basis functions as ρ, leads to numerical instabilities. In our implementation, we check the condition for stability and stabilize the numerical formulation using the SUPG method if required <cit.>. Hence, we obtain a set of 5N differential-algebraic equations involving ρ_I, v_1I, v_2I, q_1I, q_2I, and the time-derivatives of the density and nematic degrees of freedom. §.§ Time discretization and solution method We discretize these equations in time using a non-uniform grid of time-steps that we denote by the superindex [n]. We denote by ρ^[n]_I, q^[n]_I, and v^[n]_I the nodal coeficients at the n-th time-step, and consider a backward Euler approximation, according to which we evaluate all fields in Eqs. (<ref>-<ref>) at step [n] and approximate time-derivatives as ∂_t ρ_I^[n] = (ρ^[n]_I-ρ^[n-1]_I)/Δ t^[n], ∂_t q_I^[n] = (q^[n]_I-q^[n-1]_I)/Δ t^[n]. This leads to a set of nonlinear algebraic equations that we solve with Newton-Raphson's method. We test our implementation by performing space and time convergence numerical experiments, finding the expected convergence rates, see  <ref>. § COMPUTATIONAL STUDIES OF ACTIVE NEMATODYNAMICS §.§ Cell wound healing The dynamical assembly of a contractile ring is essential to close holes and tears formed in the cortex of the Xenopus egg, as systematically examined with laser ablation experiments <cit.>. Ablation triggers a localized stimulus increasing myosin-II activity at the edge of the wound. A more contractile region in the actomyosin gel produces a gradient in active tension driving long-range cortical flows. The interplay between actin flow and enhanced myosin activity leads to the assembly of a ring made of a dense network of interconnected F-actin bundles, myosin-II and other actin-binding proteins. Inside the contractile ring, the network is aligned parallel to the boundary of the wound. This contractile ring leads to wound closure by the purse-string mechanism <cit.>. Here, we examine the self-organization of the contractile ring using the active nematic gel theory presented in Section <ref> and study conditions leading to wound closure. The model setup is illustrated in Fig. <ref>(a). We make the following assumptions. We ignore the curvature of the cell and consider a planar patch of actin cytoskeleton. The characteristic size of the domain ℓ_0 is much larger than any inherent length-scales of the model such as the hydrodynamic length scale ℓ_s=√(η/γ) or the nematic correlation length scale ℓ_p=√(L/|2a-ρ_0λ_|). This assumption implies that at distances greater than ℓ_s and ℓ_p, the effect of the ablated region on the system is negligible. We represent the ablated region as an ellipse with aspect ratio 1.5 and minor axis r_0. We denote the characteristic wound size with r_w(t) calculated as the minimum distance from the edge of the wound to its centroid. To reflect an enhanced activity around the ablated region, we set the parameters that control active tension to λ(x,t)= λ^0( 1+ δλe^-r(x,t)/w), λ_(x,t)= λ^0_( 1+ δλe^-r(x,t)/w), where r(x,t) is the distance between point x and its closest point projection on the boundary of the wound at time t. In Eq. (<ref>), λ^0 and λ_^0 are the base activity parameters and δλ sets the amplitude of the enhanced activity, which decays with the distance to the wound edge. We consider the width of the over-activity region w to be larger than ℓ_p and smaller than ℓ_s. The initial conditions are those of a uniform, quiescent and isotropic gel in its steady-state, and hence we make sure that the effective susceptibility 2a - ρ_0λ_ is positive. All material parameters are given in Table <ref>. We assume traction-free boundary conditions at the wound edge. To track changes of domain shape during wound closure, we adopt an updated Lagrangian approach; at each time-step, we update the nodes of the mesh following x^[n] = v^[n]Δ t^[n]+ x^[n-1]. To maintain the mesh quality during this Lagrangian flow, after updating the nodal coordinates of the mesh, we perform reparametrization of the mesh while keeping the boundary fixed if the local element distortion exceeds a given threshold. The reparametrization of the domain is performed using a customized version of the vtkvmtkPolyDataSurfaceRemeshing class of the Vascular Modeling Toolkit (VMTK) <cit.>. Once the deformed mesh is reparameterized, density and nematic tensor fields are mapped from the deformed mesh to the reparameterized mesh. To do so, we first perform a closest-point projection (using the function FindClosestPoint of the vtkCellLocator Class from the VTK library <cit.>) to establish a one-to-one mapping between material points in these two meshes. Then, the fields q and ρ are projected in the reparametrized mesh by a least-squares procedure. We first examine the role of κ, characterizing the anisotropy of active tension, as shown in Fig. <ref>(b). To illustrate the process of wound healing according to our model, we first focus on the curve corresponding to κ=1.0 with five snapshots I to V showing wound shape, density, velocity and nematic order, Fig. <ref>(c). At time-point I, we impose a local increase of activity following Eq. (<ref>). At this instant and due to tension in the active gel, the wound edge retracts away from the center increasing the size of the wound. At the same time, a centripetal actin flow driven by the gradient in active tension develops in a region of size commensurate to the hydrodynamic length. This convergent flow locally densifies the gel close to the edge of the wound, which further reinforces contractility and flow towards the edge. The centripetal flow rapidly decreases at the wound edge (it actually changes sign between instants I and II), generating a strong velocity gradient. Due to the flow-alignment effect associated to β<0, this velocity gradient increases nematic order near the edge parallel to the edge, whereas far away nematic order is weak with alignment perpendicular to the wound edge. The localized density and nematic order in the wound edge mobilize the generalized active nematic force -ρλ_q further driving order. In summary, local edge overactivity along with the traction-free boundary condition lead to the self-assembly of a dense contractile bundle with high alignment. Since here κ >0, this ring is highly contractile, creating a Laplace-like force in the curved wound edge, which tends to make it circular and overcomes cortical surface tension to close the wound, Fig. <ref>(III,IV). As the wound closes, the architecture of the contractile ring is stabilized by the interplay of cytoskeletal self-enhancing flows, flow-induced alignment, active alignment, diffusion, and turnover. This leads to a robust process of wound healing. When the size of the wound is very small relative to all other length-scales of the problem, we consider that the wound has closed and hence set the overactivity signal δλ=0. As a consequence, the self-reinforcing flows rapidly decrease and the contractile ring disassembles as cytoskeletal density reduces due to turnover (V). Hence, following a largely self-organized process of wound healing, the cortex recovers homeostasis. See Movie 1 for an illustration. A parametric sweep for different values of κ shows that κ needs to be sufficiently high for wound closure, Fig. <ref>(b). We then examined the effect of λ_ and β on wound healing, Fig. <ref>. A higher value of λ_ self-organizes a contractile ring with a higher nematic order on a shorter time scale. This leads to a quicker inhibition of wound opening and of reversal of boundary motion. Contrasting with this strong effect, the flow-aligning parameter β has a milder and more subtle effect. A higher β promotes the fast formation of the contractile ring, but also aligns radially the network far away, see inset, counteracting the effect of the ring. Close to the threshold between wound closing and opening, changes in β can have a dramatic effect on the dynamics, see blue curves. §.§ Defects in a confined colony of spindle-shaped cells Having examined a situation where nematic order has an active origin linked to self-reinforcing flows, we turn now to a more conventional situation in which ordering is driven by a<0. Elongated spindle-shaped contractile cells such as myoblasts or fibroblasts exhibit long-range nematic order in circular confined dense cultures <cit.> due to their tendency to mutually align parallel with each other <cit.>. Cells confined in these circular domains tend to align parallel or perpendicular to the boundary <cit.>. Because of this boundary alignment and the spontaneous tendency to nematic ordering, the net charge of the topological defects in the colony is +1, as required by the Poincaré-Hopf theorem <cit.>. This is satisfied by the generation of one or more pairs of topological defects with charge ± 1/2. Because nematic order controls the anisotropy of active stresses in the cell monolayer, topological defects actively move and may lead to a variety of out-of-equilibrium behaviors. Depending on the size of the geometrical confinement with respect to the characteristic lengths of the system, self-organized flows and motion of defects are either absent <cit.>, spiral or turbulent-like <cit.>. We examine next the dynamics of a spatially confined dense contractile active nematic system. In this dense cell colony, density variations are arguably small, and hence the density-dependent aspects of our model may not be important. However, an incompressible model for an active liquid crystal may not be pertinent as convergent/divergent flows are possible due to cell extrusion/proliferation. To simplify the model, we place ourselves in the limit of fast turnover rate, in which density is uniform and convergent/divergent flows are allowed. This only leaves us with the coupling between nematic and velocity fields. In order to model the propensity of cells towards mutual alignment, we set the susceptibility parameter such that the initial nematic order is close to S_0=√(-2a/b)=1. All model parameters are detailed in Table <ref>. We impose a boundary condition such that S=1 at the boundary and the director field is aligned either tangentially (n perpendicular to N) or perpendicularly (n = N) to the boundary, corresponding to parallel or homeotropic anchoring, see Fig. <ref>(a). We enforce that velocity normal to the boundary is zero (impermeable boundary) by introducing a penalty term in the Rayleighian given as ∫_∂ A K |v·N|^2 dl, where K is the penalty coefficient, but allow cells to slide tangentially to the boundary. We first explore the behavior of a passive nematic system for different ratios between the nematic correlation length ℓ_p = √(L/(2|a|)) and the radius of the domain ℓ_0. We start with a spatially correlated random initial condition for q. In agreement with previous results, we find that two +1/2 defects initially nucleate near the boundary, and then travel away from the wall into the bulk, reaching a quiescent steady-state <cit.>. During this process, the free-energy decreases, Fig. <ref>(c), and at steady-state velocities and rate of dissipation vanish. For systems with parallel (homeotropic) boundary conditions, the tips of the +1/2 defects point away from (towards) each other, Fig. <ref>(b). The size of the defect cores relative to system size is controlled by ℓ_p/ℓ_0. As this quantity increases, we expect the two defects to interact and possibly combine into a single +1 defect as observed in small-size cell colonies. To examine this, we tracked the distance between the two defects, d, as a function of ℓ_p/ℓ_0, finding that beyond a threshold, d abruptly drops close to zero, with configurations resembling a vortex or an aster depending on boundary conditions, Fig. <ref>(b). We note, however, that in the absence of activity, d stays finite, and hence the two defects do not strictly become a +1 defect, which can be understood in terms of a Coulomb-like repulsion <cit.>. We next explore the spatiotemporal behavior of defects in a contractile (λ_ aniso > 0) active nematic system by examining the velocity and nematic fields as a function activity, measured by the active length scale ℓ_a = √(L/λ_ aniso). We vary this inverse activity parameter at low nematic correlation length scale ℓ_p/ℓ_0 = 0.02 and focus on parallel boundary conditions for the nematic field, see Fig. <ref>. At low activity (large ℓ_a/ℓ_0), we observe that the active contractile stress λ_ anisoq leads to new steady-state with a smaller inter-defect distance, Fig. <ref>(b) and left panel of Movie 2. More importantly, the steady state of the active system exhibits persistent flows in the nose to tail direction around defects <cit.>, which push these defects by advection of nematic order, Eq. (<ref>). The passive nematic distribution is also distorted by the flow-induced alignment term involving β. At a smaller value, ℓ_a/ℓ_0 = 0.15, the two motile +1/2 defects move closer together and towards the wall, with an out-of-equilibrium velocity exhibiting two vortices. After a transient wiggling motion, these to defects stop moving and the system reaches an out-of-equilibrium steady-state, Fig. <ref>c,d and center panel of Movie 2. As we further reduce the ratio ℓ_a/ℓ_0 to 0.1, the defect distance increases and the two defects rotate in a chiral configuration leading to persistent rotation of the entire system, Fig. <ref>c,d and right panel of Movie 2. It has been suggested that the length-scale of such vortex driven by a defect pair is commensurate to ℓ_a <cit.>, and therefore, for lower activity this length-scale is too large for vortices to develop within the domain size. Upon further lowering the active length scale, ℓ_a/ℓ_0 ≤ 0.05, we observe the formation of splay bands, which destabilize into point defects as previously described <cit.>, Fig. <ref>d (right) and Movie 3. In this regime, active stresses overcome the restoring elastic stresses and distort the nematic field to form lines of splay-type disinclination in the bulk and close to the boundary. These disclination lines destabilize and split into pairs of ± 1/2 defects, Fig. <ref>e, which move and annihilate with defects of opposite charge. The persistent generation of disclination lines, their destabilization into point defects, and the motion and annihilation of these defects gives rise to active turbulence. At any given time, the system exhibits more than two defects but the total topological charge is conserved to +1. In summary, in a confined passive nematic system, our simulations recover the canonical spontaneous organization of two +1/2 defects with a steady inter-defect distance decreasing with increasing passive nematic length scale ℓ_p = √(L/(2|a| )). In an active system, our simulations exhibit a diversity of dynamical regimes depending on the active nematic length-scale ℓ_a= √(L/λ_ aniso). Because a gradient in nematic order in the vicinity of defects generates an active flow, defects become motile. At large or intermediate values of ℓ_a/ℓ_0, defects pairs move closer together, break symmetry and wobble closer to the boundary, or develop a persistent spiral motion. At small values of ℓ_a/ℓ_0 (high activity), we observe active turbulence characterized by persistent defect nucleation due to splay-type instabilities, motion and annihilation. § SUMMARY AND OUTLOOK We have proposed a general modeling framework for density-dependent active nematic systems in 2D. This framework relies on Onsager's variational principle for irreversible thermodynamics. We have reviewed the history of this variational approach and discussed how it naturally provides a simple and direct procedure to develop thermodynamically consistent models in fully nonlinear regimes. Focusing on density-dependent active nematic fluids, we have shown that this formalism enables a clear and systematic derivation of otherwise complex governing equations coupling nematic order, gel velocity and density. Neglecting density variations, we have recovered using our variational approach a standard model for incompressible active nematics. We have then particularized the general framework to develop a specific density-dependent active nematic fluid gel. We have developed a numerical finite element method to approximate this model and applied it to two studies of biological relevance. In the first study, we have explored the role of the self-organization of the actin cytoskeleton during wound repair. In our simulations, a slight overactivity around the wound drives a self-reinforcing flow of the actin gel, leading to the self-organization of a nematic bundle that efficiently constricts the wound. In this example, nematic oder arises due to activity, as self-reinforcing flows locally densify and orient nematic order. This mechanism of active pattern formation of dense nematic structures is further studied in <cit.> to understand the physical basis of self-organization of the actin cytoskeleton. In a second numerical study, we explore the self-organization of a dense nematic system in the limit of high turnover, modeling a constrained colony of contractile elongated cells. Here, the ordering mechanisms is crowding. Depending on the magnitude of the activity, the topological defects required by boundary conditions either reach a steady state or exhibit highly dynamical flows as well as active turbulence. If suitably extended to curved and time-evolving surface domains <cit.>, the framework presented here can help elucidate the interaction between nematodynamics and reshaping during morphogenesis from cellular to organism scales <cit.>. § ACKNOWLEDGMENTS The authors acknowledge the support of the European Research Council (CoG-681434) and the Spanish Ministry for Science and Innovation (PID2019-110949GB-I00). WM acknowledges the La Caixa Fellowship and the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie action (GA 713637). MA acknowledges the Generalitat de Catalunya (ICREA Academia prize for excellence in research). IBEC and CIMNE are recipients of a Severo Ochoa Award of Excellence. § CONDITIONS FOR NON-NEGATIVE ENTROPY PRODUCTION We identify here the conditions for non-negative entropy production. It is obvious that 𝒟[0,0]=0. We thus examine when 𝒟 is non-negative and convex. The integrand d can be written as d(v,d,q) = η( d_ab d_ab + (trd)^2 ) + η_rot/2q_abq_ab + β d_ab^ devq_ab + γ/2 v_a v_a ≥ 0 = 2η( d_11^2 + d_12^2 + d_22^2 + d_11d_22) + η_rot( q_1^ 2 + q_2^ 2) + β(d_11q_1 - d_22q_1 + 2d_12q_2) + γ/2(v_1^ 2 + v_2^ 2) ≥ 0 . and hence it is a quadratic form of its arguments that can be expressed as d = z_a M_ab z_b with z = ([ d_11; d_22; d_12; q_1; q_2; v_1; v_2; ]), M = ([ 2η η 0 β/2 0 0 0; η 2η 0 -β/2 0 0 0; 0 0 2η 0 β 0 0; β/2 -β/2 0 η_ rot 0 0 0; 0 0 β 0 η_ rot 0 0; 0 0 0 0 0 γ/2 0; 0 0 0 0 0 0 γ/2 ]). Because z is a linear function of (v,q), a direct argument shows that if the symmetric matrix M is positive semi-definite, then 𝒟[v,q] is a non-negative and convex functional. According to Sylvester's criterion <cit.>, M is positive semi-definite if and only if its leading principal minors are non-negative. As simple calculation shows that, since η>0, η_ rot>0 and γ>0, this condition is met when 2ηη_ rot - β^2 ≥ 0. § EQUIVALENCE BETWEEN DIFFERENT FORMS OF BALANCE OF LINEAR MOMENTUM In this section, establish an equivalence between the governing equations for balance of linear momentum obtained in Sections <ref> and <ref>. Comparing Eqs. (<ref>) and (<ref>), we need to prove that C_1 = ∫_A {- f/ρ∇_d (ρ u_d) + ϵ_ab/2∂ ( d+ p)/∂ζ_c∇_c ∇_b u_a . . + ∂ ( d+ p)/∂q_ab[ u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e] }ρ dA + ∫_∂ A fρ N_c u_c dl - ∫_∂_N_L A L_ab[ u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e]dl -∫_∂_N_Γ AΓ_ab∇_b u_a dl , is equal to C_2 = ∫_A σ̅_ab∇_b u_a dA, where σ̅_ab is the total stress except for the terms involving ∂(d+p)/∂d and ∂(d+p)/∂w, and it is given by σ̅_ab = -ρ∂ f/∂∇_b q_dc∇_a q_dc + q_ad∇_c (ρ∂ f/∂∇_c q_bd) - q_bd∇_c (ρ∂ f/∂∇_c q_ad) - 1/2∇_c (ρ∂ ( d+p)/∂ζ_c) ϵ_ab . Integrating by parts the first term, and accounting for balance of generalized force in Eq. (<ref>) to substitute ∂(d+p)/∂q, we have C_1 = ∫_A { u_d ∇_d f + ϵ_ab/2∂ ( d+ p)/∂ζ_c∇_c ∇_b u_a . . + [1/ρ∇_d (ρ∂ f/∂∇_d q_ab) - ∂ f/∂ q_ab] [ u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e] }ρ dA - ∫_∂_N_L A L_ab[ u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e]dl -∫_∂_N_Γ AΓ_ab∇_b u_a dl. Integrating by parts the first term within square brackets in the second line of this equation, we obtain C_1 = ∫_A { u_d ∇_d f + ϵ_ab/2∂ ( d+ p)/∂ζ_c∇_c ∇_b u_a . - ∂ f/∂∇_d q_ab∇_d[ u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e] . - ∂ f/∂ q_ab[ u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e] }ρ dA - ∫_∂_N_L A( L_ab - ρ∂ f/∂∇_d q_ab N_d )[ u_c∇_cq_ab + 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e]dl -∫_∂_N_Γ AΓ_ab∇_b u_a dl. The boundary integral over ∂_N_L A vanishes because of the boundary condition in Eq. (<ref>). Noting that ∇_d f = ∂ f/∂ q_ab∇_d q_ab + ∂ f/∂∇_c q_ab∇_c ∇_d q_ab, we can cancel three terms in the first three lines of this equation to obtain C_1 = ∫_A {ϵ_ab/2∂ ( d+ p)/∂ζ_c∇_c ∇_b u_a . - ∂ f/∂∇_d q_ab∇_d[ 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e] - ∂ f/∂∇_d q_ab∇_d u_c∇_cq_ab . - ∂ f/∂ q_ab[ 1/2(q_acϵ_cb -ϵ_acq_cb) ϵ_ef∇_fu_e] }ρ dA -∫_∂_N_Γ AΓ_ab∇_b u_a dl. Because of frame indifference of f as expressed by Eq. (<ref>), the term in the third line vanishes. Furthermore, using the symmetry of q, we can simplify the second line as C_1 = ∫_A {ϵ_ab/2∂ ( d+ p)/∂ζ_c∇_c ∇_b u_a . . - ∂ f/∂∇_d q_ab∇_d ( q_ae∇_b u_e - q_ae∇_e u_b) - ∂ f/∂∇_d q_ab∇_d u_c∇_cq_ab}ρ dA -∫_∂_N_Γ AΓ_ab∇_b u_a dl. Then, integration by parts of the term in the first line and the first term in the second line yields C_1 = ∫_A {-1/2∇_c(ρ∂ ( d+ p)/∂ζ_c) ϵ_ab∇_b u_a . . + ∇_d( ρ∂ f/∂∇_d q_ab) ( q_ae∇_b u_e - q_ae∇_e u_b) - ρ∂ f/∂∇_d q_ab∇_d u_c∇_cq_ab} dA -∫_∂_N_Γ A( Γ_ab - 1/2ρ∂ ( d+ p)/∂ζ_c N_c ϵ_ab)∇_b u_a dl -∫_∂_N_Γ Aρ∂ f/∂∇_d q_ab N_d ( q_ae∇_b u_e - q_ae∇_e u_b) dl. Renaming the dummy indices conveniently, we obtain C_1 = ∫_A {-1/2∇_c(ρ∂ ( d+ p)/∂ζ_c) ϵ_ab. . + q_ae∇_d( ρ∂ f/∂∇_d q_eb) - q_be∇_d( ρ∂ f/∂∇_d q_ea) - ρ∂ f/∂∇_b q_ef∇_aq_ef}∇_b u_a dA -∫_∂_N_Γ A{Γ_ab - [1/2ρ∂ ( d+ p)/∂ζ_cϵ_ab - q_ae( ρ∂ f/∂∇_c q_eb) + q_be( ρ∂ f/∂∇_c q_ea) ] N_c }∇_b u_a dl. Recalling the boundary condition in Eq. (<ref>), the boundary term vanishes. Finally, a direct comparison of the bulk term with Eq. (<ref>) shows that C_1 = C_2. § NUMERICAL CONVERGENCE We verify our numerical methods for the spatial discretization by solving the proposed model for various mesh sizes. For this purpose, we consider a passive nematic system with circular confinement, see Fig. <ref>, and ℓ_p/ℓ_0=0.04. We define the normalized error of free-energy at steady state E_x = |F^h -F^*|/|F^*| , where F^h is the free-energy of a finite element solution with mesh-size h and F^* is that of an overkill solution computed with a very fine mesh of n_e = 695,296 triangular elements. We plot the energy error for decreasing mesh sizes, Fig. <ref>(a). The results show the expected optimal convergence (slope of 2 in a log-log scale) for linear elements. Next for the same numerical experiment, we validate the temporal convergence by gradually decreasing the time step Δ t used to reach a fixed time point t|a|/η_ rot=1. We calculate the relative error of the time-discretization as E_t = |F^Δ t -F^t,*|/|F^t,*| , where F^Δ t is the free-energy obtained with time-step Δ t at t|a|/η_ rot=1 and F^t,* is the free-energy with a very small time-step Δ t^* = 10^-5 . As expected, Fig. <ref>(b), E_t converges as ∼Δ t. § PARAMETERS USED FOR THE NUMERICAL STUDIES § MOVIE CAPTIONS Movie 1 Dynamics during wound healing for various degrees of anisotropic activity quantified by κ. The density and nematic fields are represented by colormaps. The direction and length of black arrows indicate direction and magnitude of the velocity field v. The direction and length of red segments indicate the average molecular orientation n and the nematic order S. Movie 2 Effect of activity (quantified by the ratio between the active nematic length-scale ℓ_a and system size ℓ_0) on flows and defect structure and dynamics in a confined active nematic system. The velocity modulus and nematic fields are represented by colormaps. The direction and length of black arrows indicate direction and magnitude of the velocity field v. The direction and length of red segments indicate the average molecular orientation n and the nematic order S. Movie 3 Low Reynolds number active turbulence at high activity (quantified by the ratio between the active nematic length-scale ℓ_a and system size ℓ_0). The velocity modulus and nematic fields are represented by colormaps. The direction and length of black arrows indicate direction and magnitude of the velocity field v. The direction and length of red segments indicate the average molecular orientation n and the nematic order S. § BIBLIOGRAPHY 100 url<#>1#1urlprefixURL marchetti2013 Marchetti M C, Joanny J F, Ramaswamy S, Liverpool T B, Prost J, Rao M and Simha R A 2013 Reviews of modern physics 85 1143 doostmohammadi2018 Doostmohammadi A, Ignés-Mullol J, Yeomans J M and Sagués F 2018 Nature Communications 9 1–13 decamp2015 DeCamp S J, Redner G S, Baskaran A, Hagan M F and Dogic Z 2015 Nature materials 14 1110–1115 lemma2021 Lemma L M, Norton M M, Tayar A M, DeCamp S J, Aghvami S A, Fraden S, Hagan M F and Dogic Z 2021 Physical Review Letters 127(14) 148001 li2017 Li J, Biel T, Lomada P, Yu Q and Kim T 2017 Soft Matter 13 3213–3220 yu2018 Yu Q, Li J, Murrell M P and Kim T 2018 Biophysical Journal 115 2003–2013 ISSN 0006-3495 lehtimaki2021 Lehtimäki J I, Rajakylä E K, Tojkander S and Lappalainen P 2021 eLife 10 e60710 ISSN 2050-084X tee2015 Tee Y H, Shemesh T, Thiagarajan V, Hariadi R F, Anderson K L, Page C, Volkmann N, Hanein D, Sivaramakrishnan S, Kozlov M M et al. 2015 Nature Cell Biology 17 445–457 tojkander2015 Tojkander S, Gateva G, Husain A, Krishnan R and Lappalainen P 2015 eLife 4 e06126 wirshing2017 Wirshing A C and Cram E J 2017 Molecular Biology of the Cell (MBoC) 28 1937–1949 yolland2019 Yolland L, Burki M, Marcotti S, Luchici A, Kenny F N, Davis J R, Serna-Morales E, Müller J, Sixt M, Davidson A et al. 2019 Nature Cell Biology 21 1370–1381 jalal2019 Jalal S, Shi S, Acharya V, Huang R Y J, Viasnoff V, Bershadsky A D and Tee Y H 2019 Journal of Cell Science 132 jcs220780 weirich2017 Weirich K L, Banerjee S, Dasbiswas K, Witten T A, Vaikuntanathan S and Gardel M L 2017 Proceedings of the National Academy of Sciences 114 2131–2136 weirich2019 Weirich K L, Dasbiswas K, Witten T A, Vaikuntanathan S and Gardel M L 2019 Proceedings of the National Academy of Sciences 116 11125–11130 wioland2013 Wioland H, Woodhouse F G, Dunkel J, Kessler J O and Goldstein R E 2013 Physical Review Letters 110(26) 268102 duclos2017 Duclos G, Erlenkämper C, Joanny J F and Silberzan P 2017 Nature Physics 13 58–62 guillamat2020 Guillamat P, Blanch-Mercader C, Pernollet G, Kruse K and Roux A 2022 Nature Materials 21 588–597 doxzen2013 Doxzen K, Vedula S R K, Leong M C, Hirata H, Gov N S, Kabla A J, Ladoux B and Lim C T 2013 Integrative Biology 5 1026–1035 ISSN 1757-9708 callan2013 Callan-Jones A and Voituriez R 2013 New Journal of Physics 15 025022 Ruprecht:2015aa Ruprecht V, Wieser S, Callan-Jones A, Smutny M, Morita H, Sako K, Barone V, Ritsch-Marte M, Sixt M, Voituriez R and Heisenberg C P 2015 Cell 160 673–685 hannezo2015 Hannezo E, Dong B, Recho P, Joanny J F and Hayashi S 2015 Proceedings of the National Academy of Sciences 112 8620–8625 opathalage2019 Opathalage A, Norton M M, Juniper M P N, Langeslay B, Aghvami S A, Fraden S and Dogic Z 2019 Proceedings of the National Academy of Sciences 116 4788–4797 gao2017 Gao T, Betterton M D, Jhang A S and Shelley M J 2017 Physical Review Fluids 2(9) 093302 xia2019 Xia S, Lim Y B, Zhang Z, Wang Y, Zhang S, Lim C T, Yim E K and Kanchanawong P 2019 Cell Reports 28 1251–1267 mirza2022 Mirza W, Corato M D, Pensalfini M, Vilanova G, Torres-Sánchez A and Arroyo M 2022 TBA bechinger2016 Bechinger C, Di Leonardo R, Löwen H, Reichhardt C, Volpe G and Volpe G 2016 Reviews of Modern Physics 88 045006 patelli2019 Patelli A, Djafer-Cherif I, Aranson I S, Bertin E and Chaté H 2019 Physical review letters 123 258001 alaimo2017 Alaimo F, Köhler C and Voigt A 2017 Scientific reports 7 1–9 ehrig2017 Ehrig S, Ferracci J, Weinkamer R and Dunlop J W C 2017 Physical Review E 95(6) 062609 keber2014 Keber F C, Loiseau E, Sanchez T, DeCamp S J, Giomi L, Bowick M J, Marchetti M C, Dogic Z and Bausch A R 2014 Science 345 1135–1139 khoromskaia2017 Khoromskaia D and Alexander G P 2017 New Journal of Physics 19 103043 ellis2018 Ellis P W, Pearce D J, Chang Y W, Goldsztein G, Giomi L and Fernandez-Nieves A 2018 Nature Physics 14 85–90 mandato2001 Mandato C A and Bement W M 2001 The Journal of cell biology 154 785–798 anne2016 Reymann A C, Staniscia F, Erzberger A, Salbreux G and Grill S W 2016 eLife 5 e17807 ISSN 2050-084X vcopar2019  ČČopar S, Aplinc J, Kos i c v,  ŽŽumer S and Ravnik M 2019 Physical Review X 9(3) 031051 zhang2020 Zhang Y H, Deserno M, Tu Z C et al. 2020 Physical Review E 102 012607 napoli2020 Napoli G and Turzi S 2020 Physical Review E 101 022701 pearce2020 Pearce D J G 2020 New Journal of Physics 22 063051 nestler2018 Nestler M, Nitschke I, Praetorius S and Voigt A 2018 Journal of Nonlinear Science 28 147–191 hemingway2016 Hemingway E J, Mishra P, Marchetti M C and Fielding S M 2016 Soft Matter 12(38) 7943–7952 julicher2018 Jülicher F, Grill S W and Salbreux G 2018 Reports on Progress in Physics 81 076601 metselaar2019 Metselaar L, Yeomans J M and Doostmohammadi A 2019 Physical Review Letters 123(20) 208001 simha2002 Simha R A and Ramaswamy S 2002 Physical Review Letters 89 058101 hatwalne2004 Hatwalne Y, Ramaswamy S, Rao M and Simha R A 2004 Physical Review Letters 92(11) 118101 salbreux2022 Salbreux G, Jülicher F, Prost J and Callan-Jones A 2022 Physical Review Research 4(3) 033158 baskaran2008 Baskaran A and Marchetti M C 2008 Physical Review E 77(1) 011920 bertin2013 Bertin E, Chaté H, Ginelli F, Mishra S, Peshkov A and Ramaswamy S 2013 New Journal of Physics 15 085032 peshkov2012 Peshkov A, Aranson I S, Bertin E, Chaté H and Ginelli F 2012 Physical Review Lett. 109(26) 268701 marenduzzo2007 Marenduzzo D, Orlandini E, Cates M E and Yeomans J M 2007 Physical Review E 76(3) 031921 cates2009 Cates M E, Henrich O, Marenduzzo D and Stratford K 2009 Soft Matter 5(20) 3791–3800 desplat2001 Desplat J C, Pagonabarraga I and Bladon P 2001 Computer Physics Communications 134 273–290 ISSN 0010-4655 goudiaby2021 Goudiaby M S, Diagne A and Tine L M 2021 Communications on Pure and Applied Analysis 20 3499–3514 ISSN 1534-0392 becker2008 Becker R, Feng X and Prohl A 2008 SIAM Journal on Numerical Analysis 46 1704–1731 norton2018 Norton M M, Baskaran A, Opathalage A, Langeslay B, Fraden S, Baskaran A and Hagan M F 2018 Physical Review E 97(1) 012702 benink2000 Benink H A, Mandato C A and Bement W M 2000 Molecular Biology of the Cell 11 2553–2563 pMID: 10930453 duclos2014 Duclos G, Garcia S, Yevick H G and Silberzan P 2014 Soft Matter 10(14) 2346–2353 de2013non De Groot S and Mazur P 2013 Non-Equilibrium Thermodynamics Dover Books on Physics (Dover Publications) ISBN 9780486153506 kondepudi2014modern Kondepudi D and Prigogine I 2014 Modern Thermodynamics: From Heat Engines to Dissipative Structures CourseSmart Series (Wiley) ISBN 9781118371817 Prost:2015aa Prost J, Jülicher F and Joanny J F 2015 Nature Physics 11 111–117 Onsager1931 Onsager L 1931 Physical Review 37(4) 405–426 PhysRev.38.2265 Onsager L 1931 Physical Review 38(12) 2265–2279 Coleman:1963aa Coleman B D and Noll W 1963 Archive for Rational Mechanics and Analysis 13 167–178 Coleman_Gurtin Coleman B D and Gurtin M E 2004 The Journal of Chemical Physics 47 597–613 ISSN 0021-9606 ZIEGLER1987183 Ziegler H and Wehrli C 1987 The derivation of constitutive relations from the free energy and the dissipation function (Advances in Applied Mechanics vol 25) ed Wu T Y and Hutchinson J W (Elsevier) pp 183–238 Maugin Maugin G A 1999 The Thermomechanics of Nonlinear Irreversible Behaviors (WORLD SCIENTIFIC) rayleigh1873 Rayleigh L 1873 Proc. Math Soc. London 363 PhysRev.91.1505 Onsager L and Machlup S 1953 Physical Review 91(6) 1505–1512 peletier2014variational Peletier M A 2014 Variational modelling: Energies, gradient flows, and large deviations D0SM02076A Wang H, Qian T and Xu X 2021 Soft Matter 17(13) 3634–3653 Gyarmati Gyarmati I 1970 Non-equilibrium Thermodynamics: Field Theory and Variational Principles (Springer Berlin, Heidelberg) BIOT19841 Biot M 1984 New variational-lagrangian irreversible thermodynamics with application to viscous flow, reaction–diffusion, and solid mechanics (Advances in Applied Mechanics vol 24) ed Hutchinson J W and Wu T Y (Elsevier) pp 1–91 PhysRev.97.1463 Biot M A 1955 Physical Review 97(6) 1463–1469 EDELEN1972481 Edelen D G 1972 International Journal of Engineering Science 10 481–490 ISSN 0020-7225 ORTIZ1999397 Ortiz M and Repetto E 1999 Journal of the Mechanics and Physics of Solids 47 397–462 ISSN 0022-5096 Mielke:2003aa Mielke A 2003 Continuum Mechanics and Thermodynamics 15 351–382 mielke2016generalization Mielke A, Renger D R M and Peletier M A 2016 Journal of Non-Equilibrium Thermodynamics 41 141–149 MIEHE20022123 Miehe C, Schotte J and Lambrecht M 2002 Journal of the Mechanics and Physics of Solids 50 2123–2167 ISSN 0022-5096 doi2011 Doi M 2011 Journal of Physics: Condensed Matter 23 284118 arroyo2009 Arroyo M and DeSimone A 2009 Physical Review E 79(3) 031915 arroyo2018 Arroyo M, Walani N, Torres-Sánchez A and Kaurin D 2018 Onsager's Variational Principle in Soft Matter: Introduction and Application to the Dynamics of Adsorption of Proteins onto Fluid Membranes (Cham: Springer International Publishing) pp 287–332 ISBN 978-3-319-56348-0 kaurin-bal Kaurin D, Bal P K and Arroyo M 2022 Journal of The Royal Society Interface 19 20220183 Tozzi_2019 Tozzi C, Walani N and Arroyo M 2019 New Journal of Physics 21 093004 10.1122/8.0000475 De Corato M and Arroyo M 2022 Journal of Rheology 66 813–835 ISSN 0148-6055 Otto2001 Otto F 2001 Communications in Partial Differential Equations 26 101–174 torres2019 Torres-Sánchez A, Millán D and Arroyo M 2019 Journal of Fluid Mechanics 872 218–271 Noselli:2019aa Noselli G, Beran A, Arroyo M and DeSimone A 2019 Nature Physics 15 496–502 turlier2014 Turlier H, Audoly B, Prost J and Joanny J F 2014 Biophysical journal 106 114–123 PhysRevLett.127.110601 Goychuk I and Pöschel T 2021 Physical Review Letters 127(11) 110601 sens-PNAS Sens P 2020 Proceedings of the National Academy of Sciences 117 24670–24678 de1993 de Gennes P and Prost J 1993 The Physics of Liquid Crystals International Series of Monogr (Clarendon Press) ISBN 9780198517856 marsden1994 Marsden J E and Hughes T J 1994 Mathematical foundations of elasticity (Courier Corporation) Alexander Mielke A 2013 Thermomechanical modeling of energy-reaction-diffusion systems, including bulk-interface interactions mirza2023 Mirza W, Vilanova G, Sánchez A T and Arroyo M 2023 TBA waleed_thesis Mirza W A 2023 A theoretical and computational study of the active self-organization of nematic patterns in thin cytoskeletal layers and their effect on curvature Ph.D. thesis Universitat Politècnica de Catalunya cosserat1896theorie Altenbach J, Altenbach H and Eremeyev V A 2010 Archive of Applied Mechanics 80 73–92 salbreux2009 Salbreux G, Prost J and Joanny J F 2009 Physical Review Letters 103 058102 Onsager_shape Onsager L 1949 Annals of the New York Academy of Sciences 51 627–659 D0SM01733G Tozzi C, Walani N, Le Roux A L, Roca-Cusachs P and Arroyo M 2021 Soft Matter 17(12) 3367–3379 donea2003 Donea J and Huerta A 2003 Finite element methods for flow problems (John Wiley & Sons) Alice2008 Rodriguez‐Diaz A, Toyama Y, Abravanel D L, Wiemann J M, Wells A R, Tulu U S, Edwards G S and Kiehart D P 2008 HFSP Journal 2 220–237 pMID: 19404432 antiga2008 Antiga L, Piccinelli M, Botti L, Ene-Iordache B, Remuzzi A and Steinman D A 2008 Medical & biological engineering & computing 46 1097–1112 schroeder2003 Schroeder W, Martin K and Lorensen B 2003 The visualization toolkit, 3rd edn. kitware elsdale1968 Elsdale T 1968 Experimental Cell Research 51 439–450 ISSN 0014-4827 jubin2009 Jubin B 2009 A generalized poincaré-hopf index theorem arXiv hardouin2019 Hardoüin J, Hughes R, Doostmohammadi A, Laurent J, Lopez-Leon T, Yeomans J M, Ignés-Mullol J and Sagués F 2019 Communications Physics 2 1–9 giomi2014 Giomi L, Bowick M J, Mishra P, Sknepnek R and Cristina Marchetti M 2014 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 372 20130365 thijssen2020 Thijssen K and Doostmohammadi A 2020 Physical Review Research 2(4) 042008 vafa2020 Vafa F 2022 Soft Matter 18(42) 8087–8097 ronning2022 Rønning J, Marchetti C M, Bowick M J and Angheluta L 2022 Proceedings of the Royal Society A 478 20210879 chandrakar2020 Chandrakar P, Varghese M, Aghvami S, Baskaran A, Dogic Z and Duclos G 2020 Physical Review Letters 125(25) 257801 ramaswamy2007 Ramaswamy S and Rao M 2007 New Journal of Physics 9 423–423 ramaswamy2010 Ramaswamy S 2010 Annual Review of Condensed Matter Physics 1 323–345 mayer2010 Mayer M, Depken M, Bois J S, Jülicher F and Grill S W 2010 Nature 467 617–621 maroudas2021 Maroudas-Sacks Y, Garion L, Shani-Zerbib L, Livshits A, Braun E and Keren K 2021 Nature Physics 17 251–259 doi:10.1080/00029890.1991.11995702 Gilbert G T 1991 The American Mathematical Monthly 98 44–46
http://arxiv.org/abs/2306.11298v1
20230620052415
On the $Δ_a$ invariants in non-perturbative complex Chern-Simons theory
[ "Shimal Harichurn" ]
math.GT
[ "math.GT", "hep-th", "math-ph", "math.MP", "math.QA", "57M27" ]
Analysis of a Skyrme energy density functional with deep learning K. Yoshida July 31, 2023 ================================================================= Recently a set of q-series invariants, labelled by Spin^c structures, for weakly negative definite plumbed 3-manifolds called the Z_a invariants were discovered by Gukov, Pei, Putrov and Vafa. The leading rational powers of the Z_a invariants are invariants themselves denoted by Δ_a. In this paper we further analyze the structure of these Δ_a invariants. We outline some of the foundations of the Δ_a invariants and provide answers to some questions in the literature. We also provide a way to compute Δ_0 for Brieskorn spheres. § INTRODUCTION In <cit.> a new invariant of (weakly negative definite) plumbed 3-manifolds called Z_a invariants were discovered. These invariants are labelled by Spin^c structures and for a given (weakly negative definite) plumbed 3-manifold Y := Y(Γ) they take the form Z_a(Y;q) = 2^-cq^Δ_a(c_0q^0 + c_1q^1 + ⋯) where c ∈ℕ and Δ_a ∈ℚ. Both the numbers c and Δ_a turn out to be topological invariants. In this paper we will be focusing on the Δ_a invariant. We will give a formal definition of these invariants later on, but, as equation <ref> suggests, one can think of as the unique rational pre-factored power of the Z_a invariants. In [read-delta]Lemma <ref> we will formalize this notion further. For now, let's look at a couple of examples where these invariants pop up. As recorded in <cit.>, if Y = L(p, 1) one has that Z_0(Y;q) = -2q^p-3/4 and Z_1(Y;q) = q^p-3/4(2q^1/p). Thus Δ_0(Y) = p-3/4 and Δ_1(Y) = p^2-3p +1/4p. By using the diffeomorphism between L(1, 1) and S^3 one obtains that Z_0(S^3;q) = q^-1/2(2q-2) and hence Δ_0(S^3) = -1/2. In <cit.> the Δ_a invariants were studied more intensely and in the process these invariants were related to the correction terms d to Heegaard Floer Homology. Moreover as stated in <cit.>, the Δ_a invariants under the "3d Modularity Conjecture" <cit.> are equal to the scaling dimension of associated log-VOA modules. The goal of this paper is to investigate the structure of these Δ_a invariants and provide some answers to open questions in the literature regarding them. We list the main results in this paper below. * In [new-brieskorn-form]Proposition <ref>, we derive a formula for Δ_0 for Brieskorn spheres. In the process we obtain a formula for Z_0 for Brieskorn spheres which is immediately of the form q^Δ_0ℤ[[q]]. In Examples <ref> and <ref> we compute Z_0 and Δ_0 for Σ(2, 9, 11) and Σ(3, 7, 8). * In [z-hat-cob-not]Theorem <ref> we show that neither Z_a nor Δ_a are homology cobordism or cobordism invariants by providing some counterexamples. This answers a question raised in <cit.>. We further conjecture, in [spinc-hom-cob-not]Conjecture <ref>, that Δ_a is not a Spin^c homology cobordism invariant and provide evidence for this. * In [delta-hom-sphere]Proposition <ref> we prove that Δ_0(Y) = 1/2 1 if Y is a negative definite plumbed integral homology sphere. * In [rel-sharp]Proposition <ref>, using an existing example from the literature, we show that if Δ_b(Y) = 1/2 - d(Y, b) x holds for all 3-manifolds Y, we must have x=1. This answers a question that was raised in <cit.>. * In [program]Section <ref> we compute the Z_0 and Δ_0 invariants for various Brieskorn spheres which have not appeared thus far in the literature. Organization of the paper. In Section 2 we begin with a quick recap of the Z_a invariants before defining the Δ_a invariants and providing a characterization for them. In Section 3 we will analyze Δ_0 and Z_0 further for Brieskorn spheres Y:= Σ(b_1, b_2, b_3) of the form (b_1, b_2, b_3) ≠ (2, 3, 5). In particular we will derive a formula for finding Δ_0(Y) and also put forward a proposition which puts Z_0(Y;q) immediately into the form q^Δ_0P(q) for some P(q) ∈ℤ[[q]]. To digest these results we also give a couple of examples, those being Σ(2, 9, 11) and Σ(3,7,8) for which we compute Δ_0 and Z_0. In Section 4 we will show that the Δ_a invariants are not homology cobordism invariants by providing a counterexample. We will also discuss our conjecture that Δ_a is not a Spin^c homology cobordism invariant. In Section 5 we will discuss the relation between Δ_a and the correction terms to Heegaard Floer homology. In this section we will also show that Δ_0(Y) = 1/2 1 for negative definite plumbed Y. Acknowledgements. The author would like to thank Sergei Gukov for guidance throughout the course of this project and feedback on a draft of this paper. The author would further like to thank Eveliina Peltola for helpful comments during the author's Master's thesis, from which this paper is derived. The author would like to also thank Mrunmay Jagadale and Josef Svoboda for stimulating discussions. The author was supported by the FirstRand FNB 2020 Fund Education scholarship. § THE DELTA INVARIANT We will begin with a quick recap of the Z_a-invariants. First let us recall the definitions of negative definite and weakly negative definite plumbed manifolds. The reader may find the definition of a plumbed manifold in <cit.>. A weighted tree Γ, and its resulting plumbed manifold Y(Γ) is called * negative definite if the linking matrix M it produces is negative definite. * weakly negative definite if the linking matrix M it produces is invertible and M^-1 is negative definite on the subgroup of ℤ^s generated by the vertices of degree ≥ 3. By this we mean that if v_1, …, v_s are all the vertices of Γ and v_1, …, v_k where k < s are all the vertices of degree ≥ 3 then we require M^-1 is negative definite on the subgroup ℤ^k := {(l_v_1, …, l_v_k, 0, …, 0) ∈ℤ^s | l_v_i∈ℤ for 1 ≤ i ≤ k} that is for every ℓ∈ℤ^k we require that (ℓ, M^-1ℓ) < 0. The following is an adapted definition from <cit.>. Let P(z) be a Laurent series which is holomorphic in some open set containing {z ∈ℂ| 1 -c ≤ |z| ≤ 1 + c} except possibly at |z| =1 (that is P(z) can possibly be singular for |z|=1). We define v.p.∫_|z|=1 P(z) dz := 1/2[∫_|z|=1-ϵ P(z) dz + ∫_|z|=1+ϵ P(z) dz] where c>ϵ > 0 is some real number. The reason why the above definition doesn't depend on ϵ is by virtue of Cauchy's Theorem. Furthermore, if P(z) is not singular for |z| = 1 then one sees that v.p.∫_|z|=1 P(z) dz = ∫_|z|=1 P(z) dz If f(z_0, …, z_s) is a complex function, then we will write v.p.∫_|z_0|=1⋯∫_|z_s|=1 f(z_0, …, z_s) dz_s ⋯ dz_0 to mean v.p.∫_|z_0|=1v.p.∫_|z_1|=1⋯v.p.∫_|z_s|=1 f(z_0, …, z_s) dz_s ⋯ dz_0. Now let us turn to the definition of the Z_a-invariants. These invariants are constructed in <cit.>. The following definition has been taken with minor (but equivalent) modifications from <cit.>. Let Γ be a weighted tree with s vertices which produces a weakly negative definite plumbed manifold Y:= Y(Γ) with associated linking matrix M such that b_1(Y) = 0. Recall that Spin^c(Y) ≅ (2ℤ^s + δ)/2Mℤ^s (where δ = (δ_v)_v ∈Vert(Γ)∈ℤ^s denotes the vector comprised of the degrees of the vertices from Γ). For any a ∈Spin^c(Y) let a∈ 2ℤ^s + δ be a representative of a, then letting q be a formal variable, we define Z_a(Y;q) by Z_a(Y;q) = (-1)^π q^3σ - Tr(M)/4 ×v.p.∮_|z_v_1|=1⋯∮_|z_v_s|=1[∏_i=1^s (z_v_i - 1/z_v_i)^2-(v_i)] ·Θ_a^-M(z) dz_v_s/2π i z_v_s⋯dz_v_1/2π i z_v_1 where Θ_a^-M(z) = ∑_l = (l_v_1, …, l_v_s), l∈ 2Mℤ^s + aq^-(l, M^-1l)/4∏_i=1^sz_v_i^l_v_i, where in the above we have that v.p. denotes taking the principal part of the integral, σ is the signature of the matrix M (the number of positive eigenvalues of M minus the number of negative eigenvalues of M) and π is the number of positive eigenvalues of M. Let us now state a lemma which summarizes a few structural results on the Z_a invariants. The second of which formalizes a comment made in <cit.>. Let Y:= Y(Γ) be a weakly negative definite plumbed 3-manifold with b_1(Y) = 0. Let a⃗ be a representative of a ∈Spin^c(Y), then * Z_a(Y;q) can be written in the form Z_a(Y;q) = (-1)^π 2^-ηq^3σ - Tr(M)/4∑_ℓ∈ 2Mℤ^s + a⃗ c_ℓ· q^-(ℓ, M^-1ℓ)/4, where η∈ℕ∪{0} and c_ℓ⃗∈ℤ. * If v_i is a vertex of Γ which has (v_i) ≤ 2, then the number of values of l_v_i in the tuple ℓ⃗ = (l_v_1, …, l_v_s) ∈ 2Mℤ^s + a⃗ that yields a non-zero contribution to the q-series Z_a(Y;q) is finite. * Suppose that Z_a(Y;q) is given in the form of equation <ref>. Letting I = {ℓ∈ 2Mℤ^s + a| c_ℓ⃗≠ 0} then we have that max_l∈ I (l, M^-1l) exists. (i) One applies the result (cf. <cit.>) that ∫_|z|=1z^kdz = 0 if k ≠ 1 and it equals 2π i if k = -1 to the definition of the Z_a invariants and the result follows. (ii) The proof we outlines builds upon a basic argument laid out in <cit.>. Proving (ii) basically boils down to the fact that if we let q be a formal variable, then an integral of the form ∮_|z|=1(z-1/z)^m ∑_l ∈ℤ z^lq^l dz/2π i z for 0 ≤ m ≤ 2 is non-zero for only finitely many values of l. One then applies this general fact to the integrals which occur in Z_a(Y;q) to complete the proof of the lemma. (iii) The proof we provide fleshes out the details of the basic argument laid out in <cit.>. Suppose without loss of generality that Γ has k < s vertices of degree ≥ 3 and if v_1, …, v_s are the vertices of Γ then v_1, …, v_k are the vertices of degree ≥ 3. Since Y is a weakly negative definite plumbed manifold we have that M^-1 is negative definite on the subgroup ℤ^k := {(l_v_1, …, l_v_k, 0, …, 0) ∈ℤ^s | l_v_i∈ℤ for 1 ≤ i ≤ k} that is for every ℓ∈ℤ^k we have that (ℓ, M^-1ℓ) < 0. In particular this shows that max_l∈ℤ^k (l, M^-1l) exists (via [glob-min-cor]Corollary <ref>) and hence that max_l∈ I ∩ℤ^k (l, M^-1l) exists. Now for any l = (l_v_1, …,l_v_k, l_v_k+1, … l_v_s) ∈ I ⊆ 2Mℤ^s + a we see from part (ii) above that each of the l_v_k+1, …, l_v_s can only take on finitely many different values from ℤ. Thus one can view the indexing set I as I = ℤ^k∪ F where F is some finite set. Thus we can conclude that max_l∈ I (l, M^-1l) exists from the fact that max_l∈ I ∩ℤ^k (l, M^-1l) exists. In the case when Y:=Y(Γ) is negative definite the above proof of (iii) can be shortened as we show in the lemma below. Let Y := Y(Γ) be a a negative definite plumbed manifold arising from a weighted tree Γ with linking matrix M. Then for any a ∈Spin^c(Y), if a ∈ 2ℤ^s + δ is a representative of a, we have that max_l∈ 2Mℤ^s + a (l, M^-1l) exists. Since Y is a negative definite plumbed manifold, we have that M is an s× s integer valued negative definite matrix. Then note that [lem-neg-def]Lemma <ref> (part e.) shows that M^-1 is also an integer valued negative definite matrix and so [glob-min-cor]Corollary <ref> then immediately shows that max_l∈ 2Mℤ^s + a (l, M^-1l) exists. §.§ The definition of the Delta invariant Let Γ be a weighted tree, with |Vert(Γ)|=s which produces a weakly negative definite plumbed manifold Y:=Y(Γ) with linking matrix M and such that b_1(Y) = 0. Let a ∈Spin^c(Y), and a⃗∈ 2ℤ^s + δ be a representative of a, suppose that Z_a(Y;q) is given in the form as per Lemma <ref> part (i) Z_a(Y;q) = (-1)^π 2^-ηq^3σ - Tr(M)/4∑_ℓ∈ 2Mℤ^s + a c_ℓ· q^-(ℓ, M^-1ℓ)/4 where η∈ℕ∪{0} and c_ℓ⃗∈ℤ. Let I = {ℓ∈ 2Mℤ^s + a| c_ℓ⃗≠ 0}, then we define Δ_a = 3σ - Tr(M)/4 - max_l∈ I(l, M^-1l)/4 where (l, M^-1l) means the inner product of l with M^-1l. * We will sometimes write Δ_a(Y) to show the explicit dependence on the manifold Y, but we will for the most part simply write Δ_a instead. * The definition we provided above was produced in a slightly different manner in <cit.>. In <cit.> the indexing set I was not mentioned in the maximum max_l∈ I(l, M^-1l)/4 which occurs in the definition of Δ_a, though we believe that this indexing set was implicitly assumed, as is done in <cit.>. * Lemma <ref> part (iii) ensures that the maximum used in the definition above exists, and hence that the definition of Δ_a is well-defined. The following lemma shows that the Δ_a-invariants really are invariants of weakly negative definite plumbed manifolds. Let Γ_0, Γ_1 be two weighted trees, which produce weakly negative definite plumbed manifolds Y_0 := Y(Γ_0) and Y_1 := Y(Γ_1) with linking matrix M_0 and M_1 respectively. If Y_0 is diffeomorphic to Y_1 then we have that Δ_a(Y_0) = Δ_a(Y_1) This follows immediately since we have that Z_a(Y_0;q) = Z_a(Y_1;q) because the fact that Y_0 is diffeomorphic to Y_1 implies that Γ_0 and Γ_1 are related by a sequence of Neumann moves and the Z_a-invariants are invariant under Neumann moves. We expect to factorize Z_a(Y;q) into an algebraic object of the form 2^-ηq^Δ_aP(q) where Δ_a is some rational power, η∈ℕ∪{0} and P(q) ∈ℤ[[q]]. Let us now prove this. The result of the following proposition was stated in <cit.>, but was not proven. Let Γ be a weighted tree, which produces a weakly negative definite plumbed manifold Y(Γ) with linking matrix M. Then for any a ∈Spin^c(Y), we have that Z_a(Y;q) ∈ 2^-ηq^Δ_aℤ[[q]] where η∈ℕ∪{0} and 2^-ηq^Δ_aℤ[[q]] := {2^-ηq^Δ_aP(q) | P(q) ∈ℤ[[q]]}. From [z-hat-expanded-lemma]Lemma <ref> part (i) we know that Z_a(Y;q) is given in the form Z_a(Y;q) = (-1)^π 2^-ηq^3σ - Tr(M)/4∑_ℓ∈ I c_ℓ· q^-(ℓ, M^-1ℓ)/4 where I ⊆ 2Mℤ^s + a is an indexing set such that c_ℓ⃗≠ 0 for each ℓ⃗∈ I, a∈ 2ℤ^s + δ is a fixed representative of a and η∈ℕ∪{0}. Since Y is a weakly negative definite plumbed manifold by Lemma <ref> part (iii) we see that γ:= max_ℓ∈ I (ℓ, M^-1ℓ) exists. Thus we can factorize Z_a(Y;q) into the form Z_a(Y;q) = (-1)^π 2^-ηq^3σ - Tr(M) - γ/4∑_ℓ∈ I c_ℓ· q^γ - (ℓ, M^-1ℓ)/4 Now let ℓ⃗_0 ∈ I be such that γ = (ℓ_0, M^-1ℓ_0). Then to show that Z_a(Y;q) ∈ 2^-ηq^Δ_aℤ[[q]], where Δ_a = 3σ - Tr(M) - γ/4 it suffices to show that for any ℓ⃗∈ I that γ-(ℓ⃗, M^-1ℓ⃗) = (ℓ_0, M^-1ℓ_0) - (ℓ⃗, M^-1ℓ⃗) ∈ 4 ℤ_+ where ℤ_+ = {x ∈ℤ| x ≥ 0} because then it would follow that all the exponents of q on the right hand side of equation (<ref>) are non-negative integers. To that end let ℓ⃗∈ I be given arbitrarily. First notice that since γ:= max_ℓ∈ I (ℓ, M^-1ℓ) we have γ-(ℓ⃗, M^-1ℓ⃗) ≥ 0. Then secondly notice that we can express ℓ⃗ = ℓ_0 + 2Mv where v ∈ℤ^s. The reason for this is that both ℓ⃗ and ℓ_0 are elements of the set 2Mℤ^s + a and since a is fixed we have that ℓ⃗ - ℓ_0∈ 2Mℤ^s, which implies that ℓ⃗ = ℓ_0 + 2Mv. One can then compute (using the fact that M is a symmetric matrix and properties of the transpose) that (ℓ⃗, M^-1ℓ⃗) = ℓ⃗^TM^-1ℓ⃗ = (ℓ_0 + 2Mv)^TM^-1(ℓ_0 + 2Mv) = γ + 4ℓ_0^Tv + 4(v, Mv). Noting that ℓ_0^Tv ∈ℤ and (v, Mv) ∈ℤ, we can thus see that γ-(ℓ⃗, M^-1ℓ⃗) = -4ℓ_0^Tv - 4(v, Mv) ∈ 4 ℤ. Combining this with the fact that γ-(ℓ⃗, M^-1ℓ⃗) ≥ 0 implies that γ-(ℓ⃗, M^-1ℓ⃗) ∈ 4ℤ_+. This completes the proof of the proposition. §.§ A characterization of the Delta invariant The following lemma provides a very useful way to simply 'read off' the Δ_a-invariant from a given expression of Z_a(Y;q). A small statement within the statement of the lemma below appeared in <cit.> in a different form. Let Y = Y(Γ) be a weakly negative definite plumbed manifold with linking matrix M. Suppose that Z_a(Y;q) = q^δ(c_0 + c_1q^n_1 + c_2q^n_2 + ⋯ ) where δ∈ℚ, c_i ∈ℚ∖{0} and n_i ∈ℕ∪{0} for each i. Suppose moreover that n_0 = 0 (so that c_0 = c_0q^n_0 in the above) and n_i ≠ n_j for i ≠ j, then δ = Δ_a. From Z_a(Y;q) = q^δ(c_0 + c_1q^n_1 + c_2q^n_2 + ⋯ ), simply multiply the q^δ factor out and use the fact that n_0 = 0 to get Z_a(Y;q) = c_0q^n_0+δ + c_1q^n_1 + δ + c_2q^n_2 + δ + ⋯ = ∑_i ∈ I'c_iq^n_i + δ. where I' ⊆ℤ is some indexing set. From Lemma <ref> part (i) we see that we can also write Z_a(Y;q) = ∑_ℓ∈ I c_ℓ· q^3σ -Tr(M)-(ℓ, M^-1ℓ)/4 where each of the powers of q are unique, c_ℓ are some choice of rational numbers and I ⊆ 2Mℤ^s + a⃗ is an indexing set such that each of the rational numbers c_ℓ are non-zero [We can express Z_a(Y;q) in the form of equation <ref> if we absorb the leading factors of 2^-η in Lemma <ref> part (i).]. Comparing equations (<ref>) and (<ref>) we see that we must have n_i + δ = 3σ -Tr(M) - (ℓ⃗_i, M^-1ℓ⃗_i)/4 for each i ≥ 0 and moreover we must have a bijection between the indexing sets I' and I. In particular equation <ref> implies that δ = n_0 + δ = 3σ -Tr(M) - (ℓ⃗_0, M^-1ℓ⃗_0)/4 for some ℓ⃗_0 ∈ 2Mℤ^s + a⃗. Equation (<ref>) then implies that n_i = n_i + δ - δ = (ℓ⃗_0, M^-1ℓ⃗_0) - (ℓ⃗_i, M^-1ℓ⃗_i)/4. The assumptions that n_i = 0 and n_i ∈ℕ∪{0} for all i ≥ 0 along with the assumption that n_i ≠ n_j for any i and j imply that for i > 1 we have that each n_i > 0. This then implies that 1/4((ℓ⃗_0, M^-1ℓ⃗_0) - (ℓ⃗_i, M^-1ℓ⃗_i)) > 0 and hence that (ℓ⃗_0, M^-1ℓ⃗_0)> (ℓ⃗_i, M^-1ℓ⃗_i) for each i > 1. Thus (ℓ⃗_0, M^-1ℓ⃗_0) = max_l∈ I 1/4(l, M^-1l) and hence we find that δ = 3σ -Tr(M)/4 - max_l∈ I (l, M^-1l)/4 = Δ_a as desired. With the proof of this lemma, our earlier remark in the introduction that one can view Δ_a as the unique rational pre-factored power of q in Z_a(Y;q) is justified. §.§ The Delta invariant and orientation reversal Suppose that Y := Y(Γ) is a negative definite plumbed manifold. The orientation reversal of Y, that being -Y, is the plumbed manifold given by -Y := Y(-Γ) where -Γ is the weighted tree consists of the same vertices and edges as Γ but with the negation of each weight (see <cit.>). If M is the linking matrix of Y, then we see that -M is the linking matrix of -Y. Suppose that we wanted to find Z_a(-Y;q) and in particular Δ_a(-Y). Given the current definitions of both these objects we run into an issue, namely that the linking matrix -M of -Y will be positive definite rather than negative definite (this is because Y being a negative definite plumbed manifold implies that M is negative definite and hence that -M is positive definite). We can salvage a way to calculate Δ_a(-Y), by means of the following definition. Let Y := Y(Γ) be a negative definite plumbed manifold. If -Y is the orientation reversal of Y, then we define Δ_a(-Y) := -Δ_a(Y). This definition is justified by the fact that, if by Lemma <ref> part (i) we express Z_a(Y;q) in the following form: Z_a(Y;q) = (-1)^π 2^-ηq^3σ - Tr(M)/4∑_ℓ∈ I c_ℓ· q^-(ℓ, M^-1ℓ)/4 where I ⊆ 2Mℤ^s + a is some indexing set such that all the c_ℓ are non-zero, then we find that -Δ_a(Y) = -(3σ - Tr(M)/4) + max_l∈ I (l, M^-1l)/4 = 3σ(-M) - Tr(-M)/4 - min_l∈ I (l, -M^-1l)/4 where by σ(-M) we mean the signature of the matrix -M. Note that min_l∈ I(l, -M^-1l)/4 exists since -M is positive definite (and thus so is -M^-1). § DELTA FOR BRIESKORN SPHERES Given relatively co-prime integers 0 < b_1 < b_2 < b_3 one can define the Brieskorn sphere Σ(b_1, b_2, b_3) := {(x, y, z) ∈ℂ^3 | x^b_1 + y^b_2 + z^b_3 = 0}∩ S^5. Brieskorn spheres are negatively definite plumbed 3-manifolds which are also integral homology spheres. Hence for Brieskorn spheres there is a unique Spin^c structure, which we label as 0. Hence there is only a single corresponding Z_a-invariant that being Z_0(Σ(b_1, b_2, b_3);q) and a single Δ_0-invariant. The method for finding a plumbing description for Σ(b_1, b_2, b_3), was mentioned in <cit.>. Let us explain this method. The Brieskorn sphere Σ(b_1, b_2, b_3) is the Seifert manifold M(b; a_1/b_1,a_2/b_2, a_3/b_3 ) where b < 0 and a_1, a_2, a_3 >0 are chosen to satisfy the equation b_1b_2b_3 · b + b_2b_3a_1 + b_1b_3a_2 + b_1b_2a_3 = -1. The integers b, a_1, a_2, a_3 are chosen as there is far more than a single solution to equation <ref> above (thus there is also more than one plumbing description). Then the plumbing graph Γ (see Figure <ref>) which produces Σ(b_2, b_2, b_3) has a central vertex labelled by b and three "legs" whose vertices are given by the integers -k_i_1, …, -k_i_s_i for 1 ≤ i ≤ 3 which come from the continued fraction decomposition of b_i/a_i, i.e. b_i/a_i = [k_i_1, …, k_i_s_i] =k_i_1 - 1k_i_2 - 1⋯ - 1k_i_s_i. Let us give an example to illustrate this procedure. Consider the Brieskorn sphere Σ(2, 9, 11). The integers b = -1, a_1 = 1, a_2 = 2 and a_3 = 3 satisfy equation <ref> with b_1 = 2, b_2 = 9 and b_3=11. One can then compute the continued fractions b_1/a_1 = 2 = [2], b_2/a_2 = 9/2 = [5,2] and b_3/a_3 = 11/3 = [4,3] to produce the following plumbing graph for Σ(2, 9, 11). For the rest of this section let us fix the following notation: suppose we are given co-prime integers b_i, 1 ≤ i ≤ 3 which satisfy 0 < b_1 < b_2 < b_3 and we are considering the Brieskorn sphere Σ(b_1, b_2, b_3). We then define p := b_1b_2b_3 and α_1 := b_1b_2b_3 - b_1b_2 - b_1b_3 - b_2b_3, α_2 := b_1b_2b_3 + b_1b_2 - b_1b_3 - b_2b_3, α_3 := b_1b_2b_3 - b_1b_2 + b_1b_3 - b_2b_3, α_4 := b_1b_2b_3 + b_1b_2 + b_1b_3 - b_2b_3. In <cit.> a simplification of the general formula for Z_0(Σ(b_1, b_2, b_3);q) was put forward which incorporated functions known as false theta functions.[As mentioned in <cit.>, another derivation of the formula for Z_0(Σ(b_1, b_2, b_3);q) was given by Chung in <cit.>.] These functions are defined in <cit.> as Ψ^(a)_p(q) := ∑_n=0^∞ψ^(a)_2p(n) q^n^2/4p∈ q^a^2/4pℤ[[q]] where ψ^(a)_2p(n) = 1 if n= a +m· 2p for m ∈ℤ -1 if n= -a +m· 2p for m ∈ℤ 0 otherwise and, following the convention in <cit.>, the notation Ψ^c_1(a_1)+ c_2(a_2) + ⋯_p(q) is used as a shorthand for c_1Ψ^(a_1)_p(q) + c_2Ψ^(a_2)_p(q) + ⋯. We noticed however, that there was a sign error contained in the proof of <cit.>. Correcting this sign error we state the simplified formula below. For (b_1, b_2, b_3) ≠ (2, 3, 5), consider the Brieskorn sphere Y = Σ(b_1, b_2, b_3) which has linking matrix M, where the integers 0 < b_1 < b_2 < b_3 are pairwise relatively prime. Then we have that Z_0(Y;q) = q^ξ·(Ψ_b_1b_2b_3^(α_1) - (α_2) - (α_3) + (α_4)(q)) where α_i for 1 ≤ i≤ 3 is defined as in Notation <ref> and ξ = 1/4(∑_i=1^3 h_i - 3s - Tr(M) - b_2b_3/b_1 - b_1b_3/b_2 - b_1b_2/b_3) ∈ℚ wherein h_i is the absolute value of the determinant of the linking matrix of the graph obtained by deleting a terminal vertex on the i-th leg from the plumbing description Γ which produces Σ(b_1, b_2, b_3), and s is the number of vertices in Γ. [Note that in <cit.>, the notation Δ was adopted instead of the notation ξ we use here. One should however not confuse the notation Δ used in <cit.> with the Δ_0 invariant for Σ(b_1, b_2, b_3). From what we will show below one can see that Δ and Δ_0 are related but not equal.] The sign error that occurred in <cit.> occurs towards the end of the proof. We do not consider (b_1, b_2, b_3) = (2, 3, 5) because in that case the form of Z_0(Σ(b_1, b_2, b_3);q) which appears in equation (<ref>) is not entirely correct, namely there is an extra term that one needs to include. One can see this extra term in the statement of <cit.>. To simplify our proofs below and for brevity we exclude this case for the rest of the section. §.§ A formula for Delta for Brieskorn spheres We first state a few useful results that will help us. Suppose we are given co-prime integers b_i, 1 ≤ i ≤ 3 which satisfy 0 < b_1 < b_2 < b_3. Let p and α_i for 1 ≤ i ≤ 4 be given as in Notation <ref>. Then * min{α_1,α_2,α_3,α_4} = α_1. * α_i^2 -α_1^2/4p∈ℤ for 1 ≤ i ≤ 4. * Suppose further that (b_1, b_2, b_3) ≠ (2, 3, 5), then 1/b_1 + 1/b_2 + 1/b_3 < 1 and 0 < α_i < 2p for 1 ≤ i ≤ 4. (i) We have the following series of equalities α_1 = α_2 - b_1b_2 = α_3 - b_1b_3 = α_4 -b_1b_2- b_1b_3. Then the fact that b_i > 0 for 1 ≤ i ≤ 3 implies that α_1 < α_j for 2 ≤ j ≤ 4 which completes the proof. (ii) Note that to show that α_i^2 -α_1^2/4p∈ℤ for i = 1, 2, 3, 4 it suffices to show that 4p | (α_i^2 - α_1^2) for i = 2,3,4. Since α_i^2 - α_1^2 = (α_i+α_1)(α_i-α_1), if we can show that 4p divides (α_i+α_1)(α_i-α_1) for i = 2,3,4 then we are done. To that end observe that we have the following series of equalities: (α_2+α_1)(α_2-α_1) = (2b_1b_2b_3-2b_1b_3-2b_2b_3)(2b_1b_2) (α_3+α_1)(α_3-α_1) = (2b_1b_2b_3-2b_1b_2-2b_2b_3)(2b_1b_3) (α_4+α_1)(α_4-α_1) = (2b_1b_2b_3-2b_2b_3)(2b_1b_2+2b_1b_3) In all the cases above one can just expand the right hand side and check directly that 4p where p=b_1b_2b_3 divides all the quantities above. (iii) Proof omitted - follows from elementary number theory arguments. Suppose (b_1, b_2, b_3) ≠ (2, 3, 5) where the integers 0 < b_1 < b_2 < b_3 are pairwise relatively prime. Suppose the Brieskorn sphere Y = Σ(b_1, b_2, b_3) has linking matrix M then we have that: Z_0(Y;q) = q^Δ_0·[q^-α_1^2/4p(Ψ_b_1b_2b_3^(α_1) - (α_2) - (α_3) + (α_4)(q))] where Δ_0= ξ + α_1^2/4p wherein p = b_1b_2b_3 and ξ = 1/4(∑_i=1^3 h_i - 3s - Tr(M) - b_2b_3/b_1 - b_1b_3/b_2 - b_1b_2/b_3) ∈ℚ wherein h_i is the absolute value of the determinant of the linking matrix of the graph obtained by deleting a terminal vertex on the i-th leg from the decomposition Γ of Σ(b_1, b_2, b_3), and s is the number of vertices in Γ. Let Y = Σ(b_1, b_2, b_3). Using the notation from [corrected-brie]Theorem <ref> (and that p:= b_1b_2b_3 for later bits in the proof) we know that Z_0(Y;q) = q^ξ·(Ψ_b_1b_2b_3^(α_1) - (α_2) - (α_3) + (α_4)(q)) where ξ was defined in [corrected-brie]Theorem <ref>. Then using the fact that in equation <ref> we defined that Ψ^(a)_p(q) := ∑_n=0^∞ψ^(a)_2p(n) q^n^2/4p∈ q^a^2/4pℤ[[q]] we see that Ψ_b_1b_2b_3^(α_i)(q) = q^α_i^2/4pP_i(q) where P_i(q) ∈ℤ[[q]] for 1 ≤ i ≤ 4. Thus, by making use of how we defined Ψ_b_1b_2b_3^(α_1) - (α_2) - (α_3) + (α_4)(q) as a linear combination, we can factorize Z_0(Y;q) into the form: Z_0(Y;q) = q^ξ·(Ψ_b_1b_2b_3^(α_1)(q) - Ψ_b_1b_2b_3^(α_2)(q) - Ψ_b_1b_2b_3^(α_3)(q)+ Ψ_b_1b_2b_3^(α_1)(q)) = q^ξ(q^α_1^2/4pP_1(q) - q^α_2^2/4pP_2(q) - q^α_3^2/4pP_3(q) + q^α_4^2/4pP_4(q)) = q^ξ + α_1^2/4p(P_1(q) - q^α_2^2-α_1^2/4pP_2(q) - q^α_3^2-α_1^2/4pP_3(q) + q^α_4^2-α_1^2/4pP_4(q)). Note that [brie-pow-int]Lemma <ref> part (ii) implies that α_i^2-α_1^2/4p∈ℤ for 1≤ i ≤ 4. Moreover [brie-pow-int]Lemma <ref> part (i) implies that α_i^2-α_1^2/4p∈ℕ∪{0} for 1≤ i ≤ 4. Thus in particular we see that q^α_i^2-α_1^2/4pP_i(q) ∈ℤ[[q]] for 1 ≤ i ≤ 4. Moreover since Ψ_b_1b_2b_3^(α_1)(q) = q^α_1^2/4pP_1(q) we then see by expanding the definition of Ψ_b_1b_2b_3^(α_1)(q) that q^α_1^2/4pP_1(q) = Ψ_b_1b_2b_3^(α_1)(q) = q^α_1^2/4p + ∑_m ≥ 1^∞ q^α_1^2/4p+α_im+m^2p - q^α_1^2/4p-α_im+m^2p = q^α_1^2/4p(1 + ∑_m ≥ 1^∞ q^α_im+m^2p - q^-α_im+m^2p) which implies that P_1(q) = 1 + ∑_m ≥ 1^∞ q^α_im+m^2p - q^-α_im+m^2p (to see this simply multiply[This multiplication is well-defined in 𝐤, the Novikov field] both sides of the above equation by q^-α_1^2/4p) and importantly from this we deduce that the leading term of P_1(q) is 1. From this we can apply [read-delta]Lemma <ref> to the form of Z_0(Y;q) obtained in equation <ref> to see that Δ_0(Y) = ξ +α_1^2/4p. §.§ Computing Z-hat and Delta-0 for Brieskorn Spheres Suppose a Brieskorn sphere Σ(b_1, b_2, b_3) is given such that (b_1, b_2, b_3) ≠ (2, 3, 5) and one wants to compute Z_0 and Δ_0 for it. Then one needs to do the following. * First find integers b < 0 and a_1, a_2, a_3 >0 which satisfy b_1b_2b_3 · b + b_2b_3a_1 + b_1b_3a_2 + b_1b_2a_3 = -1 and produce the plumbing graph Γ of Σ(b_1, b_2, b_3). * Then delete the terminal vertices to produce new graphs Γ_i with linking matrices M_i for 1 ≤ i ≤ 3 and compute h_i := det(M_i). * Then compute α_i (as defined in Proposition <ref>) for 1 ≤ i ≤ 4. * Finally, use all of this data as input to Proposition <ref> to compute Z_0(Σ(b_1, b_2, b_3)) and Δ_0(Σ(b_1, b_2, b_3)). This process can be viewed as an algorithm and thus be coded into a program. Let's now consider an example to see how to use this proposition to compute the Z_0 and Δ_0 invariants for Brieskorn spheres in practice. Consider Σ(2, 9, 11). The plumbing description for this Brieskorn sphere was computed in [sigma-2-9-11]Example <ref> and is depicted below: We first notice that s = |Vert(Γ)| = 6. The linking matrix is given by: M = [first-row]cccccc v_1 v_2 v_3 v_4 v_5 v_6 -1 1 1 0 1 0 1 -2 0 0 0 0 1 0 -5 1 0 0 0 0 1 -2 0 0 1 0 0 0 -4 1 0 0 0 0 1 -3 We find that Tr(M) = -17. If we delete the terminal vertex v_2 on the first leg we obtain the linking matrix M_1 for the corresponding plumbing graph Γ_1. Deleting the terminal vertex v_4 on the second leg yields the linking matrix M_2 for the corresponding plumbing graph Γ_2. Deleting the terminal vertex v_6 on the third leg yields the linking matrix M_3 for the corresponding plumbing graph Γ_3. The linking matrices M_i corresponding to Γ_i for 1 ≤ i ≤ 3 are collected below: M_1 = [first-row]ccccc v_1 v_2 v_3 v_4 v_5 -1 1 0 1 0 1 -5 1 0 0 0 1 -2 0 0 1 0 0 -4 1 0 0 0 1 -3 M_2 = [first-row]ccccc v_1 v_2 v_3 v_4 v_5 -1 1 1 1 0 1 -2 0 0 0 1 0 -5 0 0 1 0 0 -4 1 0 0 0 1 -3 M_3 = [first-row]ccccc v_1 v_2 v_3 v_4 v_5 -1 1 1 0 1 1 -2 0 0 0 1 0 -5 1 0 0 0 1 -2 0 1 0 0 0 -4 Now one needs to compute h_i = |(M_i)| for 1 ≤ i ≤ 3. One finds that h_1 = 50, h_2 = 3 and h_3 = 2. Letting b_1 = 2, b_2 = 9, b_3 = 11 and recalling that p = b_1b_2b_3 = 198 we further calculate that α_1 = 59, α_2 = 95, α_3= 103, α_4 = 139 and moreover, α_2^2 - α_1^2/4p = 7, α_3^2 - α_1^2/4p = 9, α_4^2 - α_1^2/4p = 20. Then by substituting in the values we just found in the formulas we derived for Δ_0 and Z_0 for Brieskorn spheres in Proposition <ref>, we find that Δ_0(Σ(2, 9, 11)) = 9/2 and Z_0(Σ(2, 9, 11);q) = q^9/2·[q^-59^2/198(Ψ_198^(59) - (95) - (103) + (139)(q))]. Consider the Brieskorn sphere Σ(3, 7, 8). The integers b = -1, a_1 = 1, a_2 = 2 and a_3 = 3 satisfy equation <ref> with b_1 = 3, b_2 = 7 and b_3=8. One can then compute the continued fractions b_1/a_1 = 3 = [3], b_2/a_2 = 7/2 = [4,2] and b_3/a_3 = 8/3 = [3,3] to produce the following plumbing graph for Σ(3, 7, 8). We leave it as an exercise to the reader to use this plumbing description to find Δ_0 and Z_0 for Σ(3, 7, 8) from Proposition <ref>. One finds that Δ_0(Σ(3, 7, 8)) = 13/2 and further that Z_0(Σ(3, 7, 8);q) = q^13/2·[q^-67^2/168(Ψ_168^(67) - (109) - (115) + (157)(q))]. § DELTA AND HOMOLOGY COBORDISM We begin with a brief review of homology cobordism, the interested reader may consult either <cit.>, <cit.> or <cit.>. Let Σ_0 and Σ_1 be two oriented integral homology 3-spheres. We call Σ_0 and Σ_1, homology cobordant if there exists a smooth, compact, oriented 4-manifold W with boundary ∂ W = -Σ_0 ⊔Σ_1 such that inclusions ι_0 : Σ_0 ↪ W and ι_1 : Σ_1 ↪ W induce isomorphisms (ι_0)_* : H_i(Σ_0) → H_i(W) and (ι_1)_* : H_i(Σ_1) → H_i(W) for all i ≥ 0. Homology cobordism is an equivalence relation on the class of all oriented integral homology 3-spheres. We define the (3-dimensional) homology cobordism group, Θ^3, to be the class of oriented integral smooth homology 3-spheres modulo the equivalence relation of homology cobordism. An addition operation is given by [M] + [N] := [M # N] wherein [M], [N] denote the homology cobordism classes of M and N. The identity of the group is [S^3] and for every [M] an additive inverse is given by [-M]. Accordingly, we say that an oriented integral smooth homology 3-sphere is homology cobordant to zero if it is homology cobordant to S^3. A homology cobordism invariant is a function f : Θ^3 → X where X is any set. In particular this means that if M and N are two oriented homology 3-spheres which are homology cobordant, then f(M) = f(N). Two examples of homology cobordism invariants are given by the Rokhlin invariant μ : Θ^3 →ℤ/2 and the correction terms to Heegaard Floer homology d : Θ^3 →ℤ which we'll see later. A question was raised in <cit.> which asked whether the Δ_0 invariants were homology cobordism invariants. In this section we will show that this is not the case. In searching for a counterexample, we will make use of the following theorem which appears in a different (but equivalent) form in <cit.> and <cit.>. The following families of Brieskorn spheres are all homology cobordant to S^3. * Σ(p, pq-1, pq+1) for p even and q odd, * Σ(p, pq+1, pq+2) for p odd and any q. By <cit.> and <cit.> manifolds from the families (i) and (ii) bound smooth contractible 4-manifolds. Let Σ denote any manifold from either of the families (i) or (ii) above and let W denote the smooth contractible 4-manifold W which bounds Σ. Then since W is a contractible 4 -manifold with non-empty boundary which is a homology 3-sphere, W is compact and orientable. It is known (see <cit.>) that a smooth oriented integral homology 3-sphere M bounds a smooth compact orientable 4-manifold W with H_i(W) = 0 for all i ≥ 0, if and only if M is homology cobordant to S^3. From this above characterization, we thus conclude that Σ is homology cobordant to S^3. The Brieskorn manifolds Σ(2, 9, 11) and Σ(3, 7, 8) are both homology cobordant to S^3. From Theorem <ref>, in (i) take p=2 and q=5 and in (ii) take p=3 and q=2. Z_a and Δ_a are neither homology cobordism invariants, nor cobordism invariants. We saw in Corollary <ref> that Σ(2, 9, 11) is homology cobordant to S^3. Moreover since any homology cobordism between two manifolds is by definition a cobordism between them, Σ(2, 9, 11) is also cobordant to S^3. If in general Z_a and Δ_a were homology cobordism invariants, then we would have in this particular case that Z_0(Σ(2, 9, 11);q) = Z_0(S^3;q) and also that Δ_0(Σ(2, 9, 11)) = Δ_0(S^3). However in [sigma-2-9-11-inv]Example <ref> and [delta-s3]Example <ref> respectively we computed that Z_0(Σ(2, 9, 11);q) = q^9/2(1-q^7-q^9+q^20+q^79+⋯) Z_0(S^3;q) = q^-1/2(2q-2) Δ_0(Σ(2, 9, 11)) = 9/2 Δ_0(S^3) = -1/2. Thus the Z_a and Δ_a invariants are not invariants of homology cobordism. Similarly since Σ(2, 9, 11) is also cobordant to S^3 neither the Z_a nor Δ_a invariants are invariants of cobordism. In the above we can use other Brieskorn manifolds Σ(b_1,b_2, b_3) which appeared in [hom-cob-brie-sphere-thm]Theorem <ref>. §.§ Delta-invariants and Spin-c homology cobordism <cit.> Suppose we have a oriented n-dimensional manifold Y with boundary Σ. There exists a collar neighbourhood U ⊆ Y such that U is diffeomorphic to Σ× [0, 1]. Now Σ× [0,1] is homotopy equivalent to Σ. Then recall that Spin^c(Σ) ≅ H^2(Σ;ℤ). We thus have a sequence of isomorphisms followed by an induced inclusion map Spin^c(Σ) ≅_aff H^2(Σ;ℤ) ≅ H^2(Σ× [0, 1];ℤ) ≅ H^2(U;ℤ) H^2(Y;ℤ) ≅_affSpin^c(Y). Then we define a relative Spin^c structure on Y, to be a choice of mapping a Spin^c structure on Y to the element in Spin^c(Σ) ≅ H^2(Σ;ℤ) which corresponds to the first Chern class c_1 ∈ H^2(Σ;ℤ) via the sequence of maps above. The set of all relative Spin^c structures are denoted Spin^c(Y, ∂ Y). If Y is homology cobordant to S^3, then Y is Spin^c cobordant to S^3 We have that both Y and S^3 are integral homology spheres and hence also rational homology spheres. Let W be a cobordism between Y and S^3 such that H_i(Y;ℤ) → H_i(W;ℤ) and H_i(S^3;ℤ) → H_i(W;ℤ) are isomorphisms for all i. From the fact that Y and S^3 are integral homology spheres we see that H_i(W;ℚ) = H_i(W;ℤ) ⊗ℚ = 0 for i = 1,2. Moreover both Y and S^3 have only the trivial Spin^c structure on them. Furthermore since H_i(W;ℤ) ≅ H_i(S^3;ℤ) in particular, via the Universal Coefficients Theorem one sees that H^2(W;ℤ) = 0, this implies that W has only the trivial Spin^c structure on it. By [rel-spinc-ii]Definition <ref> above we see that the trivial Spin^c structure on W restricts to the trivial Spin^c structures on Y and S^3. Thus Y is Spin^c cobordant to S^3. Δ_a is not a Spin^c homology cobordism invariant. Consider Σ(2, 9, 11). Note that Σ(2, 9, 11) is homology cobordant to S^3 and if [claim-spin-cob]Claim <ref> holds, then Σ(2, 9, 11) is Spin^c cobordant to S^3. The proof now follows in exactly the same manner as the proof of [delta-hom-cob-not]Theorem <ref>. The homology cobordism group Θ^3 embeds into the Spin^c homology cobordism group θ^c First note that if Y_1 and Y_2 are two integral homology spheres which are homology cobordant via a cobordism W, in the sense that H_i(Y_j;ℤ) → H_i(W;ℤ) are isomorphisms for all i and j = 1,2, then since for any space X we have H_n(X;ℚ) = H_n(X;ℤ)⊗ℚ (see e.g. <cit.>) we see that H_i(Y_j;ℚ) → H_i(W;ℚ) are isomorphisms for all i and j = 1,2 and further since integer homology spheres are also rational homology spheres that H_i(W;ℚ) = 0 for i = 1,2. We can also compute that H^2(W;ℤ) ≅Hom(H_2(W;ℤ);ℤ) ⊕Ext(H_1(W;ℤ);ℤ) = Hom(0;ℤ) ⊕Ext(0;ℤ) = 0, since H_i(W;ℤ) ≅ H_i(Y_j;ℤ) for j = 1,2 and H_i(Y_j;ℤ) ≅ H_i(S^3;ℤ) since Y_1, Y_2 are both integral homology spheres. This implies that W has only the trivial Spin^c structure on it. By [rel-spinc-ii]Definition <ref> above we see that the trivial Spin^c structure on W restricts to the trivial Spin^c structures on Y_1 and Y_2. Thus Y_1 is Spin^c cobordant to Y_2. In [claim-spin-cob]Claim <ref>, [spinc-hom-cob-not]Conjecture <ref> and [hom-cob-spin-cob]Claim <ref>, the only thing that is preventing us from turning all three into propositions is that we are somewhat uncertain as to whether [rel-spinc-ii]Definition <ref> agrees with conventions. Suppose that (Y_1, t_1) and (Y_2, t_2) are two rational homology three-spheres which are Spin^c cobordant via a cobordism W. Then H_i(W;ℚ) = 0 for i = 1, 2 if and only if H_i(Y_1;ℚ) → H_i(W;ℚ) and H_i(Y_2;ℚ) → H_i(W;ℚ) are isomorphisms for all i. The reverse direction () is immediate since Y_1 and Y_2 are rational homology three spheres. For the forward direction (), looking at the long exact sequence of the pairs (Y_1, W) and (Y_2, W) should show that H_i(Y;ℚ) → H_i(W;ℚ) is an isomorphism for all i making use of the fact that Y_1 and Y_2 are rational homology three spheres. § DELTA AND CORRECTION TERMS, D, TO HEEGAARD FLOER HOMOLOGY We introduce Heegard Floer homology closely following <cit.> and <cit.>. §.§ Heegard Floer Homology Let Y be a closed orientable 3-manifold with Heegard decomposition Y = U_0 ∪_Σ U_1. From the Heegard decomposition we can produce a Heegard diagram (Σ_g, α_1, …, α_g, β_1, …, β_g) which is sometimes written as (Σ_g, α, β). We then define Sym(Σ_g) = (∏_i=1^gΣ_g)/𝔖_g where 𝔖_g is the symmetric group on g letters which acts on (∏_i=1^gΣ_g) in the following way: (σ, (x_1, …, x_g)) ↦ (x_σ(1), …, x_σ(g)) for σ∈Σ_g and (x_1, …, x_g) ∈∏_i=1^gΣ_g (cf. <cit.>). As similarly stated in (cf. <cit.>), the "attaching circles" α_i and β_i produce tori 𝕋_α = α_1 ×⋯×α_g and 𝕋_β = β_1 ×⋯×β_g which are subsets of Sym(Σ_g). A certain map s_z : 𝕋_α∩𝕋_β→Spin^c(Y) is then defined (cf. <cit.>). Making use of this map s_z we can define groups: * CF^∞(α, β, a) to be the free abelian group generated by the pairs [x, i] where the x ∈𝕋_α∩𝕋_β with s_z(x) = a and i ∈ℤ. * CF^-(α, β, a) to be the free abelian group generated by the pairs [x, i] where the x ∈𝕋_α∩𝕋_β with s_z(x) = a and i <0 where i is an integer. * CF^+(α, β, a) := CF^∞(α, β, a)/CF^-(α, β, a). One can assign a (relative) grading to each of these groups which then allow us to define chain complexes from them. From the resulting chain complexes we get homology groups HF^-(Y, a), HF^+(Y, a) and HF^∞(Y, a) as is usually done in algebraic topology. According to <cit.>, if Y is additionally a rational homology sphere, we have that HF^-(Y, a), HF^+(Y, a), HF^∞(Y, a) are ℚ-graded. This means, for example in the case of HF^+(Y, a) , that HF^+(Y, a) = ⊕_ω∈ℚ HF_ω^+(Y, a). We have a similar direct sum decomposition for HF^∞(Y, a). Furthermore for each ω∈ℚ there is a family of homomorphisms HF_ω^+(Y, a) → HF^∞_ω(Y, a) which come from a long exact sequence (see <cit.>) ⋯ HF^-(Y, a) → HF^+(Y, a) → HF^∞(Y, a) →⋯. The following is a slightly expanded version of <cit.>. Let Y be an oriented rational homology 3-sphere and a ∈Spin^c(Y). We define a rational number d(Y, a) ∈ℚ, called the correction term, to be the minimal ω∈ℚ such that an element x in the image of the homomorphism HF_ω^+(Y, a) → HF^∞_ω(Y, a) is non-torsion, i.e. x^n ≠ 0 for any integer n. In the case when Y is an oriented 3-manifold with b_1(Y) = 0, then there is a unique Spin^c structure on Y and we simply write d(Y) instead of d(Y, a). * The correction terms give group homomorphisms: * d : θ^c →ℚ where θ^c is the Spin^c homology cobordism group (<cit.>) * d : Θ^3 →ℤ where Θ^3 is the homology cobordism group (<cit.>) * Let (Y, a) be a rational homology 3-sphere with Spin^c structure a. Then it follows that d(-Y, a) = - d(Y, a) where -Y is the orientation reversal of Y. (<cit.>) * Let (Y, a) be a rational homology 3-sphere with Spin^c structure a. Then it follows that d(Y, a) = d(Y, a) where a is the conjugation of a. (<cit.>) The following is a corollary to part (i) of Theorem <ref> above. The correction term for S^3 is zero, i.e. d(S^3) = 0 The homology cobordism class of S^3 is the identity of the group Θ^3. Thus it must be mapped to 0 ∈ℤ since d is a group homomorphism. §.§ Connections between Delta and d Notice that for an oriented rational homology 3-sphere Y with Spin^c structure a, we have the following connections, outlined in <cit.>, between Δ_a(Y) and d(Y, a): * Δ_a(Y) and d(Y, a) are both labelled by Spin^c structures on Y. * Δ_a(Y) and d(Y, a) are both negated under orientation reversal of Y. * Δ_a(Y) and d(Y, a) both remain unchanged under conjugation of Spin^c structures. Therefore it is natural to ask the question: "Just how closely related are Δ_a(Y) and d(Y, a)?". In particular d(Y, a) is a homology cobordism invariant, so a further question one could ask is: "Is Δ_a(Y) also a homology cobordism invariant"? To partially answer these questions, the following proposition was proven in <cit.>. If Y = Y(Γ) is a negative definite plumbed manifold then Δ_a(Y) = 1/2 - d(Y, a) 1. For the proof of this fact, we refer the reader to <cit.> The result that Δ_a(Y) = 1/2 - d(Y, a) 1 could equivalently be rewritten as as Δ_a(Y) = n+ 1/2 - d(Y, a) for some n ∈ℤ. The function f:{class of negative definite plumbed manifolds}→ℚ defined by f(Y) = Δ_a(Y) 1 is a homology cobordism invariant If Y_1 is homology cobordant to Y_2. Then Δ_a(Y_1) = 1/2 - d(Y_1, a) + n and Δ_a(Y_2) = 1/2 - d(Y_2, a) + m for some n, m ∈ℤ by Proposition <ref>. Since the correction term d is a homology cobordism invariant we have that d(Y_1, a) = d(Y_2, a). Thus Δ_a(Y_2) = 1/2 - d(Y_1, a) + m. From this we obtain that Δ_a(Y_1) = Δ_a(Y_2) + n-m. Thus this implies that Δ_a(Y_1) = Δ_a(Y_2) 1 and hence that f(Y_1) = f(Y_2). Let us now take a look at an example to see this phenomenon. [Part of this example was inspired by discussion with Mrunmay Jagadale.] We know from [2-9-11-3-7-8-hom-cob-zero]Corollary <ref> that Σ(2, 9, 11) and Σ(3, 7, 8) are homology cobordant to S^3. We also saw in Examples <ref>, <ref> and <ref> that Δ_0(S^3) = 1/2, Δ_0(Σ(2, 9, 11)) = 9/2 = 1/2 + 4 and Δ_0(Σ(3, 7, 8)) = 13/2 = 1/2 + 6. Then notice that Δ_0(S^3) = Δ_0(Σ(2, 9, 11)) = Δ_0(Σ(3, 7, 8)) 1 as expected. §.§ Delta and integral homology spheres Let Y := Y(Γ) be a negative definite plumbed 3-manifold which is an integral homology sphere, then Δ_0(Y) = 1/2 1 By [d-conj]Proposition <ref> Δ_0(Y) = 1/2 + d(Y) + n for some n ∈ℤ. Since d : Θ^3 →ℤ is a homomorphism we have that d(Y) ∈ℤ which completes the proof. §.§ Sharpness of the relation between Delta invariants and correction terms Suppose now that [d-conj]Proposition <ref> instead stated that Δ_a(Y) = 1/2 - d(Y, a) x where x is some integer. Intuitively the possibility of showing that the relation Δ_a(Y) = 1/2 - d(Y, a) x holds, for a higher value of x is desirable since this leads to a stronger relation between Δ_a(Y) and d(Y, a). It was stated in <cit.> that the best hope would be to find such a relation Δ_a(Y) = 1/2 - d(Y, a) x for x=2. This conclusion was based off the examples computed within <cit.>. Let us now provide an independent proof of this. Suppose that the relation Δ_a(Y) = 1/2 - d(Y, a) x holds for all negative definite plumbed manifolds Y = Y(Γ). Then x ≤ 2. The relation that Δ_a(Y) = 1/2 - d(Y, a) x can be equivalently restated as Δ_a(Y) = nx + 1/2 - d(Y, a) for some n ∈ℤ. Now we have that Δ_a(Σ(2, 9, 11)) = 9/2 = 1/2 + 4 and Δ_a(Σ(3, 7, 8)) = 13/2 = 1/2 + 6 since we have that d(Σ(2, 9, 11)) = d(Σ(3, 7, 8)) = d(S^3) = 0 since Σ(2, 9, 11) and Σ(3, 7, 8) are both homology cobordant to S^3. Now by equation (<ref>) we expect that 4 = nx and 6 = mx for some n, m ∈ℤ. Thus we see that x is a divisor of both 4 and 6. The only possible common divisors are 1 and 2 hence x ∈{1, 2} thus completing the proof. A question was raised in <cit.> as to whether it would be possible to improve the relation that Δ_a(Y) = 1/2 - d(Y, a) 1 to Δ_a(Y) = 1/2 - d(Y, a) 2? This question was asked generally for 3-manifolds Y which aren't necessarily plumbed. We will show below that this cannot be the case and that in fact an example within the paper <cit.> answers this question. If Δ_a(Y) = 1/2 - d(Y, a) x holds for all 3-manifolds Y and all a ∈Spin^c(Y), then x=1. The relation that Δ_a(Y) = 1/2 - d(Y, a) x can be equivalently restated as Δ_a(Y) = nx + 1/2 - d(Y, a) for some n ∈ℤ. If there exists some manifold Y and a ∈Spin^c(Y) such that Δ_a(Y) = 1/2 - d(Y, a) ± 1, then the only possible solution for n and x in equation (<ref>) are n, x ∈{-1, 1}. Thus to prove this result, it suffices to find a 3-manifold Y, such that Δ_a(Y) = 1/2 - d(Y, a) ± 1 since this will force x to be equal to 1. To that end, in <cit.> it was computed that for Y := S^3_-1/2(4_1) we have Δ_0(Y) = -1/2 and d(Y) = 0. Thus we have that Δ_0(Y) = 1/2 - d(Y) 1 Δ_0(Y) = 1/2 - d(Y) + n -1/2 = 1/2 + n n=-1 and this proves the proposition. § COMPARING DELTA AND THE CORRECTION TERMS FOR SOME FURTHER EXAMPLES Let us look at some further examples with which we can compare Δ_0 and the correction terms d. For many classes of 3-manifolds there exist techniques which aid the computation of both Heegard Floer homology and the correction terms. Here is one such example that we shall use in the next section. The paper <cit.> gives us the following computational tool. For 1/1-Dehn surgery on the torus knot T_p, p+1 embedded in S^3, that is for S^3_+1(T_p, p+1) = -Σ(p, p+1, p(p+1)-1) we have d(S^3_+1(T_p, p+1)) =- ⌊p/2⌋(⌊p/2⌋ +1). A proof can be found in <cit.>. Using [d-brie-family-p]Proposition <ref> we will compute the correction term, d, for various manifolds of the form -Σ(p, p+1, p(p+1)-1) and compare the values of d obtained to Δ_0 for these manifolds. These examples are new in that they haven't been explicitly produced in this form before, but are not new in the sense that they say anything new about Proposition <ref>. There is a subtlety that the reader should be aware of however, as stated in <cit.>, S^3_+1(T_p, p+1) = -Σ(p, p+1, p(p+1)-1) is not a negative definite plumbed manifold, however Σ(p, p+1, p(p+1)-1) is a negative definite plumbed manifold, thus we will compare Δ_0 and d for manifolds of the form Σ(p, p+1, p(p+1)-1). To compute d(Σ(p, p+1, p(p+1)-1)) we simply use the fact that d(Σ(p, p+1, p(p+1)-1)) = -d(-Σ(p, p+1, p(p+1)-1)). We produce the following examples of Brieskorn spheres Σ(p, p+1, p(p+1)-1) for which we can compare Δ_0 to d. § SOME COMPUTATIONS OF Z-HAT AND DELTA FOR BRIESKORN SPHERES §.§ Further computations for Brieskorn Spheres homology cobordant to the three-sphere. § NEGATIVE AND POSITIVE DEFINITE MATRICES We recall the following definition from <cit.>. Let A ∈ M_n × n(ℝ) be an n× n matrix with real entries. We say that A is * negative definite if for all x ∈ℝ^n ∖{0} we have that x^T Ax = (x, Ax) < 0. * positive definite if for all x ∈ℝ^n ∖{0} we have that x^T Ax > 0. The following lemma, collected from <cit.>, can be reconstructed from the arguments and statements given in <cit.> and using the fact that, as stated in <cit.>, A is negative definite if and only if -A is positive definite. Negative definite matrices possess the following properties: * If A = (a_ij) is a negative definite n× n matrix with real entries, then the the diagonal entries of the matrix, a_ii for 1 ≤ i ≤ n, are all negative, that is a_ii < 0 for 1 ≤ i ≤ n. * Let A be a negative definite n× n symmetric matrix, then A is invertible. * Let A be a negative definite n× n matrix. Then all of the eigenvalues of A are negative. * If A is a negative definite n× n symmetric matrix with real entries, then A^-1 is also a negative definite n× n symmetric matrix with real entries. * If A = (a_ij) is a negative definite n× n symmetric matrix with integral entries, that is a_ij∈ℤ for all i, j. Then A^-1 is also a negative definite n× n symmetric matrix with integral entries. An important, standard fact about negative definite matrices that we will use is the following (which can be reconstructed from material in <cit.>): Let A ∈ M_n × n(ℝ) be a symmetric negative definite n× n matrix with real entries. Then the quadratic form q : ℝ^n →ℝ defined by q(x) = x^TAx has a global maximum. We include the standard proof below for the benefit of the reader. One can reconstruct parts of the proof below from material in <cit.>. Parts of the proof below were inspired by <cit.>. By standard linear algebra, the matrix A can be diagonalized into the form A = P^-1BP for some invertible n × n matrix P such that B is a diagonal matrix containing all the eigenvalues of A, those being λ_i for 1 ≤ i ≤ n on the diagonal and zero's elsewhere. That is B = [ λ_1 ; ⋱ ; λ_n ]. Then A and B are similar matrices (see <cit.>) and so represent the same linear map L : ℝ^n →ℝ^n (see <cit.>). Hence q can be written as q(x) = x^TBx. One sees that for x = (x_1, …, x_n) we have that q(x) = ∑_i=1^n λ_i x_i^2. Since A is negative definite, by Lemma <ref> part (c.) above, we have that λ_i < 0 for each 1 ≤ i ≤ n. Thus q has a global maximum given at x = (0, …, 0) = 0∈ℝ^n. Let A ∈ M_n × n(ℤ) be a negative definite n× n matrix with integral entries. Let V ⊆ℤ^n be some set, then the function q : V →ℤ defined by q(x) = x^TAx for all x ∈ V has a global maximum. One simply views A as an element of A ∈ M_n × n(ℝ) and then applies the above proposition to find a supremum of the set W={x^TAx ∈ℤ| x ∈ V ⊆ℤ^n}. Since ℤ^n is a discrete subset of ℝ^n, so is V. This implies that W is a discrete subset of ℝ. Since the supremum of W exists and W is discrete, W must contain it's supremum. Thus let y^T A y = sup W, then y is the global maximum of q. There is a corresponding result and corollary for positive definite matrices. Let A ∈ M_n × n(ℝ) be a positive definite n× n matrix with real entries. Then the quadratic form q : ℝ^n →ℝ defined by q(x) = x^TAx has a global minimum. Let A ∈ M_n × n(ℤ) be a positive definite n× n matrix with integral entries. Let V ⊆ℤ^n be some set, then the function q : V →ℤ defined by q(x) = x^TAx for all x ∈ V has a global minimum.
http://arxiv.org/abs/2306.03041v1
20230605170340
Embedding Delay-Constrained VNF Forwarding Graphs into Reconfigurable WDM Optical Networks -- Extended Version
[ "Valentin Kirchner", "Holger Karl" ]
cs.NI
[ "cs.NI" ]
SERT: A Transfomer Based Model for Spatio-Temporal Sensor Data with Missing Values for Environmental Monitoring [ July 31, 2023 ================================================================================================================ Operators of reconfigurable wavelength-division multiplexed (WDM) optical networks adapt the lightpath topology to balance load and reduce transmission delays. Such an adaption generally depends on a known or estimated traffic matrix. Network function virtualization (NFV) allows to implicitly change this traffic matrix. However, these two degrees of freedom have largely been considered separately, using resources suboptimally. Especially for delay-sensitive services, an optimal use of resources can be crucial. We aim to jointly optimize the embedding of virtualized network function (VNF) forwarding graphs with delay constraints and the lightpath topology of WDM optical networks. Unlike previous work, we consider all three types of delays: propagation, processing and forwarding-induced queuing delay. We model the latter two as M/M/1 queues. We formulate and analyze a mixed-integer nonlinear program (MINLP), reformulate it as a mixed-integer quadratic constrained program (MIQCP) and approximate it by a mixed-integer linear program (MILP). We evaluate our approach for small-scale examples of a multicast service. § INTRODUCTION Classic networks route data over a fixed substrate network topology. The transmission demands can be perceived as the data rate to be provided, averaged over longer time horizons and are usually described by a traffic matrix. Providers route the traffic based on available knowledge of the traffic matrix, service-level agreements and hardware capacities. The options range from fully decentralized, traffic-oblivious solutions <cit.> to more complex, traffic-aware approaches <cit.>. Besides routing, another degree of freedom is introduced by adjusting the traffic matrix via network function virtualization (NFV). Data flows pass through certain network functions in hardware and software, e.g. a firewall, a network address translation device or a deep packet inspector. These functions might change the data rate, e.g. by dropping packets or re-encoding the payload, and take time for processing, using up some of the delay budget of a flow. With virtualization, these functions can be placed and scaled dynamically, changing the traffic patterns in the network. Exploiting this effect has been extensively studied for a fixed substrate topology <cit.>. If a network's substrate allows to reconfigure its topology, it enables a third degree of freedom. An example for such a substrate is an optically switched and wavelength-division multiplexed (WDM) network. It sends data on direct physical lightpaths between potentially non-adjacent vertices. The topology of lightpaths on top of the substrate is the lightpath topology and can be optimized for a given traffic matrix. Leveraging this degree of freedom can reduce forwarding delay, energy consumption and capital and operational expenditure by a significant factor compared to static, electrically switched networks <cit.>. However, there is a lack of studies that jointly exploit flexible VNF placement and scaling and the lightpath topology configurations in WDM optical networks. This joint approach can enable delay-critical applications that would otherwise not be possible. We illustrate that with a small example. Consider the generic optical network depicted in Figure <ref>. The network offers data transport on two wave lengths (blue and red) over 5 fibers. We assume that lightpaths between two vertices are bidirectional on exactly one wavelength and that the number of available transceivers per vertex equals its degree. These assumptions guarantee that lightpaths can be established between all physically adjacent vertices, realizing the substrate topology in Figure <ref>. Besides data transmission, the network offers the execution of VNFs f, g at vertices v_3 and v_4. For the simplicity of this particular example, we assume that f, g do not change the data rates. Hardware capacities are often heterogeneous due to an incremental evolution of the network, so we assume that the capacity of v_4 is much larger than the capacity of v_3. The bigger square around vertex v_4 in Figure <ref> illustrates that. We consider two flows: The first flow originates at vertex v_1 with destination v_5 and requests VNF f; the second flow originates at vertex v_2 with destination v_6 and requests VNF g. For both flows, we model the arrival of packets as Poisson processes with independent, exponentially distributed packet lengths. We further assume that the time it takes a VNF to process a packet as well as the time it takes a transceiver to forward a packet only depend on the packet length. Hence, we model forwarding at each optical transmitter and processing at each VNF instance as separate M/M/1 queues <cit.>. We use processing/forwarding delay to designate the sum of the processing/sending times and the associated queuing times (i.e. the sojourn times). In addition, packets experience propagation delay. In total, both flows experience three different delays each: a processing delay, a forwarding delay and a propagation delay. In this simple example, both flows experience a propagation delay independent of the VNF placement and lightpath topology configurations. This allows us to focus our further considerations on the execution and forwarding delay. First, we optimize the VNF placement for the given substrate topology, as usual. Second, we additionally allow reconfigure the lightpath topology after the VNF placement in the substrate topology. Third, we jointly optimize the VNF placement and the lightpath topology. Given the substrate topology, the placement and scaling of f and g does not change the loads of the lightpaths and only affects the processing delays. Due to the much larger computing capacity at v_4, the processing is fastest if both f and g are placed at vertex v_4. Given this embedding, we could subsequently optimize the lightpaths. But in the given example, optical transceiver constraints make it impossible to reconfigure the lightpaths. Thus, the first two approaches result in the same topology configuration and VNF placement shown in Figure <ref>. As for the third option, the lightpath between v_3 and v_5 in Figure <ref> is potentially highly utilized and queuing can cause significant forwarding delays. If we place f at v_3 instead of at v_4 we can separate the two flows using the topology depicted in Figure <ref>. Although this choice increases the execution delay of f, it can drastically decrease the forwarding delay of both flows. Only the joint optimization model can find this optimal tradeoff. We formulate the joint optimization model as an MINLP, reformulate it as a mixed-integer quadratically constrained program (MIQCP) and approximate it by an mixed-integer linear program (MILP). The problem combines three sub-problems: The placement, scaling and routing for VNF services for a given topology <cit.>; the lightpath topology design for a given traffic matrix <cit.>; and the routing and wavelength assignment (RWA) for a given lightpath topology <cit.>. All three sub-problems are difficult to solve; the nonlinear delay constraints per VNF service requests add even more complexity. This makes even small instances very hard to solve. We reformulate the problem as a non-convex MIQCP and use the highly optimized solver Gurobi <cit.>, rather than a general purpose MINLP solver. To bound the execution time of the solver, we provide an MILP approximation and evaluate it on small examples. § RELATED WORK Researchers extensively studied the problem of configuring optical networks; Nance_Hall2021-bs survey diverse research directions. Mukherjee1996-rb propose an MINLP that is related to our proposed one, but we consider delay constraints for individual services instead of minimizing the mean delay. Banerjee2000-oy study a simplified linear model and provides two heuristics. More recently, Jin2016-vd analyzed optical network configurations in emulations using heuristics; Poutievski2022-tu analyzed optical networks configurations in production using ILP formulations. Both do not consider explicit queuing delays. All of the mentioned studies rely on estimated traffic matrices and cannot be directly transferred to virtual network services. The concept of NFV raises the question of optimal resource allocation; Gil_Herrera2016-jv provide an introduction and overview. Our model of NFV requests follows the approach of Keller2014-mj and Draxler2018-hc, where NFV forwarding graphs act as templates and concrete scaling decisions are made during the optimization process. Agarwal2019-li make use of a stricter request model with a fixed forwarding graph; however, they incorporate M/M/1 queues instead of just considering propagation to describe end-to-end delays. We intend to combine the strengths of a more flexible VNF forwarding graph model with the M/M/1 queuing model in the additional context of lightpath topology configuration. There are only a few publications on joint VNF service provisioning and lightpath topology reconfiguration. All of them are in the context of routing and spectrum allocation (RSA) in elastic optical networks (EON) and focus on EON-related problems like spectrum fragmentation. Zeng2016-dk formulate an exact MILP and compare several heuristics for placement and RSA of tree-type VNF forwarding graphs with depth three. Fang2016-dm and Khatiri2022-cg provide exact ILP formulations and heuristics to embed simple VNF chains. All three papers do not consider delay constraints and work with restrictive VNF forwarding graph models. § PROBLEM FORMULATION §.§ Remarks on Notation For N ∈ with N ≥ 1 we abbreviate [N] = {1, …, N} and [N]_0 = {0, …, N}. For a directed graph = (, ) we abbreviate an edge (v, v^') ∈ by vv^'. For v ∈ we denote the neighbourhood of incoming and outgoing vertices by ^-(v) = { v^'∈ | v^' v ∈} and ^+(v) = { v^'∈ | v v^'∈}, respectively. We denote the incoming and outgoing degree by ^-(v) = |^-(v)| and ^+(v) = |^+(v)|, respectively. We denote vectors by bold lowercase letters, e.g. 𝐱 = (x_i)_i ∈ [N] = (x_1, …, x_N) ∈^N and write 𝐱≥ 0, iff x_i ≥ 0 for all i ∈ [N]. For a set A we denote its power set by 2^A. We denote the Dirac function in y ∈ by δ_y, i.e. δ_y: →, with δ_y(y) = 1 and δ_y(x) = 0 else. In the problem formulation below we will use variables with multiple indices. To ease readability, we use lower indices for information regarding the substrate optical network and upper indices for information regarding the network function chains and requests. We hope that there will be no confusion of the upper indices with powers. §.§ Network Model We model the substrate optical network as a directed and weighted graph = (, ). We refer to v ∈ as vertex and to e ∈ as edge. Every vertex v has a computing capacity c_v ≥ 0 and can host one or more VNFs f ∈ at adjustable rate. Furthermore, every vertex v has (v) many transceivers which may be used. Every edge e offers data transmission on a wavelength γ∈Γ at rate ≥ 0 that propagation delay d_e ≥ 0. We assume that lightpaths are bidirectional and that the network has no wavelength conversion capabilities. We use the M/M/1 queuing model to approximate the forwarding and VNF execution delay. In particular, we model the arrival of data at the vertices as poisson processes and assume that the execution times of VNFs and the forwarding rates are exponentially distributed. The latter follows from the assumption that packet lengths are exponentially distributed <cit.>. We model the distributions as independent. §.§ VNF Forwarding Graph We model a VNF forwarding graph as an acyclic, directed and weighted graph G = (N, A) with |N| ≥ 2. It has source nodes S = { n ∈ N | ^-(v) = 0} and destination nodes D = {n ∈ N | ^+(v) = 0}. We refer to n ∈ N as nodes and to a ∈ A as arcs to distinguish between the substrate network graph and the VNF forwarding graph G. Every non-source and non-destination node n ∈ N represents a network function f^n ∈. We denote these functional nodes by N_ = N ∖ (S ∪ D). For n ∈ N_ we approximate its resource consumption c^n ≥ 0 as affine linearly dependent on the assigned service rate μ^n ≥ 0. This takes into account that some VNF are more complex than others. In particular, we set c^n = α^n μ^n + β^n, for α^n, β^n ≥ 0. We approximate the outgoing data rate as affine linearly dependent on the incoming data rates. This takes into account that a VNF can alter the incoming date, e.g. a firewall may drop half of the packets or a server may add advertisement. More precisely, for incoming data rates ^n_in = (λ^n^n)_n^∈ N^-(n) we approximate the outgoing data rates λ^nn^≥ 0 for n^∈ N^+(n) by λ^nn^ = (^nn^)^T ^n_in + β^nn^, for ^nn^, β^nn^≥ 0. Requests of VNF forwarding graphs G for a concrete substrate can restrict the position of nodes n ∈ G. In particular, it is possible to decide the placement and rates of multiple source vertices for one source node, or the placement of multiple destinations for one destination node. Also, it is possible to decide the placement of certain VNFs in the network, e.g. if some of the functionalities need to be processed at specialized hardware boxes is the substrate. §.§ Requests A network function service request is modeled by a VNF forwarding graph G = (N,A), a maximal flow completion delay d_max≥ 0, initial data rates on all outgoing arcs of source nodes specified by ⊂_+ and possible restrictions on the positions in the substrate described by , ⊂ N ×× [0,1]. An element (n, v, λ^n_v) ∈ requires that the proportion of data flowing out of n at v is λ^n_v. Similarly, (n, v, λ^n_v) ∈ requires that the proportion of data flowing into n at v is λ^n_v. With the help of these two requirements we can determine, for example, the positions of sources and destinations in the substrate network. The set of requests is denoted by . To reduce the amount of indices, we almost always omit the dependence of G on a particular request r ∈. §.§ Objectives We consider a lexicographic order of four objectives. In the first place, we want to maximize the number of embedded requests that satisfy the delay constraints. We call these fulfilled requests, where 'fulfilled' refers to satisfied delay constraints. In the second place, we want to maximize the number of requests that do not satisfy the delay constraints but satisfy the network-capacity constraints. We call those the unfulfilled but embedded requests. In that case, we want to minimize the maximal lateness of fullfilled requests. This is our third objective. Our fourth objective is an minimal usage of the network's resources, particularly the accumulated data rates over the lightpaths, the assigned service rates, and the number of established lightpaths. Our model can easily be adjusted to maximize a notion of profit by assigning a weight to each (un-)fulfilled but embedded requests and maximizing the sum over them. § MINLP FORMULATION §.§ Variables The main decision variables are , and . The variable ∈{0, 1} decides the lightpath topology, ≥ 0 the embedding of VNF forwarding-graph arcs and ≥ 0 the service rates of VNFs. The variables , and are derived from the main variables. Here, = (x_1, x_2, x_3, x_4) holds information on the status of requests (fulfilledness, embeddedness, lateness, maximal lateness), on the VNF placement and on the usage of lightpaths. All variables except and x_4 depend on a particular request r ∈, but we omit the upper index r for readablility. More precisely, l_w, w^', e, γ∈{0,1} decides if a lightpath is set up from vertex w to w^' over the edge e ∈ on wavelength γ. The variable λ^a_v, v^', w, w^'≥ 0 decides if and how much data is routed from vertex v to v^' over a lightpath from vertex w to w^' to serve arc a. In particular, this means for n^∈ N^-(n) if λ^n^n_v, v^', w, w^' > 0, then the node n^ is placed at vertex v and node n is placed at vertex v^'. The vertices w and w^' determine the route from v to v^'. This is necessary as we might not be able to establish a direct lightpath from v to v^'. The assigned service rate to n ∈ N_ at v ∈ is decided by μ^n_v ≥ 0. The binary variables x_1, x_2 ∈{0,1} indicate if a request is fullfilled embedded or just embedded, i.e. x_1 = 1 implies x_2= 1. The lateness of a request is denoted by x_3 ≥ 0, the maximal lateness of all requests by x_4 ≥ 0. The auxiliary variables y^n_v, z^a_v, v^', w, w^'∈{0,1} indicate if an instance of VNF n is placed on vertex v and if λ^a_v, v^', w, w^' > 0, respectively. §.§ Constraints Our model allows the decision whether or not a request is embedded and where the VNFs should be served. That means that the (multi-class) traffic matrix describing demands between substrate vertices is variable. The affine linear dependency between incoming and outgoing data rates further complicates dependencies. Also, the logical topology is allowed to be a complete directed graph with undirected self-loops: the last two lower indices w, w^' of λ^a_v, v^', w, w^' can be thought of as an edge in the logical topology. This generality makes the formulation of flow conservation rules more subtle than standard formulations. Recall that we suppress dependencies on requests r ∈ for readability. The following constraints hold for all r ∈. §.§.§ Auxiliary Variables x_1 ≤x_2 x_3 ≤M ·(1 - x_1 ) x_3 ≤x_4 ∀ v ∈, n ∈ N_, n^∈ N^-(n): ∑_v^', w^'∈ λ^n^n_v^', v, w^', v ≤M ·y^n_v y^n_v ≤M ·∑_v^', w^'∈ λ^n^n_ v^', v, w^', v ∀ a ∈ A, v, v^', w, w^'∈: λ^a_v, v^', w, w^' ≤M ·z^a_v, v^', w, w^' z^a_v, v^', w, w^' ≤M ·λ^a_v, v^', w, w^'. M > 0 denotes a large enough constant. Constraints (<ref>) and (<ref>) ensure that a request cannot be fulfilled if it is not embedded or its lateness is greater than zero. Constraint (<ref>) defines x_4 as the maximal lateness over all requests. Constraints (<ref>) and (<ref>) activate the binary auxiliary variable y^n_v if vertex v is the endpoint of any route serving arc n^n. The variable is needed to add constant terms in (<ref>), (<ref>) and to compute the execution delay in (<ref>). Constraint (<ref>) and (<ref>) are used to activate the auxiliary variable . This variable is used to prohibit multiple paths in (<ref>) and to compute the propagation and forwarding delay in (<ref>), (<ref>). §.§.§ Placement and Routing Constraints ∀ s ∈ S, n ∈ N^+(s), λ^ns∈: λ^ns ·x_2 = ∑_v, v^', w^'∈ λ^sn_v, v^', v, w^' ∀ (n, v, λ^n_v) ∈: λ^n_v ∑_n^ ∈N^+(n) ∑_v^', v^'', w^'∈ λ^nn^_v^'', v^', v^'', w^' = ∑_n^ ∈N^+(n) ∑_v^', w ∈ λ^nn^_v, v^', v, w^' ∀ (n, v, λ^n_v) ∈: λ^n_v ∑_n^ ∈N^-(n) ∑_v^', v^'', w ∈ λ^n^n_v^'', v^', w, v^' = ∑_n^ ∈N^-(n) ∑_v^'', w ∈ λ^n^n_v^'', v, w, v ∀ n ∈ N ∖ S, n^∈ N^+(n), v ∈: ∑_n^ ∈N^-(n) ∑_v^', w^'∈ α^nn^_n^n ·λ^n^n_v^', v, w^', v + β^nn^ ·y^n_v = ∑_v^', w^'∈ λ^nn^_v, v^', v, w^' ∀ a ∈ A, v,v^'∈, w ∈∖{v, v^'}: 0 = ∑_w^'∈ λ^a_v, v^', w^', w - ∑_w^'∈ λ^a_v, v^', w, w^' ∀ a ∈ A, v,v^', w ∈: ∑_w^' z^a_v, v^', w, w^' ≤1 ∀ a ∈ A, v, v^', w ∈, s.t. v ≠ w v^'≠ w: λ^a_v, v^', w, w = 0 λ^a_w, w, v, v^' = 0 ∀ a ∈ A, v, v^', w ∈, s.t. v ≠ v^' : λ^a_v, v^', w, v = 0 λ^a_v, v^', v^', w. = 0 Constraint (<ref>) sets the initial data rate of embedded requests according to . Constraints (<ref>) and (<ref>) restrict the placement of nodes on vertices according to and . Constraint (<ref>) establishes the affine linear relation between incoming and outgoing data rates. But this constraint has two more purposes. On the one hand, it ensures the correct embedding of destination demands specified by (n^, v_d) ∈ if n^+ is a destination node. On the other hand, it is a flow conservation rule at the level of network function placements: it ensures that the endpoint v of an embedded network function chain arc n^n is the start point of the next network function chain arc nn^. Constraint (<ref>) is a flow conservation constraint. To obtain a unique delay on a route from v to v^' serving arc a, constraint (<ref>) enforces that this route is unique. The Constraints (<ref>)-(<ref>) prohibit useless self-loops and ensures that sources cannot be sinks and vice versa. §.§.§ Capacity Constraints ∀ v ∈: ∑_r ∈ ∑_n ∈N_ α^n ·μ^n_v + β^n ·y^n_v ≤c_v ∀ v ∈, n ∈ N_: ∑_n^ ∈N^-(n) ∑_v^', w ∈ λ^n^n_v^', v, w, v ≤μ^n_v ∀ w, w^'∈, w ≠ w^': ∑_r ∈ ∑_a ∈A ∑_v, v^'∈ v ≠v^' λ^a_v, v^', w, w^' ≤∑_γ∈Γ ∑_u ∈^+(w) ·l_w, w^', wu, γ. Constraint (<ref>) has two purposes. Firstly, it establishes the affine linear connection between the assigned service rates and the used computing resources at the substrate network vertices. Secondly, it ensures that the computational capacity of the vertices is not exceeded. Constraint (<ref>) guarantees that the assigned service rate on v to n suffices. Constraint (<ref>) connects the two decision variables and : if variable λ^a_v, v^', w, w^' > 0 indicates the usage of a lightpath from vertex w to w^', then the lightpath has to be specified in terms of variables l_w, w^', e, γ. In particular, the lightpath has to start on an edge adjacent to w. Furthermore, the total traffic over a lightpath should not exceed the maximal data rate . §.§.§ Optical Constraints ∀ w, w^'∈, u ∈∖{w, w^'}, γ∈Γ: ∑_u^'∈^-(u) l_w, w^', u^'u, γ = ∑_u^'∈^+(u) l_w, w^', u u^', γ ∀ e ∈, γ∈Γ: ∑_w, w^'∈ l_w, w^', e, γ ≤1 ∀ w, w^'∈, e ∈: ∑_γ∈Γ l_w, w^', e, γ ≤1 ∀ w, w^'∈, uu^'∈, γ∈Γ : l_w, w^', uu^', γ = l_w^', w, u^'u, γ ∀ w ∈ : ∑_w^'∈ ∑_u ∈^+(w) ∑_γ∈Γ l_w, w^', wu, γ ≤(w) ∀ w ∈, e ∈ γ∈Γ: l_w, w, e, γ = 0 ∀ w, w^'∈, u ∈^-(w), u^'∈^+(w^') γ∈Γ: l_w, w^', uw, γ = 0 l_w, w^', w^'u, γ = 0 Constraint (<ref>) is a flow conservation constraint on the level of wavelengths. Constraint (<ref>) ensures that distinct lightpaths use distinct wavelengths over the same edge. Constraint (<ref>) restricts a lightpath to one wavelength. Constraint (<ref>) ensures that lightpaths are bidirectional. Constraint (<ref>) makes sure that the number of transceivers is less than the node degree. Similar to Constraints (<ref>)-(<ref>) the constraints (<ref>)-<ref>) prohibit useless cycles. §.§.§ Delay Constraints ∀ (n)_j ∈ [J]∈ P, ∈^J: ∑_j ∈[J-1] ∑_w, w^'∈ z^n_j n_j+1_v_j, v_j+1, w, w^' ·ψ_w, w^' + ∑_j ∈[J-1] ∑_w, w^'∈ w ≠w^' z^n_j n_j+1_v_j, v_j+1, w, w^' ·φ_w, w^' + ∑_j ∈[J-2] y^n_j+1_v_j+1 ·ρ^n_j+1_v_j+1 ≤d_max + x_3, where ψ_w, w^' := ∑_e ∈ ∑_γ∈Γ d_e · l_w, w^', e, γ φ_w, w^' := ( - ∑_r∈ ∑_a ∈ A ∑_v, v^'∈λ^a_v, v^', w, w^')^-1 ρ^n_v := ( μ^n_v - ∑_n^∈ N^-(n) ∑_v^', w ∈λ^n^n_v^', v, w, v)^-1. There might be multiple paths p ∈ P from source to destination nodes in G. As we allow to scale out each VNF, we need to bound the flow completion delay of any possible embedding of p into . This is formalized in (<ref>) - (<ref>). More precisely, the constraint ensures that the total delay of any flow on the path (n_1, …, n_J) ∈ P embedded onto the vertices (v_1, …, v_J) ∈^J does not exceed the sum of the delay bound d_max and the lateness x_3. The total delay of such a flow is given by the sum of the propagation, forwarding and execution delays. The propagation delay (<ref>) is the sum of lightpath propagation delays. The forwarding (<ref>) and execution (<ref>) delays are given by the sojourn time of a M/M/1 queue. §.§ Objective Function We split the overall objective function into four partial objective functions with lexicographic order. Our main objective is to maximize the number of fulfilled requests o_1 = ∑_r ∈ x^r_1. The secondary objective is to maximize the number of fulfilled and unfulfilled but embedded requests o_2 = ∑_r ∈ x^r_2. Third, we aim to minimize the maximal lateness of all embedded requests o_3 = x_4. Our fourth objective is to minimize the weighted sum of the number of established lightpaths weighted by their propagation delay, the accumulated data rates over the configured substrate network and the accumulated service rates of the VNFs o_4 = c_1 o_path + c_2 o_data + c_3 o_proc, where c_1, c_2, c_3 ≥ 0. Here, we set o_path = ∑_w, w^'∈∑_e ∈∑_γ∈Γ d_e · l_w, w^', e, γ o_data = ∑_r ∈ ∑_a ∈ A( ∑_v, v^', w, w^'∈λ^a_v, v^', w, w^' - 1/2∑_v ∈λ^a_v, v, v, v) o_proc = ∑_n ∈ N_ ∑_v ∈μ^n_v In the definition of o_data, we scale down variables of the form λ^a_v,v, v, v by 1/2 as they correspond to embeddings of arcs that do not generate any traffic in the substrate network. The total objective is the minimization of o = C_3o_3 + C_4o_4 -C_1o_1 - C_2o_2 for C_i ≥ 0, i∈ [4]. Here, C_i should be choosen such that the o_i satisfy a lexicographic order with o_i+1≤ o_i. §.§ Complexity First, we look at the space complexity of the problem. We denote the maximal number of arcs in any NFV Forwarding Graph request by m = max_|A| and obtain that the size of variable is in (m||||^4). This is not surprising, as we need to decide for any request r ∈ and arc a ∈ A a start and endpoint v, v^'∈ and a route in the complete graph over with 2||^2 edges. The size of variable is in (||^2 || |Γ|), as we need to decide for any lightpath between w, w^'∈ the routing and wavelength assignment. If we drop the Delay constraints (<ref>)-(<ref>) and assume that |Γ| ≤ m|| (more wavelengths than arcs to embed are unnecessary), we obtain a number of constraints in (m||||^3). The delay constraints strongly depend on the length j of the longest path from any source to any destination node in a requested VNF forwarding graph. Denoting p = max_ |P| we then obtain that the delay constraints are in (p ||^j). Since the problem contains two NP-complete problems as a sub-problem (placement, scaling and routing of VNFs <cit.> and RWA <cit.>) it is NP-complete. §.§ MIQCP Reformulation As the MINLP has a large space and time complexity, even small instances are difficult to handle with general mixed-integer, non-linear and non-convex solvers. Therefore, we reformulate the problem as a non-convex MIQCP for which specialized solvers like Gurobi <cit.> are available. (MIQCP Reformulation) We can reformulate the MINLP as an equivalent MIQCP with the same space complexity up to a constant factor. We replace the rational terms of the form (μ - λ)^-1 in the delay constraints (<ref>), (<ref>) by auxiliary variables: for (<ref>), we introduce the auxiliary variables with constraints ∀ w, w^'∈, w ≠ w^': 0 ≤η_w, w^' 1 + ∑_r∈ ∑_a ∈A ∑_v, v^'∈ λ^a_v, v^', w, w^' ·η_w, w^' ≤·η_w, w^', for (<ref>) we introduce the auxiliary variables with constraints ∀ n ∈ N_, v ∈: 0 ≤θ^n_v y^n_v + ∑_n^ ∈N^-(n) ∑_v^', w ∈ λ^n^n_v^', v, w, v ·θ^n_v ≤μ^n_v ·θ^n_v. We can then rewrite the delay constraints (<ref>)-(<ref>) as quadratic constraints: ∀ (n)_j ∈ [J]∈ P, ∈^J: ∑_j ∈[J-1] ∑_w, w^'∈ ∑_e ∈ ∑_γ∈Γ d_e ·z^n_j n_j+1_v_j, v_j+1, w, w^' ·l_w, w^', e, γ + ∑_j ∈[J-1] ∑_w, w^'∈ w ≠w^' z^n_j n_j+1_v_j, v_j+1, w, w^' ·η_w, w^' + ∑_j ∈[J-2] y^n_j+1_v_j+1 ·θ^v_j+1_n_j+1 ≤d_max + x_3. The number of added variables and constraints does not alter the space-complexity classes determined in Section <ref>. § MILP APPROXIMATION A significant part of the problem's time complexity arises from the nonlinear terms in the delay constraints (<ref>)-(<ref>). We propose two steps to reduce this complexity. First, we only allow to establish lightpaths on a fixed given route, e.g. the shortest path with respect to the propagation delay. The quadratic expression in (<ref>) then simplifies to a linear one. Second, we approximate the nonlinear terms in (<ref>) and (<ref>) by piecewise-linear functions. In total we obtain an MILP formulation that approximates the exact MINLP formulation. §.§ Wavelength Assignment Only We see two possibilities for restricting the route on which lightpaths can be established: we could either extend the above model by additional constraints or reduce it accordingly. For completeness, we state our reduction in Appendix <ref>. §.§ Piecewise-Linear Approximation For Ω⊂^2_++ let g: (0, ∞) →, x ↦ x^-1, h: Ω→_, (x, y) ↦ g(x) y and h(0,0) := 0. The nonlinear terms in (<ref>) and (<ref>) are of the form h(μ - λ, y) for 0 ≤λ≤μ and y ∈{0, 1}. Our goal is to find a piecewise (affine) linear function h̃ for the relaxation y ∈ [0,1] such that h̃ approximates h for y ∈{0, 1}. All service rates μ are bounded from above by some constant E > 0: either μ is the constant forwarding rate of the transceivers, or μ is the variable service rate μ^n_v and due to the vertex capacity constraint (<ref>) bounded by (c_v - β^n)/α^n. Since μ - λ is not naturally bounded from below, we introduce an artificial lower bound > 0 and demand that < μ - λ if μ >0, i.e. we bound the proportional utilization of forwarding and processing capabilities by λ / μ∈ [0, 1 - / μ]. Thus allows us to restrict g to the interval [, E] on which the function is bounded. First, we approximate g by a piece-wise affine linear function on the interval [, E] for ∈ (0,E), see Figure <ref>. Assume we are given a partition Π = {π_k}_k ∈ [K]_0 of the interval, i.e. π_0 =, π_k-1 < π_k for k ∈ [K] and x_K = E. Any point x ∈ [, E] can be expressed as a convex combination of two consecutive points in Π, that is x = ξ π_k-1 + (1 - ξ) π_k for appropriate ξ∈ [0,1], k ∈ [K]. We approximate g by on [π_k-1, π_k] by the linear segment between g(π_k-1) and g(π_k), shifted by a constant c ≥ 0: (x) = ξ (g(π_k-1) + c ) + (1 - ξ) ( g(π_k) + c ) = ξ ( π_k-1^-1 + c ) + (1 - ξ) ( π_k^-1 + c ). An algorithmic approach to find a near optimal partition Π and constant c for a bounded convex function and given cardinality K can be found in <cit.>. Second, we approximate h on Ω = {α (x, y) | α∈ [0,1], x ∈ [, E], y ∈ [0, 1] } by piece-wise affine linear segments, see Figure <ref>. Let = {}_k ∈ [K+1]_0⊂Ω with _k = (π_k, 1) for k ∈ [K]_0 and _K+1 = (π_K, 0). Any point ω∈Ω can be expressed as a conic combination of two consecutive points in , that is ω = ξ_k-1 _k-1 + ξ_k _k for appropriate k ∈ [K+1], 0≤, ξ_k-1 + ξ_k≤ 1. For k ∈ [K] we approximate h(ω) by h̃(ω) = ξ_k-1 ( π_k-1^-1 + c ) + ξ_k ( π_k^-1 + c ), for k = K + 1 by h̃(ω) = ξ_K (π_K^-1 + c). We add variables (ξ_k)_k ∈ [K+1]_0∈ [0,1]^K+2 for each queue which represent the conic variables (<ref>), (<ref>). Since at most two consecutive entries can be non-zero, (ξ_k)_k ∈ [K+1]_0 lies in the special ordered set of type 2 denoted by <cit.>. This membership is implemented by state-of-the-art MILP solvers like Gurobi <cit.>. For completeness, we state the additional variables and constraints in Appendix <ref>. § EVALUATION We evaluate the topology reconfiguration by considering the exact MIQCP and the approximative MILP formulation with reconfiguration as stated in Section <ref> and <ref> and without reconfiguration with additional constraints fixingx the substrate topology. We demonstrate that a reconfiguration can reduce the end-to-end delay of services, and that our MILP formulation approximates the original formulation well, while bounding the execution time. We consider a multicast request with one source, three destinations and a VNF f in between. We assume that the substrate has two vertices for processing with different capacity and that source, destinations and computing vertices are distinct. We evaluate our models on three topologies with six vertices: a path, a barbell and a cycle, see Figure <ref>. To determine which delay bound can be satisfied for the request, we only consider two of our four objectives: we maximize the number of embedded requests and minimize the lateness. Since we set the requested delay bound to zero, the lateness equals the end-to-end delay of the request. For each topology we evaluated the 6·5·4 = 120 possibilities of assigning two different processing capacities and the source of the multicast request to three distinct vertices, using the remaining three vertices as the multicast's destinations. The models are described via Pyomo <cit.> and solved with Gurobi <cit.> on a machine with 8 cores and 16GB RAM. Table <ref> summarizes the parameters of the setup. To apply the MILP approximation we need to specify bounds μ - λ∈ [, E], where μ is the service rate and λ is the arrival rate of each active forwarding/processing queue, respectively. In our evaluation example, we focus on the lateness of the request and don't minimize the resource consumption. For this reason, the processing capacities of a node are either not, or fully assigned to the request. Since λ≤ 3, we obtain bounds [1, 4] for the forwarding queues, [2, 5] for the processing vertex with smaller capacity and [47, 50] for the vertex with larger capacity. We choose six base points for [1, 4], four base points for for [2, 5] and 2 base points (i.e. just the upper and the lower bound) for [47, 50] as these choices lead to an approximation error smaller than 0.01 per queue. The results of our evaluation are shown in Figure <ref>. As we expected, the reconfiguration of the lightpath topology can decrease the end-to-end delay: here, by a factor up to 2.8. However, the gain depends on the substrate topology, available transceivers on the vertices and the distribution of ingresses, egresses & processing capabilities. The MILP formulation approximate the exact MIQCP well; in 80% of our evaluated scenarios the proportion of the absolute difference of the computed minimal latenesses and the exact MIQCP lateness was smaller than 0.01, see Figure <ref>. Moreover, the MILP approximation bounded the execution time by ∼ 100s. In contrast, solving the MIQCP took in 15% of our evaluated scenarios more than 1000s and exceeded our processing bound of 1h in more than 5%. Nevertheless, the approximation could not always decrease the execution time, compare Figure <ref>. § SUMMARY AND OUTLOOK We showed that a reconfiguration of the lightpath topology in the context of NFV can decrease the end-to-end delay of services. Solving the large optimization problems in practice will be challenging. However, our work can be a base for reducing the model in order to obtain smaller MIQCP/MILP instances or serve as a baseline for heuristics. For completeness and as a reference for implementation we add the details of the MILP approximation. §.§ Wavelength Assignment Only Formulation For completeness, we describe our adoption of the problem. For all w, w^'∈ we denote the unique given path from w to w^' with length J by p_w, w^'∈^J + 1 and its associated propagation delay by d_w, w^'≥ 0. After deciding which lightpath to establish, we no longer have to decide the routing, but are still left with the wavelength assignment; instead of considering binary variables l_w, w^', e, γ∈{0, 1} we consider l_w, w^', γ∈{0, 1} to indicate a lightpath establishment from w to w^' on γ∈Γ. Here, we express that e ∈ is used in path p_w, w^' by writing e ⊏ p_w, w^'. ∀ w, w^'∈: ∑_r ∈ ∑_a ∈A ∑_v, v^'∈ v ≠v^' λ^a_v, v^', w, w^' ≤∑_γ∈Γ ·l_w, w^', γ ∑_γ∈Γ l_w, w^', γ ≤1 ∀ e ∈, γ∈Γ: ∑_w, w^'∈ e ⊏p_w, w^' l_w, w^', γ ≤1 ∀ w, w^'∈, γ∈Γ: l_w, w^', γ = l_w^', w, γ ∀ w ∈: ∑_w^'∈ ∑_γ∈Γ l_w, w^', γ ≤(w) The propagation delays (<ref>) in the delay constraints reduce to the linear term ∑_j ∈ [J-1]∑_w, w^'∈ d_w, w^'· z^n_j n_j+1_v_j, v_j+1, w, w^'. In the objective function we set o_path = ∑_w, w^'∈ ∑_γ∈Γ d_w, w^'· l_w, w^', γ. §.§ Piecewise Linear Approximation For simplicity, we assume that the lower and upper bounds 0 < < E and the partitions Π⊂ [, E] containing K+1 base points are the same for all queuing terms. It is straight forward to use different parameters for every queue. In that case, some caution is needed to deal with vertices where E <=, e.g. for vertices with no processing capacity at all. We replace the utilization constraints (<ref>), (<ref>) by ∀ v ∈ n ∈ N_: ·y^n_v ≤μ^n_v - ∑_n^ ∈N^-(n) ∑_v^', w ∈ λ^n^n_v^', v, w, v ∀ w, w^'∈, w ≠ w^': ∑_γ∈Γ l_w, w^', γ ≤∑_γ∈Γ ·l_w, w^', γ - ∑_r ∈ ∑_a ∈A ∑_v, v^'∈ v ≠v^' λ^a_v, v^', w, w^', set π_K = π_K+1 and add the the constraints ∀ n ∈ N_, a ∈ A, v, v^', w, w^'∈, k ∈ [K+1]_0 : ξ^n_v, k, ξ^a_v, v^', w, w^', k ∈[0,1] (ξ^n_v, k)_k ∈[K+1]_0 ∈ (ξ^a_v, v^', w, w^', k)_k ∈[K+1]_0 ∈ ∀ n ∈ N_, v ∈: y^n_v = ∑_k∈[K]_0 ξ^n_v, k μ^n_v - ∑_n^ ∈N^-(n) ∑_v^', w ∈ λ^n^n_v^', v, w, v = ∑_k∈[K+1]_0 π_k ·ξ^n_v, k ∀ a ∈ A, v, v^', w, w^'∈: z^a_v, v^', w, w^' = ∑_k∈[K]_0 ξ^a_v, v^', w, w^', k - ∑_r^'∈ ∑_a^'∈A ∑_u, u^'∈ λ^a^'_u, u^', w, w^' = ∑_k∈[K+1]_0 π_k ·ξ^a_v, v^', w, w^', k. The sum of the forwarding and processing delays (<ref>), (<ref>) is then approximated by ∑_j ∈ [J-1] ∑_w, w^'∈ w ≠ w^' ∑_k ∈ [K]_0 (π_k^-1 + c ) ·ξ^n_jn_j+1_v_j, v_j+1, w, w^', k + ∑_j ∈ [J-2] ∑_k ∈ [K]_0( π_k^-1 + c ) ·ξ^n_j+1_v_j+1, k.
http://arxiv.org/abs/2306.06650v2
20230611111302
Scanning NV magnetometry of focused-electron-beam-deposited cobalt nanomagnets
[ "Liza Žaper", "Peter Rickhaus", "Marcus Wyss", "Boris Gross", "Martino Poggio", "Floris Braakman" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "quant-ph" ]
Focused-electron-beam-induced deposition is a promising technique for patterning nanomagnets for spin qubit control in a single step. We fabricate cobalt nanomagnets in such a process, obtaining cobalt contents and saturation magnetizations comparable to or higher than those typically obtained using electron-beam lithography. We characterize the nanomagnets using transmission electron microscopy and image their stray magnetic field using scanning NV magnetometry, finding good agreement with micromagnetic simulations. The magnetometry reveals the presence of magnetic domains and halo side-deposits, which are common for this fabrication technique. Finally, we estimate dephasing times for electron spin qubits in the presence of disordered stray fields due to these side-deposits. Predicting Software Performance with Divide-and-Learn Tao Chen ===================================================== Nanomagnets with precisely defined geometries are of interest for a variety of applications, including magnetic resonance force microscopy<cit.>, as mediating elements between spins and mechanical degrees of freedom<cit.>, magnetic memories<cit.>, and for the implementation of quantum logic with spin-based qubits<cit.> such as electron spins confined in quantum dots. Such electron spin qubits can be controlled and manipulated using high-frequency voltages applied to metallic gates<cit.>, and selective spin rotation can be implemented by periodically displacing the electron wave function inside a magnetic field gradient resulting from a nearby nanomagnet <cit.>. Recent experiments have shown successful operation in fault-tolerant regimes with gate fidelities above the required thresholds<cit.>. Realizing fast spin rotation, while at the same time keeping dephasing and relaxation rates acceptably low, requires precise engineering of strong magnetic field gradients. This places stringent constraints on the geometry, relative location and alignment, and magnetic properties of the used nanomagnets<cit.>. Furthermore, when scaling up to larger qubit arrays, the variability between a large number of individual nanomagnets will need to be characterized and minimized. Spatial characterization of nanomagnet stray fields is therefore important in order to facilitate qubit device fabrication, precise positioning of quantum dots relative to the nanomagnet, and to correctly assess and minimize qubit decohering mechanisms. Typically, nanomagnets are patterned using a multi-step procedure, involving resist-coating, electron-beam lithography, metallization, and lift-off. Such a procedure is prone to introducing impurities in the devices due to residual resist particles, as well as to introducing possible misalignment. Furthermore, such techniques are limited to fabrication of 2D patterns. Here, we use focused-electron-beam-induced deposition (FEBID) to pattern Co nanomagnets in a single step<cit.>. FEBID is an appealing technique for the fabrication of nanomagnets integrated in qubit devices, since it generates no impurities in the form of residual resist, eases fabrication due to its single-step nature, and allows for the fabrication of 3D geometries<cit.>, opening up new ways of engineering magnetic gradients optimized for spin qubit control. FEBID of Co has been demonstrated as a reliable technique for growing highly magnetic nanostructures, reaching Co content of up to ∼96 atomic percent of bulk values<cit.>. FEBID also allows for patterning with lateral resolution in the nm range<cit.>, approaching the intrinsic limit of the process imposed by the electron beam diameter<cit.>. For Co nanostructures, lateral resolutions of below 30 have thus far been achieved<cit.>. We characterize the properties of FEBID nanomagnets using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), high-resolution energy dispersive spectroscopy (EDS) analysis, and atomic force microscopy (AFM). Next, We use scanning NV magnetometry (SNVM)<cit.> to image the magnetic stray field of the Co deposits, both at externally applied magnetic field sufficiently high to achieve magnetization saturation, and at zero field. We find good agreement of our measurements with micromagnetic simulations. From our SNVM measurements of the disordered magnetic stray field of unintended deposits surrounding the nanomagnet, we estimate spin qubit dephasing times in the presence of charge noise. The sketch in Fig.<ref>a illustrates the working principle of our FEBID<cit.> fabrication technique. First, a precursor molecule containing cobalt, Co_2(CO)_8, is introduced inside a chamber pumped to high vacuum. Irradiating the precursor with an electron beam causes it to decompose, leaving Co deposits on a nearby sample substrate<cit.>. By directing the beam using a scanning electron microscope, this technique can be used to, in a single step, pattern Co nanodeposits with high resolution<cit.>. We use a Thermo Fisher FEI Helios 650 NanoLab FIB/SEM, fitted with a Co_2(CO)_8 gas injection system. The nanomagnets are patterned on the top surface of a Si substrate covered with 290 of thermally grown SiO_2. We fabricate the nanomagnets with a nanowire (NW) shape, in order to obtain a magnetic configuration with a single magnetic easy axis, enabling simple alignment of our scanning probe and straightforward comparison to simulations. To achieve high Co content and high lateral resolution, we used the following FEBID parameters<cit.>: an acceleration voltage of 10, a beam current of 3.2, a dwell time of 1, and a precursor flux corresponding to a vacuum chamber pressure of 4e-6. Using these settings, we have achieved patterning Co structures with lateral widths down to 50 and heights down to 15nm, as measured via AFM (See Supporting Information). After FEBID fabrication, we use scanning and transmission electron microscopy (SEM and TEM) to characterize the geometry and composition of representative nanomagnets. Fig.<ref>b shows an SEM top-view image of a Co NW deposit (dimensions: 14.4 length, 230 width, and 130 height) and Fig.<ref>c shows an HAADF-STEM image of a cross-section of such a deposit. In Fig.<ref>c, the rounded cross-section of the Co NW can be discerned, as well as "halo" side-deposits of nm thickness extending laterally for several microns. EDS mapping along the linecut indicated in Fig.<ref>c reveals a composition consisting mostly of Co (82 ± 2.5 %), with additional smaller amounts of C (14 ± 2.5 %) and O (4 ± 2.5 %) (see Fig.<ref>d). We find that this composition is rather uniform throughout the deposit, including similar proportions in the halo side-deposits (see Supporting Information for additional EDS data). The halo is commonly deposited as a side-effect in FEBID, produced through precursor dissociation by secondary electrons scattering off the substrate and the pattern that is being grown<cit.>. Such halo deposits are typically undesirable and various approaches can be used to mitigate their formation. The amount of halo and its composition can vary depending on the deposition parameters, in particular the exact amount of precursor gas present in the chamber. Furthermore, by performing FEBID at low temperatures, halo effects can potentially be reduced. Finally, the halo can in principle be removed by means of argon ion milling (see Supporting Information), although at the same time a layer of the intended deposited structure and surrounding substrate may be removed and charge defects may be introduced into the device. Compared to other scanning probe magnetometry techniques<cit.>, such as scanning superconducting quantum interference device (SQUID) magnetometry and magnetic force microscopy (MFM), SNVM<cit.> offers several advantages which make it suitable for our use. Of particular relevance for our application is the high spatial resolution that can be achieved with SNVM, which can reach 1525<cit.>, making it possible to image magnetic fields and currents at length scales relevant for spin qubit devices. Also, SNVM yields quantitative measurements of the magnetic fields as the Zeeman energy of a single NV-center defect can be probed directly. Furthermore, due to its high magnetic field sensitivity on the order of μT/√(Hz), SNVM allows imaging the weak fields associated with nanoscale magnetic domains <cit.> and other spatially inhomogeneous magnetic stray fields, making it a useful tool to study the magnetization properties of FEBID Co halo structures and their impact on spin qubit performance. Fig.<ref>a illustrates the SNVM setup employed here: a commercial system (Qnami ProteusQ) operating under ambient conditions. We use a diamond cantilever (Qnami Quantilever MX) hosting a single negatively charged NV center embedded inside its protruding tip. The cantilever is attached to a quartz Akiyama tuning fork, allowing for frequency-modulated AFM. For our measurements we use diamond tips hosting an NV center with a spin quantization axis oriented parallel to the principal axis of the NW magnet, i.e. along the x-axis as defined in Fig.<ref>a. This type of diamond tips are fabricated from 110 diamond blankets<cit.>. We estimate the distance d_NV of the NV center to the apex of the diamond tips to be []3050<cit.>, and corresponding best achievable lateral spatial resolutions of 0.86· d_NV. During the measurements, an external magnetic field B_ext is applied along the NV quantization axis. This direction coincides with the principal NW axis and its easy magnetic axis. To perform magnetometry, we employ measurements of optically detected electron spin resonance as well as of fluorescence. See e.g. Celano et al.<cit.> for a more in-depth description of the SNVM setup and measurement techniques used here. Fig.<ref>b shows an SNVM scan of a Co NW nanomagnet, taken with B_ext = 202.5, which falls within the typical operating range of spin qubits. The scan is taken with a tip-sample distance < 5, and consequently the magnetometry measurements are taken at a distance ∼ d_NV from the sample surface. The scan shows the x-component of the magnetic stray field of the nanowire-shaped magnet, revealing a pole at each end of the magnet. At this value of B_ext, the nanomagnet is almost fully saturated along its magnetic easy axis. The associated stray field profile features large regions surrounding the nanomagnet where field components transverse to the quantization axis of the NV center are small. In these regions, relatively little quenching of NV fluorescence<cit.> occurs and it is straightforward to reconstruct the x-components of the stray field from the SNVM measurements. Even so, we blacked out regions in Fig.<ref>b where we could not reliably track the Zeeman splitting of the NV center. This can occur when the magnetic stray field is too large, transverse components are too large, or when the optical read-out signal is quenched. Especially at the ends of the NW, we expect strong out-of-plane stray field components. These out-of-plane fields are transverse to the NV axis and lead to a quenching of the NV signal<cit.>. We compare the SNVM measurement with finite-element simulations of the x-component of the stray field in the same area around the NW (see Fig.<ref>b, lower panel), using the software package MuMax3.<cit.> Here, we simulate the stray field of a rectangular Co box geometry with a width of 250, height of 130, and a length of 14.4. We use a typical value of the exchange constant for Co, A_ex = 14· 10^-12J/m, and a 5x5x5 nm^3 cell size. Using this model, we obtain a B_x stray field profile that qualitatively agrees well with the experiment, as shown in Fig.<ref>b. Fig.<ref>c shows plots of vertical and horizontal linecuts taken at the corresponding lines shown in Fig.<ref>b. We find best agreement between simulation and experiment when we use a saturation magnetization of M_s = 1.2· 10^6A/m in the simulation. Such a saturation magnetization corresponds to 85% of the bulk value, agreeing well with the atomic fraction of Co measured in our deposit (Fig.<ref>d). We note that exactly aligning the simulation with the experimental data in the xy-plane is to some degree hindered by imperfect knowledge of the precise location of the NV center inside the scanning tip, as well as by the pixel size of the scan. Some deviations between simulation and experiment may also result from the fact that in the simulation we do not take the rounded shape of the NW or the halo into account. We further investigate the presence of magnetic structures of characteristic sizes of 50200. Such dimensions are of the same order of magnitude as the length scales relevant for spin qubit devices, such as the typical dimensions of quantum dots, confinement gate electrodes, nanomagnets, and coupling elements<cit.>. Moreover, also tunneling lengths and typical wave function displacements are of a similar order of magnitude. Of particular relevance for spin qubits are unintended variations of the magnetic stray field on short length scales. In the presence of small displacements of the electron wave function, such variations can translate to magnetic noise, which can limit qubit decoherence <cit.>. In the Co nanomagnet devices shown here, we find small magnetic structures in the form of magnetic domains inside the NW at low B_ext, as well as grain-like stray fields produced by the halo deposits surrounding the NW. We investigate these smaller structures in a Hall bar device consisting of 3 crossing Co NWs fabricated through FEBID using similar parameters as before, see Fig.<ref>a. Fig.<ref>b shows SNVM scans of a part of one of the Co NWs, in the region delineated in Fig.<ref>a. In the upper panel of Fig.<ref>b, SNVM data taken at B_ext = 240 is shown, at which field the magnetization of the horizontal NW section is saturated, resulting in a homogeneous stray field above the NW. In the lower panel in Fig.<ref>b, SNVM of the same section is shown at B_ext = 13. In this case, multiple domains of characteristic size of several hundred nanometer can be discerned in the observed stray field of the magnet including its halo. Next, we use SNVM to study the halo side-deposits in more detail. Figs.<ref>a, and b show optical microscopy and SNVM images, respectively, of a Co Hall bar structure patterned via FEBID, which exhibits a significant halo side deposit. As can be seen, the halo can be distinguished as a dark shade of inhomogeneous shape in the optical microscopy image. The SNVM image of the same area further reveals that the halo presents a magnetic stray field of grainy composition, see Fig.<ref>b (see Supporting Information for a magnified figure). The grainy pattern follows the same shape as the dark shade discernible in Fig.<ref>a: it surrounds the intended deposit and becomes smooth further away from the deposit. We investigate the size distribution of the grainy structures, using a segmentation analysis (Gwyddion) on a subset of the data shown in Fig.<ref>b. We find a typical equivalent square side a_eq of roughly 100, larger than the pixel size of 50x50 nm of the scan. Furthermore, we find associated stray field fluctuations of up to 3 mT. We estimate the effect of such magnetic stray field fluctuations on the dephasing of electron spin qubits when placed inside the stray field of Fig.<ref>b. Specifically, we consider dephasing as a result of spin qubit displacements inside the halo stray field due to charge noise. Typical rms displacement amplitudes of QD electron spin qubits in this scenario are 110<cit.>, along x and y. Out-of-plane displacements are typically negligible for quantum well or MOS quantum dots, since the confinement potential in this direction is much larger than in the xy-plane. Since such displacements are orders of magnitude smaller than the grain size of the halo stray field, we can restrict our analysis to using the first derivative at each point, and neglect high-order derivatives of the stray field. Taking the spin qubit quantization axis to be parallel to x, the stray field derivatives that are relevant for dephasing are therefore dB_x/dx and dB_x/dy. By differentiating the scan of Fig.<ref>b with respect to x and y, we find that both dB_x/dx and dB_x/dy are largest for positions near the intended Co Hall bar deposit (top left and right corners, slightly above bottom left and right corners of plot in Fig.<ref>b), but do not exceed 425μT/nm at any point of the scan. Using these derivatives of the stray field, we can estimate the inhomogeneous dephasing time T_2^* of a spin qubit placed inside the grainy stray field induced by the halo. For each point (x,y) of the scan of Fig.<ref>b, we calculate T_2^*(x,y) using T_2^* = ∑_i(2π√(2)·μ_B/ħ· dB_x/di·Δ i)^-1, with i∈x,y. Here we use Δ x = Δ y = 10<cit.>, an electron spin Landé g-factor of 2, and we assume a quasi-static 1/f-like spectral density of the charge noise<cit.>. Fig.<ref>c shows the corresponding map of T_2^*(x,y). In this case, T_2^*(x,y) decreases from several hundreds of μs on the bottom-right side of the scan, where almost no halo is present, to roughly 10 in the top-left of the scan, where the halo is most intense. We find that T_2^* exceeds 1 for each point of the scan. Note that the contours in the plot of Fig.<ref>c have been obtained by smoothing the data. While these contours indicate the trend of decreasing T_2^* when approaching the Co deposit, the grainy pattern visible in the colorplot of Fig.<ref>c originates from the disordered halo stray field, and hence should not be ignored. The estimated T_2^*(x,y) shown in Fig.<ref>e are on par with those found for various kinds of high-quality spin qubits in Si- and Ge-based quantum dots<cit.>, indicating that the spatially inhomogeneous stray fields of the halo side-deposits need not limit coherence more than other factors, such as charge noise in the presence of strong intended field gradients or spin-orbit coupling, and hyperfine interactions. In future work, we aim to characterize also the time-dependent magnetic noise originating from the halo and evaluate its impact on spin qubits. from Our TEM and SNVM characterization show that our FEBID structures have Co content and saturation magnetization comparable or higher than what is typically obtained using Co evaporation and standard electron beam lithography patterning. Moreover, past results have shown that depositions of Co content in excess of 95 atomic percent can be obtained using FEBID. Such high Co contents, in combination with the ability of FEBID to produce 3D magnet geometries would enable further optimization of nanomagnets for spin qubit control. Finally, future research may target cryo-FEBID for the patterning of magnetic nanostructures on sensitive spin qubit devices, since it allows to pattern deposits with electron doses of order 10^3 μC/cm^2, which is ∼10^4 times less than needed for FEBID at room temperature<cit.>, and similar to what is used in electron-beam exposure of resists. Hence, it can be expected that sample damage due to electron irradiation is comparable for cryo-FEBID and resist-based electron-beam lithography techniques. We thank Prof. José María De Teresa, Prof. Patrick Maletinsky, and Dr. Monica Schönenberger for useful discussions and assisting with the AFM measurements. Calculations were performed at sciCORE (<http://scicore.unibas.ch>) scientific computing center at University of Basel. We acknowledge funding from the Swiss National Science Foundation via NCCR SPIN as well as Project grant 200020207933. *
http://arxiv.org/abs/2306.04398v1
20230607125844
Weak-Valued Correlation Functions: Insights and Precise Readout Strategies
[ "Yuan Feng", "Xi Chen", "Yongcheng Ding" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech", "hep-th" ]
ifundefinedtextcolor
http://arxiv.org/abs/2306.09808v2
20230616124644
Motivic homotopy theory of the classifying stack of finite groups of Lie type
[ "Can Yaylali" ]
math.AG
[ "math.AG", "math.KT", "math.RT", "14C15 (Primary) 20C33, 14A20 (Secondary)" ]
Amortized Inference for Gaussian Process Hyperparameters of Structured Kernels (Supplementary Material) Aldo Lipani ======================================================================================================= Let G be a reductive group over 𝔽_p with associated finite group of Lie type G^F. Let T be a maximal torus contained inside a Borel B of G. We relate the (rational) Tate motives of G^F with the T-equivariant Tate motives of the flag variety G/B.On the way, we show that for a reductive group G over a field k, with maximal Torus T and absolute Weyl group W, acting on a smooth k-scheme X, we have an isomorphism A^n_G(X,m)_ℚ≅ A^n_T(X,m)_ℚ^W extending the classical result of Edidin-Graham to higher equivariant Chow groups. § INTRODUCTION Let G be a reductive group over _q, a finite field of characteristic p>0, and φ G→ G the q-Frobenius. Then G acts on itself via φ-conjugation, i.e. (g,h)↦ ghφ(g)^-1. The stabilizer of the neutral element is denoted by G^F. If _q denotes an algebraic closure of _q, then G^F(_q)=G(_q) is a finite group of Lie-type. The representation of finite groups of Lie type over fields of characteristic 0 was studied by Deligne and Lusztig (cf. <cit.>). In their article they construct representations of G(_q) by an action on the ℓ-adic cohomology of certain varieties, for ℓ≠ p. Roughly, the varieties in question are constructed by intersection of Bruhat strata and graph of Frobenius.Let us fix a Borel B of G.[Any reductive group over a finite field is quasi-split and thus admits a Borel.] The Bruhat strata of G/B are induced by the Bruhat decomposition of G via pullback along G/B→ G/B×^GG/B≅B G/B. In this article, we want to analyze the the cohomological connection between G^F and B G/B, i.e. study how their motivic categories are related.The derived category of ℓ-adic sheaves D(G^F,_ℓ), for ℓ≠ p, encodes information about the action of G^F on ℓ-adic cohomology. One can show that G^F≅G/_φG, where G acts on itself via φ-conjugation. The restriction of the φ-conjugation to B yields an adjunction D(G/_φG,_ℓ)[r,"",shift left = 0.3em] [l,"",shift left = 0.3em]D(G/_φB,_ℓ). On the other hand there is an adjunction D(G/_φB,_ℓ)[r,"",shift left = 0.3em] [l,"",shift left = 0.3em]D_B(G/B,_ℓ), which is induced via the graph of φ. Thus, it seems natural that the study of these two adjunctions should lead to information about the geometric representation theory of G^F and connection to the classical theory of Deligne-Lusztig.Instead of rewriting the theory of Deligne-Lusztig in the derived setting, we want to understand the adjunctions above in the motivic setting with rational coefficients. The idea is that first after ℓ-adic realization, we get the classical situation back and further this could lead to information about the -representations of G^F, as we are naturally working with rational coefficients. §.§ Motives and connection the representation theory Motives were famously envisioned by Grothendieck to capture similar behavior of cohomology theories in an abelian category. The construction of such a category is not an easy task and has been studied for many years. The main approach is to define a derived category of motives with the hope to find a t-structure on it, so that the heart of this t-structure defines the abelian category of motives. To capture functorial behavior on cohomology theories one demands a full six functor formalism for the derived category of motives. There are several versions of the derived category of motives which agree under certain assumptions. One version was constructed by Cisinki and Déglise in the case of rational coefficients, which we denote by (cf. <cit.>). They show that the assignment X↦(X) from smooth k-schemes indeed admits a six functor formalism (⊗⊣,f^*⊣ f_*,f_!⊣ f^!) and agrees with the classical construction of Morel. In particular, they show that motivic cohomology, i.e. _(X)(1_X,1_X(n)[m]) agrees with Bloch's higher Chow groups A^n(X,2n-m)_. With the help of the 6-functor formalism, we can define the motive of an k-scheme π X→(k) resp. the global sections via M_k(X)π_!π^!1_Y RΓ_S(X,)π_*π^*1_k computing motivic cohomology resp. homology. The existence of a t-structure is a more delicate problem and in general not known. Levine shows that for a particular class of schemes X, e.g. finite fields or affine spaces over finite fields, a t-structure exists on the full triangulated subcategory of Tate-motives (X)⊆(X) generated by the 1_X(n) for n∈ (cf. <cit.>). Further, using weight structures one can see that (_p) is equivalent to the bounded derived category of -vector spaces. We also have realization functors _ℓ(X)→ D_(X,_ℓ) for ℓ≠ p that is conservative and t-exact for the perverse t-structure on D_(X,_ℓ). Let us remark that if for a morphism f X→ Y we have f_*1_X∈(Y), then this automatically induces an adjunction of (X) and (Y). In particular, the adjunction f^*⊣ f_* restricts to Tate motives. We call morphisms with such a property Tate.Let us now explain the relation between Tate-motives and geometric representation theory. For simplicity, let us stay in our setting above, i.e. G/_p is a split reductive group. Let T be a split maximal torus inside a Borel B of G. In <cit.>, Soergel and Wendt show that for schemes stratified by affine spaces, such as the flag variety G/B, one can define a subcategory of called stratified Tate-motives that admits a t-structure. This t-structure is glued from the t-structure on the strata. For G/B with the Bruhat stratification, we will denote the category of stratified Tate-motives with _(B)(G/B). Soergel and Wendt show that _(B)(G/B) is equivalent to the bounded derived category D^b(^,ev) of graded H^*(G/B)-modules concentrated in even degrees. To connect this to representations, we have to go further and endow motivic cohomology with group actions.For this, we need to define equivariant motives. To make sense of the following construction, we need to work in the setting of ∞-categories. The idea is to define () for an Artin-stack via gluing along the atlas. As we essentially have to glue the derived category this only makes sense in the ∞-categorical framework. Then this gluing can be defined via right Kan-extension from schemes to Artin-stacks (cf. <cit.>). As one would expect motivic cohomology of a quotient stack [X/G], where X is a smooth k-scheme and G is a linear algebraic group, yields the equivariant Chow-groups A^n_G(X,2n-m) of Edidin and Graham (cf. <cit.>). In this way, one can also extend the stratified Tate-motives to Artin-stacks. In the case of the flag variety, Soergel, Virk and Wendt show that (B G/B) is equivalent to the bounded derived category of bi-Soergel-modules (cf. <cit.>). Further, they show that applying K_0 yields an isomorphism to the Iwahori-Hecke algebra and Verdier duality yields the Kazhdan-Lusztig involution. In particular, the stratified Tate-motives of B G/B with the weight structure and 6-functor formalism carries information about the ℓ-adic geometric representations of G. §.§ Connection between finite groups of Lie type and their associated flag variety As we have seen above the geometric representation theory of the flag variety is linked to stratified Tate motives. This particular connection uses that the flag variety is stratified by affine spaces and that Tate-motives behave nicely under this stratification. We expect that the geometric representation theory of G^F is linked to Tate-motives on G^F the classifying space of G^F. In this case, we cannot apply the theory of <cit.> as G^F is not split reductive. But we can still link Tate motives of G^F to Tate motives on B G/B in the following way. The stack G^F is equivalent to G/_φG, where G acts on itself via φ-conjugation. Let us fix a maximal torus T⊆ B. Let T act on G also via φ-conjugation. We can embed T into T× T via t↦ (t,φ(t)). If we now let T× T act on G via (t,t',g)↦ tgt'^-1, we get a zigzag of Artin-stacks G/_φG [l,"a",swap] G/_φ T[r,"b"] G/T× T. It is a fact that (G/T× T)≃(B G/B). Thus, on the level of motivic categories, this zigzag yields adjunctions between (G^F) and (B G/B). Now we can formulate the leading question of this article: ∗Do these adjunctions preserve Tate-motives? The answer to this question is positive and yields a first point to access motivic representation theory of G^F via the motivic geometric representation theory of G. §.§ Equivariant motivic homotopy theory of split reductive groups From now on let k be a field and S a regular k-scheme of finite type. We will first work with split reductive S-group schemes in this generality. Later on, we are going to focus on the case where S=(k) and as any reductive group over a field becomes split after finite Galois extension, we deduce the answer to our leading question (<ref>) from the split case. Thus, for now let G be a split reductive S-group scheme with split maximal torus T contained in a Borel B of G. Our main question is about the behavior of Tate motives under the induced maps on corresponding to the zigzag (<ref>). We will work in a more general setting and look at a and b separately. §.§.§ Equivariant motives and passage to Tori The morphism a resembles the motivic version of a more classical problem on classical problem on Chow groups. Let X be an S-scheme with G-action, what is the relation between A^∙_G(X) and A^∙_T(X)? In <cit.> Edidin and Graham answer this question for rational Chow groups in the case S=(k), i.e. A^∙_G(X)_≅ A^∙_T(X)^W_, where W denotes the Weyl group of T in G.This isomorphism is just a shadow of an equivalence that can be seen motivically. [<ref>] Let S be smooth over k. Let G be a split reductive S-group scheme with split maximal torus T and Weyl group W. Assume G acts on a locally of finite type S-scheme X. Then the natural map X/T→X/G is Tate.Further, we have RΓ_S(X/G,)≃ RΓ_S(X/T,)^W. In particular, applying this result to motivic cohomology in the case, where S=(k), we can extend the classical result to higher Chow groups even in the non-split case. [<ref>] Let G be a reductive k-group scheme with split maximal torus T and absolute Weyl group W. Assume G acts on a smooth k-scheme X. Then for all n,m∈, we have A^n_G(X,m)_≅ A^n_T(X,m)_^W. The idea of the proof of Theorem <ref> is to factorize X/T→X/G into X/T→G/N_G(T)→X/G. Then the first map of the factorization is naturally a W-torsor and the second map a G/N_G(T)-bundle. For torsors under finite groups étale descent relates motives via W-invariants. For G/N_G(T)-bundles it suffices to see that RΓ_S(G/N_G(T),) is trivial. We will prove this by reducing the triviality to the equivalence of the map K_0(S)→ K_T(G)^W, which by classical results in equivariant K-theory is known. §.§.§ Motives of T-torsors Let return to our main setting and consider the embedding T↪ T× T given by the graph of φ. The quotient T× T/T under this embedding is isomorphic to T. In particular, this isomorphism gives the map bG/_φT→G/T× T the structure of a T-torsor. So, next we want to understand motives of T-torsors. So let X→ Y be a morphism of Artin-stacks that is a T-torsor. Classically, Chow groups in this setting can be computed rather easily. For each character χ∈ T→ we get a 1-dimensional representation κ(χ) of T. This yields a line bundle L_χ X×^Tκ(χ) on Y. Multiplication with the first Chern class of L_χ yields an action of the character group of T on A^∙(Y). In the case of quotient stacks, like the morphism b, we get A^∙_T(G)≅ A^∙_T× T(G)/ A^∙_T× T(G) (cf. <cit.>).Again, this is just a shadow of of computations for oriented cohomology theories.The idea is the following. As T is split reductive, we have T≅^r for some r∈. By applying successive -quotients, we can write X→ X_1→ X_2→…→ X_r≅ Y, where X_iX/^i. Each of the maps X_i-1→ X_i is a -torsor. So we may reduce to the case, where T=. In this case, we can follows <cit.> and assign the line bundle X×^Å^1 over Y. Multiplication with the first Chern class of yields a fiber sequence M_k(X)→ M_k(Y)→ M_k(Y)(1)[2]. Applying this result to arbitrary T-torsors using the sequence above yields the following. [<ref>] Let f X→ Y be a T-torsor of smooth Artin stacks over S. Then f is a Tate map. Applying this again to motivic cohomology yields the result for on Chow groups for Artin-stacks. [<ref>] Assume that S=(k) and let X→ Y be a T-torsor of smooth Artin stacks over k. Then A^∙(X)_≅ A^∙(Y)_/ A^∙(Y)_. If X and Y are represented by quotients of qcqs schemes by diagonalizable group schemes, for example for b as above, we can actually replace the Chow ring with equivariant K_0. More generally, the analogous statement holds for any oriented cohomology theory that is m-connective (cf. Remark <ref>). §.§.§ Applications to quotients up to conjugation by isogeny We have seen above, that the maps relating G^F and G/T× T are Tate. But we haven't particularly used the fact that we are interested in Frobenius-conjugation but rather conjugation up to isogeny. Thus, we can work in a more general setting, that we describe in the following.Let S be a quasi-compact smooth k-scheme. Let G be a split reductive S-group scheme, P resp. Q be parabolics inside G with Levi-components L resp. M. Let φ L→ M be an isogeny. Then L acts on G via (l,g)↦ lgφ(l)^-1. Let T be a split maximal torus of G contained in L. Fix a g_0∈ G(S) such that g_0φ(T)g_0^-1 =T and denote by φ the composition of φ and g_0-conjugation. We can embed T into T× T via t↦ (t,φ(t)). [<ref>] In the setting above, we have the following zigzag of Tate maps G/_φL [l,"a",swap] G/_φ T[r,"b"] G/T× T. Further, if S=(k) we can compute the motivic cohomology of G/_φL as A^n(G/_φL)_≅( A^n_T(G/B)_/ A^n_T(G/B)_)^W_L Using results about equivariant K-theory by Uma and Krishna and our results on the cohomology theory of T-torsors, we can extend the situation above to equivariant K-theory and generalize a result by Brokemper. [<ref>] In the setting above assume that S=(k), then we have K_0(G/_φL)_≅ R(T)_^W_L/(f- f| f∈ R(T)_^W_G), where W_G denotes the Weyl group of T in G. Let us give two interesting examples, where Theorem <ref> can be used. * Let k=_q be a finite field with q-elements and assume S=(k). If we set L=G and φ the q-Frobenius, then we precisely get the situation of the beginning back. In particular, we see that there is an adjunction between (G^F) and (B G/B). * Let k be a finite field of characteristic p>0 and assume S=(k). Another interesting example is the stack of G-zips of Pink-Wedhorn-Ziegler (cf. Example <ref>). In particular, we can partially recover the computations of Brokemper <cit.> for Chow groups and we can generalize these further to Grothendieck ring of the stack of G-zips. §.§ The case of arbitrary reductive groups over fields As before let k be a field. Further, let G be a reductive group over k with maximal torus T. As before, let φ L→ M be an isogeny between Levi components of parabolics. As we have seen φ-conjugation yields an action of L on G and we get a zigzag G/_φL [l,"a",swap] G/_φ T[r,"b"] G/T× T, where b is induced via the graph of φ (up to conjugation by an element of G). We are ready to answer the leading question of this article (<ref>) by reducing to the split-case.To prove Theorem <ref>, we need to understand torsors under finite groups. This also becomes helpful in this situation. Any reductive group over a field k becomes split after passing to a finite Galois extension K/k. In particular as (K)→(k) is a (K/k)-torsor and therefore, we can deduce from Theorem <ref> the analogous statement for arbitrary reductive groups. [<ref>] Let G be a reductive group scheme over a field k and T a maximal torus. Further, let φ L→ M be an isogeny between Levi components of parabolics. As before, the maps in the induced diagram G/_φL [l,"a",swap] G/_φ T[r,"b"] G/T× T. are Tate. §.§.§ Structure of this article We start this article by recalling properties of the ∞-category of motives and how to extend this to arbitrary Artin stacks. After defining the necessary notions for this article, we quickly recollect some computational aspects.Afterwards, we start to focus on motives on schemes with group action. First, we explain how to achieve a group action on motives and how torsors under finite groups of Artin-stacks have a particular behavior. Then we concentrate on the case T⊆ G, a split maximal torus inside a reductive group. Namely, we show that the relation between T-equivariant Chow groups and G-equivariant Chow groups extend to the motivic case.Next, we show that T-torsors of Artin-stacks are Tate and explicitly compute motivic cohomology implying the classical case of Chow groups.In the end, we focus on split reductive groups with conjugation up to isogeny. We use our results from before to get the desired adjunction of Tate motives with the T-equivariant flag variety. Further, we show to extend all of these results to the case of quasi-split reductive groups. We end the paper with ideas for generalization, that we want to address in the future. §.§.§ Setup Throughout, we fix a noetherian excellent scheme of dimension at most 2 and a regular scheme S of finite type over . An Artin-stack is an algebraic stack in the sense of <cit.>. Every Artin stack will be of finite type over S and any morphism of Artin stacks will be an S-morphism.Throughout, we will work in the setting of ∞-categories and freely use the language of ∞-categories.Throughout denotes the Beilinson motives with coefficients in .Let X be a S-scheme. Note that (X)≃ D_Å^1,(X,) by <cit.>. Since we work over an excellent base, the functor X↦(X) satisfies h-descent (cf. <cit.>). §.§ Acknowledgement I would like to thank Paul Ziegler, who communicated this project and shared his thoughts with me. Further, I would like to thank Torsten Wedhorn for multiple discussions and comments. Finally, I would like to thank Arnaud Éteve, Marc Hoyois, Adeel Khan, Jakob Scholbach, Fabio Tanania, Timo Richarz, Thibaud van den Hove for fruitful discussions and feedback. This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 Geometry and Arithmetic of Uniformized Structures, project number 444845124 and by the LOEWE grant `Uniformized Structures in Algebra and Geometry'. § RATIONAL EQUIVARIANT MOTIVIC HOMOTOPY THEORY In this section, we want to recall some properties of the category of (rational) motives and how to extend this to Artin stacks. We expect that most readers are familiar with the notion of motives and the 6-functor formalism and refer to <cit.> for an overview of the properties of the 6-functor formalism.Nevertheless, to prevent confusion, let us quickly recall some notation of loc.cit.. In the following any scheme and any morphism will be considered in the category of finite type S-schemes, _S^. (i) For any S-scheme X, (X) is a stable, presentable, closed symmetric monoidal ∞-category. The ⊗-unit will be denoted by 1_X. It has all limits and colimits. (ii) The assignment X↦(X) can be upgraded to a presheaf of symmetric monoidal ∞-categories ^*_S^→^⊗, X↦(X), f↦ f^*. For any morphism of schemes f X→ Y, there is an adjunction f^*(Y)[r,"",shift left = 0.3em] [l,"",shift left = 0.3em](X) f_*. (iii) If f is smooth, then f^* has a left adjoint, denoted f_♯. (iv) The assignment X ↦(X) can be upgraded to a presheaf of ∞-categories : (_S^)^→, X ↦(X), f ↦ f^!. For each f, there is an adjunction f_! : (X) ⇄(Y): f^!. For any factorization f = p ∘ j with j an open immersion and p a proper map, there is a natural equivalence f_! ≅ p_* j_♯. (v) For the projection p: _,S×_S X → X, and any M ∈(X), the map p_♯ p^* M[-1] → M[-1] in (X) is a split monomorphism. The complementary summand is denoted by M(1). The functor M ↦ M(1) is an equivalence with inverse denoted by M ↦ M(-1). For any integer n the n-fold composition is denoted by M↦ M(n) and in the future, we will abbreviate ⟨ n⟩ (n)[2n]. Let be a prestack, i.e. presheaf of anima on the category of rings. There are several approaches to the ∞-category (). If is an Artin-stack over a field k, there are constructions of its motive similar to equivariant Chow groups. One resolves by open substacks (_i) such that on each _i there is a vector bundle V_i together with an open U_i with a free G-action such that the codimension of V_i∖ U_i tends towards infinity (cf. <cit.>). This construction was already used for the motive of classifying stacks G by Morel-Voevodsky (cf. <cit.>). Totaro then gave an explicit computation of the motive of over a field (cf. <cit.> and Example <ref>). Alternatively, Richarz-Scholbach give a construction via certain left and right Kan extension (cf. <cit.>). Their approach is based on gluing the motivic structure on Beilinsion motives to arbitrary prestacks. Indeed, (-) satisfies h-descent, so it is rather formal to extend the six functor formalism to Artin stacks, we will use this approach. One should note that this was also discussed in <cit.>, to extend the 6-functor formalism to higher Artin stacks. For computations of the underlying motives it seems to be better to work with the definition of Hoskins-Lehalleur resp. Morel-Voevodsky. Let f→(k) be an Artin stack and let M() denote the k-linear motive of defined in <cit.>. This defines an object in ^*((k)). Let 1_k denote the unit in ^*((k)). We will see in Corollary <ref> that if f is smooth, we have M()≃ f_♯f^*1_k. In particular, if we use the approach of <cit.> and define a motive of a prestack as the ♯-push/*-pull of the unit, we see that our notion of motives on Artin stacks agrees with the classical ones. [<cit.>] Let y_S^↪ P(_S^) be the Yoneda embedding, where _S^ denotes the (Nerve of the) category of affine schemes of finite type over S. We denote the right Kan extension of ^!_S (_S^)^→_ along y, where the transition functors are given via !-pullback, with _S - here _ denotes the ∞-category of presentable stable -linear dg-∞-categories with colimit preserving functors. For a prestack , we define the ∞-category of S-linear motives (with rational coefficients) of as _S(). Note that in <cit.>, Richarz and Scholbach give a definition of for presheaves on all rings to anima. But as we only work with Artin-stacks that are of finite type over S our definition suffices.Khan showed in <cit.> that this method of extending the theory of motives to (derived) Artin stacks, does not loose the 6-functor formalism. One way to see this, is that we can use the DESCENT program in <cit.>, since the Beilinsion motives satisfy étale descent, in our context. As mentioned in <cit.>, this is equivalent to the construction of <cit.>. Let be the restriction of to Artin-stacks of finite type over S. Then is compatible with the 6-functor formalism in the sense of <cit.>. The proof is the same as <cit.>[As mentioned in op.cit., the method of extending the 6-functor formalism works with any motivic category that satisfies étale descent.]. Let be an Artin S-stack with structure morphism f→ S. Then we define the (rational) S-linear motive of as M_S() f_!f^!1_S.If S=(A) is affine, we write M_A().We further define the global sections of over S to be RΓ_S(,) f_*f^*1_S. Let f→ S be an Artin-stack over S. If f is smooth, then relative purity implies that f_♯f^*≃ f_!f^! and in particular, we see with _(S)(M_S(),(n)[m]) ≃_(S)((-n)[-m],RΓ_S(,)) that M_S() computes motivic cohomology and RΓ_S(,) motivic homology. Let G be an S-group scheme acting on an S-scheme X, via a morphism a. For the quotient stack X/G, we can define a simplicial object in finite type S-schemes, via its Bar-resolution …[r,"", shift left = 0.6 em][r,"",shift right= 0.6 em][r,""] G×_S X[r,"p", shift left = 0.3 em][r,"a",shift right= 0.3 em,swap] X We denote the corresponding simplicial functor with ^∙(X,G). There is also an alternative way to define motives of algebraic stacks via the Bar-resolution. For each n≥ 0 let (^n(X,G)) be the free étale sheaf with coefficients in associated to ^n(X,G). This yields a simplicial object in étale sheaves of finite type S-schemes with rational coefficients. The complex associated to this simplicial object induces a motive M_S(^∙(X,G)) in (S). For S=(k), the spectrum of a field, Hoskins-Lehalleur explain in <cit.> that this definition is equivalent to their definition of a motive of an Artin stack. The naturally arising question is if M_k(^∙(X,G)) is equivalent to M_k(X/G) as defined in Definition <ref>. If X/G is representable by a smooth scheme, then the answer is positive and follows by cohomological descent for with respect to the h-topology (cf. <cit.>). Thus, the answer stays positive for smooth Artin stacks, as satisfies h-descent by gluing (cf. <cit.>). Let k be a field. Let X be a smooth k-scheme of finite type and G be a smooth k-scheme acting on X with structure map fX/G→(k). Then M_k(X/G) is equivalent to M_S(^∙(X,G)). This follows from <cit.> and the discussion above. We can use Corollary <ref> to compute the motive of B as in <cit.>. Let k be a field. Further, let _,k act trivially on (k). Then M_k(B_,k)≃_i∈M_k(^i_k)≃⊕_i≥ 01_k⟨ i⟩. In the following we want to understand the Gysin sequence for algebraic stacks. Let us quickly recall it in the scheme case.Let i Z↪ X be a closed immersion of S-schemes of pure codimension n with open complement U. Let us assume that Z and X are smooth over S. In particular, we see that i is equivalently a regular closed immersion of codimension n. Then there exists a fiber sequence of the form M_S(U)→ M_S(X)→ M_S(Z)⟨ n⟩ (cf. <cit.>). We are going to replace S by a smooth Artin stack over S and X by a smooth Artin stack over . For this let us also recall the notion of a (regular) closed immersion of a certain codimension for Artin stacks.Let ι↪ be a closed immersion of locally noetherian Artin stacks. Let X→ be a smooth atlas. Then ι is representable and we define the codimension of as the codimension of ×_ X in X (cf. <cit.>). We can also define the notion of a regular immersion in that way (cf. <cit.>) and the notion of its codimension. In particular, a closed immersion of S-smooth Artin stacks ↪ is automatically regularly immersed and the codimension of the regular immersion agrees with the codimension as a closed substack. Let f→ be a smooth schematic morphism of smooth Artin-stacks. Further let i↪ be a closed immersion of (pure) codimension n such that is smooth over with open complement j→. Further, let us denote f_0 f∘ j and f∘ i. Then there exists the following fiber sequence f_0!f_0^!1_→ f_!f^!1_→_!^!1_⟨ n⟩ . Let Y→ be a smooth atlas. Let us define X Y×_ and let (Y)_∙ resp. (X)_∙ denote the corresponding Čech nerves. By construction (X)_∙ is obtained by (Y)_∙×_Y X. So, by functoriality we get maps f_∙ !((Y)_∙)[r,"",shift left = 0.3em] [l,"",shift left = 0.3em]((X)_∙) f_∙^! that induce the maps f_! and f^! after passing to the limit. By construction we have a pullback diagram (X)_∙[r,"f_∙"][d,"j_X,∙"] (Y)_∙[d,"j_Y,∙"] [r,""] . In particular, by smoothness of the atlas and the exchange equivalence, we have j^*_Y,∙f_!f^!1_Y≃ f_∙ !f_∙^!1_(Y)_∙. Thus, by smoothness we can use that ^*()≃^!() and descent to see that lim_Δ f_∙ !f_∙^!1_(Y)_∙≃ f_!f^!1_Y. Analogously, we can write lim_Δ f_0∙ !f_0∙^!1_(Y)_∙≃ f_!f^!1_Y, lim_Δ_∙ !_∙^!1_(Y)_∙≃ f_!f^!1_Y. Therefore, we may assume that is representable by a scheme and by representability of f also , and are representable by schemes. Hence, the result now follows from the classical Gysin sequence (cf. <cit.>). Lastly, let us define Tate-motives. As we mentioned in the introduction, the existence of a motivic t-structure is still an open problem. For a field k Levine proved that under certain vanishing assumptions on motivic cohomology in (k), the so called Beilinson-Soulé vanishing conjecture, such a t-structure exists on the full stable subcategory generated by Tate-twists 1_k(n) (cf. <cit.>). The Beilinson-Soulé vanishing conjecture holds for example for finite fields. Let X be an Artin stack. We define the category of Tate-motives (X) to be the full stable subcategory of (X) generated by 1_X(n), for n∈. An element M∈(X) is Tate, if M∈(X). A map f X→ Y of Artin stacks over S is called Tate if f_*1_X is Tate. Levine further shows, that existence of a weight structure on Tate motives for a field imply that the heart of (_p) under the motivic t-structure is equivalent to the category of graded finite dimensional -vector spaces (^). In particular, using a classical result of Wildeshaus, we see for example that (_p)≃^b(^), where ^b denotes the bounded derived category.Let us give a particularly interesting example of Tate map that will be used later on. Let G be a split reductive S-group scheme and B⊆ G a Borel. Then we claim that the structure map of the flag variety G/B→ S is Tate. Indeed, the Bruhat decomposition of G/B yields a stratification by affine spaces indexed by the Weyl group. The length of each Weyl element yields a partial order on the associated Schubert varieties. Using this order, one can show using standard arguments that RΓ_S(G/B,)≃⊕_w∈ W 1_S⟨ l(w)-n⟩, where n denotes the relative dimension of G/B over S (cf. <cit.> or <cit.> for more details on analogous problems). If f X→ Y is a smooth morphism of Artin stacks and X is smooth over S, then D_Y(f_*1_X) ≃ f_!D_X(1_X) ≃ f_!1_X⟨Ω_X/S⟩ and D_Y(f_!1_X)≃ f_*1_X⟨Ω_X/S⟩. Thus, f_*1_X is Tate if and only if f_!1_X is Tate. In the literature one also considers the stable cocomplete ∞-category generated by Tate-twists. This is usually also referred to as “Tate-motives” but we will differentiate these from our definition. An example of such motives is given by the motive of →(k), the classifying stack of over a field k. Indeed, M_k(())≃⊕_n∈ 1_k⟨ n⟩ and thus this lies in the ind-completion of the category of Tate-motives. Let X be an Artin-stack. We will call an M∈(X) completed Tate if it is already in the full stable cocomplete subcategory generated by 1_X(n). § EQUIVARIANT MOTIVES UNDER SPLIT REDUCTIVE GROUPS Let G be a split reductive S-group scheme and T a split maximal torus in G with Weyl group W, where W denotes the S-points of the Weyl group scheme (cf. <cit.> for more on Weyl groups of split reductive group schemes). Let X be a scheme with G action. In this section, we want to show that the natural map fX/T→X/G is Tate, i.e. f_*1_X/T is a Tate-motive in (X/G).The key idea is to use the factorization X/T→X/N→X/G, where N is the normalizer of T in G. We note that per definition the map gX/T→X/N is a W-torsor. As the constant group scheme associated to W is finite étale, we see that after passage to an étale cover, X/T is isomorphic to the disjoint union of X/N indexed by W. Thus, automatically g is Tate. The map X/N→X/G is a G/N-torsor. Up to taking W-invariant, we can identify G/N with G/B. On G/B, we have stratification by Schubert cells, which are affine spaces. In this way, we can decompose p_*1_G/B as a direct sum of twists and shifts indexed by W, where p G/B→ S is the structure map. This will enable us to reduce the question to ordinary equivariant K-theory. More precisely, it will be enough to show that K_0(S)→ K_T(G)^W is an isomorphism, which is classically known.Before coming to our main result of this section, we will first introduce group actions on motives. In the end, we will see that our computations before showed that analogous to the case of Chow-groups, we have RΓ_S(X/G,)≅ RΓ_S(X/T,)^W. A key argument in this section, is that torsors under finite étale group schemes have related motives by taking invariants of the action. This is a major obstruction for the generalization to integral coefficients, as we expect that this is only satisfied if we have étale descent (c.f. <cit.>). Nevertheless, using the theory of étale motives it should to have analogous results after inverting only the residue characteristics of our base scheme. §.§ Torsors under finite groups To warm up, we first show that torsors under finite groups are Tate. In particular, it will be clear that the canonical map X/T→X/N, considered above, is Tate Let f X→ Y be a G-torsor of Artin stacks under a finite étale S-group scheme G. Then f is Tate, i.e. f_*1_X is a Tate-motive in (Y). Let n denote the degree of G over S. Then we claim that the natural map ∐_i=1^n 1_Y→ f_*1_X induced via the unit of f_*f^* is an equivalence.Indeed, by h-descent we may assume that f is given by the trivial G-torsor G×_S Y→ Y and Y is represented by a scheme. In particular, f is a finite étale cover of degree n. After passage to an étale cover, we may assume that G×_S Y ≅∐_i=1^n Y implying the claim. Next, let us analyze the structure of the motives with respect to the base. For this, we need to understand group actions on motives and taking fixed points under these actions. An action of a group G on a motive M in (X) is map G→_(S)(M) that is a group homomorphism on π_0. Or equivalently, it is a map Σ_G→(X) (here Σ_G denotes the deloop of the the group G seen as a discrete category - usually this is denoted with G but to avoid confusion, we changed the notation). Let M be a motive in (X) with an action by a finite group G. Then we define the homotopy G-fixed points of M, denoted by M^hG, as the limit of the action map Σ_G→(X). Let X be an Artin stack and with an action by a finite group G. For each g, we have an action map a_g X→ X. By construction of (X) this defines a map g. 1_X→ 1_X by lax-monoidality of the *-pushforward. This endows any M∈(X) with an action via g.. We define the G-fixed points of M, denoted by M^G as the image[Note that (X) is pseudo-abelian and hence for any idempotent operator we can define its image.] of the map p = 1/# G∑_g∈ G g. The canonical map M^G→ M defines an equivalence M^GM^hG (cf. <cit.>).Let X→ Y be a map of G-torsor of Artin-stacks over S (here we see G as a constant group scheme on S). Then the G-torsor Y-automorphisms of X is isomorphic to G and thus we get a G-action on f_*f^*M for any E∈(Y) (via *-pushforward of a G-torsor Y-automorphism). Note that f_*f^*E can be used to compute the motivic cohomology of X with coefficients in E for smooth f as _(Y)(1_Y,f_*f^*E)= _(Y)(f^*1_Y,f^*E) = _(Y)(M_Y(X),E). In particular, writing the fixed points as a limit, we see that _(Y)(1_Y,(f_*f^*E)^G) = _(Y)(M_Y(X),E)^G. If G is finite, then the G-torsor f is étale and proper, hence f_*f^*≃ f_!f^!. Let k be a field and X a scheme over (k). Further, let K/k be a finite Galois extension and let us denote the base change of X to (K) by X_K. The Chow groups of X and X_K are related by the fixed points under the Galois group, i.e. A^n(X) = A^n(X_K)^(K/k). As one expects, this also holds motivically. This is due to Ayoub and Cisinki-Déglise. As noted in Remark <ref>, we do not expect this to hold, when we do not impose étale descent. Thus, we do not expect the next lemma to hold with integral coefficients. Let f X→ Y be a G-torsor of Artin stacks under a finite group G. Then the unit factors as 𝕀→ (f_*f^*)^G→ f_*f^* and the map 𝕀→ (f_*f^*)^G is an equivalence. The factorization of the unit follows from the description of (f_*f^*)^G as a limit. We claim that 𝕀→ (f_*f^*)^G is an equivalence. It suffices to check this after base change to a smooth atlas of Y. In particular, we may assume that Y is a scheme. Since _ satisfies h-descent, we may assume[Using <cit.>, we see that M_Y(X)^G≃φ_*φ^*1_Y, where φ (,G)→ Y is the induced morphism of the diagram G→_S that maps the single point of the category G=(*,_G(*)=G) to X and the morphisms to the actions. Then we can base change using <cit.>.] that f is a trivial G-torsor, where it follows from <cit.>. Let us consider the W-torsor fX/T→X/N. As W is finite étale, Lemma <ref> implies that f_*f^*1_X/T^W≃ f_!f^!1_X/T^W≃ 1_X/N. After ♯- resp. *-pushforward to the base S along the structure map X/N→ S, we see that M_S(X/N) ≃ M_S(X/T)^W RΓ(S,X/N) ≃ RΓ(S,X/T)^W. Now assume that S=(k) is the spectrum of a field. Applying the latter equivalence to motivic cohomology yields A^n_N(X,m)≅ A^n_T(X,m)^W for the equivariant intersection theory of X. §.§ The relation between the motives of X/T and X/G We have seen, that the map X/T→X/N is Tate and how to use the action of the Weyl group to compute the motive of X/N with respect to the motive of X/T. Now our goal is to show that the map X/N→X/G is Tate and how to compute the !-push/pull of this map. This will be achieved by analyzing the motive of G/N. Using that G/T→ G/N is again a W-torsor, we will reduce to the case of the flag variety G/B. For this, we will need that equivariant motives do not see the action of split unipotent subgroups.Now let us recall the definition of a split unipotent subgroup. These are extensions of vector bundles, e.g. a Borel B containing a maximal split torus T is an extension of T by a split unipotent subgroup. An algebraic S-group scheme U is called split-unipotent if there exists a normal series, i.e. a filtration U= U_n⊇ U_n-1⊇…⊇ U_0 = 0 such that U_i is normal in U_i+1, with each successive quotient is isomorphic a vector bundle (), where is a finite locally free _S-module. The S-subgroup scheme of unipotent matricies _n,S in _n,S is split unipotent. More generally, let G be a reductive S-group scheme and P be a parabolic in G, then the unipotent radical R_u(P) of P is split unipotent (cf. <cit.>). Let F be a linear algebraic S-group scheme. Consider a split exact sequence of S-group schemes 1→ U→ F→ H→ 1 where U is split unipotent. Chose a splitting π H↪ F. Let X be an S-scheme of finite type with an F-action. Then the !-pullback induces an equivalence π^!(X/F)(X/H). This is analogous to the proof of <cit.> but for completion, we give a proof.The morphism π^! induces a morphism π_n^!(^n(X,F))→(^n(X,H)). Using <cit.>, it suffices to show that π^!(F)→(H) is fully faithful. By assumption π is a U-torsor and using étale descent, we may assume that π is given by the trivial U-torsor, i.e. π U× H→ H is given by the projection. Replacing H by S, it suffices to show that π^!(S)→(U) is fully faithful. As U is split unipotent, it has a filtration by subgroup U_i with successive quotients isomorphic to a vector bundle. Using the same argumentation as above, we may assume that U is a vector bundle, in which case the assertion is clear by homotopy invariance. Let B be a Borel contain T inside G and let X be an S-scheme with B-action. The above lemma shows that (X/T)≃(X/B). Our next idea is to use that G/T→ G/N is a W-torsor. The results on torsors under finite groups combined with the above lemma will yield that M_S(G/N) ≃ M_S(G/B)^W. As the flag variety G/B is stratified by Schubert cells, which are affine spaces, one can calculate M_S(G/B) explicitly (cf. <cit.>). We will use the computations of op.cit. to see that the map pX/N→X/G is Tate. Let us assume that =(k) is the spectrum of a field and S is geometrically regular and of finite type over . Let G be a split reductive S-group scheme with split maximal torus T and Weyl group W. Let N denote the normalizer of T in G. Further, let X be an S-scheme with G-action and fX/N→X/G the canonical map. Then the unit 1_X/G→ f_*f^*1_X/G is an equivalence. In particular, f_*1_X/N is a Tate-motive in (X/N). Let us summarize the idea of the proof. We will show that the unit 1_X/G→ f_*f^*1_X/G is an equivalence. To see this we will use that we have the following pullback diagram G/N×_S X[r,""][d,""] X[d,""] X/N[r,""] X/G (cf. <cit.>). In particular, after étale-descent, we may assume that f is given by the projection G/N→ S. Using our calculations about torsors under finite groups, it is enough to show that the induced map 1_S→ RΓ_S(G/T,)^W is an equivalence. But as RΓ_S(G/T,)^W≃ RΓ_S(G/B,)^W, we see that it is Tate. Thus, we will reduce this question to a classical question about Chow rings, at least when S is a field, namely if the pullback map A^∙(S) = A^∙_G(G)→ A^∙_T(G)^W is an isomorphism. But this is known by Edidin-Graham (cf. <cit.>). If S is a not field, we have to work with K-theory and then this follows from <cit.>. Throughout this proof, we will denote for readability the structure map of an Artin-stack to S with p_.Let n denote the relative dimension of G/B. We will first show that the unit 1_X/G→ f_*f^*1_X/G is an equivalence, proving that the map f is indeed Tate.By étale descent, we may assume that YX/G is represented by a scheme and the map f is given by the structure map G/N→ S. Then it is enough to show that the unit 1_S→ RΓ_S(G/N,) is an equivalence, since the pullback of this equivalence along p_X yields the desired equivalence. Note that the natural map g G/T→ G/N is naturally a W-torsor. Hence, by Lemma <ref>, we see that 1_G/N≃ (g_*1_G/T)^W and thus RΓ_S(G/N,)≃ p_G/N*(g_*1_G/T)^W. Since p_G/N* is a right adjoint it commutes with limits. Thus, we have p_G/N*(g_*1_G/T)^W≃ (p_G/N*g_*1_G/T)^W≃ (p_G/T*1_G/T)^W≃ RΓ_S(G/T,)^W By Lemma <ref>, we see that RΓ_S(G/T,)≃ RΓ_S(G/B,). By Example <ref>, we have that RΓ_S(G/B,) ≃⊕_w∈ W 1_S⟨ l(w)-n⟩, where n is the relative dimension of the flag variety G/B. In particular, RΓ_S(G/B,) and thus RΓ_S(G/T,) is Tate. As the W-invariants are defined as an image of a map, we see that RΓ_S(G/T,)^W is also Tate. The ∞-category of Tate-motives over S is the stable subcategory of (S) generated by 1(r), for r∈. Therefore, the natural map 1_S→ RΓ_S(G/T,)^W is an equivalence if and only if the induced map _(S)(1_S(r)[m],1_S)→_(S)(1_S(r)[m],RΓ_S(G/T,)^W) is an equivalence for all r∈ and m≤ 0[By commutativity of with suspensions - a colimit.]. If r>0, then the right hand side is isomorphic to K_-2r+m(S)^(-r) and thus vanishes as the negative Adams eigenspaces vanish per definition (cf. <cit.>). By Remark <ref> and the computation of p_G/B* 1_G/B, we see that _(S)(1_S(r)[m],RΓ_S(G/T,)^W) ≅ (⊕_w∈ WK_-2r+m(S)^(l(w)-n-r))^W, which also vanishes for r> 0 as l(w)-n≤ 0. Therefore, we may assume from now on that r≤ 0. Further, we may assume that m≥ 2r as otherwise -2r + m<0 and thus also the K-groups above vanish (since S is regular noetherian). Writing _(S)(1_S(r)[m],1_S) ≃_(S)(1_S(r)[2r-2r+m],1_S) and using that -2r+m≥ 0, we may assume without loss of generality that m=2r (again use the commutativity of the -functor with limits).Since S is finite dimensional (cf. <cit.>), it is a fact that K_0(S)^(i) vanishes for all but finitely many i∈ (cf. <cit.>). Therefore, the morphism (<ref>) is an isomorphism if and only if ⊕_r∈_(S)(1_S⟨ r⟩,1_S)→⊕_r∈_(S)(1_S⟨ r⟩,RΓ_S(G/T,)^W) is an isomorphism. Equivalently, we can write this morphism as ⊕_r∈_(S)(1_S,1_S⟨ r⟩)→⊕_r∈_(S)(M_S(G/T),1_S⟨ r⟩)^W The motive M_S(G/T) is a direct sum of shifts and twists of the unit of S and therefore compact (cf. <cit.>). By compactness of 1_S and M_S(G/T), the above morphism is an equivalence if and only if the morphism _(S)(1_S,⊕_r∈1_S⟨ r⟩)→_(S)(M_S(G/T),⊕_r∈1_S⟨ r⟩)^W is an isomorphism. By construction of the rational K-theory spectrum in (S), we see that ⊕_n∈ 1_S≃_S, (cf. <cit.>). Thus, as G/T is representable by a scheme (cf. <cit.>), the right hand side is isomorphic to K_0(G/T)≅ K_T(G). Further, the properties of the K-theory spectrum yield that the induced morphism K_0(S)→ K_T(G) is given by the pullback K_0(S)→ K_T(G) (cf. <cit.>). Taking W-invariants yields a map K_0(S)→ K_T(G)^W. As K_0(S) = K_G(G) and the map above is induced via pullback, we see that it is in fact an isomorphism (cf. <cit.> - here we need that is a field). Let =(k) be the spectrum of a field and assume that S is geometrically regular of finite type over . Let G be a split reductive S-group scheme with split maximal torus T and Weyl group W. Assume G acts on a locally of finite type S-scheme X. Then the natural map X/T→X/G is Tate.Further, we have have RΓ_S(X/G,)≃ RΓ_S(X/T,)^W. Let N be the normalizer of T in G. We can factor the map in the theorem as X/TX/NX/G. The first part of the theorem follows immediately with Lemma <ref> and Proposition <ref>.For the proof of the second claim, we apply Lemma <ref> and Proposition <ref> and get (p_X/T*1_X/T)^W≃ (p_X/G*g_*f_*1_X/T)^W ≃ p_X/G*g_*(f_*1_X/T)^W ≃ p_X/Gg_*1_X/N≃ p_X/G*1_X/G. Assume that S=(k) is the spectrum of a field. Let G be a reductive S-group scheme with maximal torus T. Let W denote the absolute Weyl[Let be an algebraic closure of k, then N_G(T)/T is a finite étale group scheme of S and we set the absolute Weyl group to be W()] group of T in G. Assume G acts on a smooth S-scheme X. Then we have RΓ_S(X/G,)≃ RΓ_S(X/T,)^W. In particular, applying this result to motivic cohomology yields for all n,m∈ A^n_G(X,m)_≅ A^n_T(X,m)_^W. Any reductive group over k becomes split after passing to a finite Galois extension K/k (cf. <cit.>). Thus, let K be such an extension, so that T_K is a split maximal torus. Then we have the following diagram with pullback squares X_K/T_K[r,""][d,"f"][rr,"p_T_K", bend left = 2em] X_K/G_K[d,"g"][r,"p_G_K"] (K)[d,"p_K"] X/T[r,""][rr,"p_T", bend right = 2em] X/G[r,"p_G"] (k). As K/k is a finite Galois extension, the morphisms p_K and thus also f are H(K/k)-torsors. Then Lemma <ref> yields the equivalence p_G* 1_X/G≃ (p_K*p_K^*p_G* 1_X/G)^H. As the diagram above has cartesian squares, we can use smooth base change, to see that p_K^*p_G*≃ p_G_K*g^*. But by Theorem <ref>, we have p_G_K*g^*1_X/G≃ RΓ_K(X_K/G_K,)≃ RΓ_K(X_K/T_K,)^W. Commutativity of the above diagram yields p_K*RΓ_K(X_K/T_K,)≃ p_T*f_*1_X_K/T_K. Thus, we have p_G*1_X/G≃ ((p_T*f_*1_X_K/T_K)^W)^H. As limits commute with limits, we may write the right hand side as (p_T*(f_*f^*1_X/T)^H)^W and again by Lemma <ref>, we have (f_*f^*1_X/T)^H≃ 1_X/T concluding the proof.The result about motivic cohomology follows from Remark <ref> by using <cit.> which proves that motivic cohomology for smooth quotient stacks is computed by the higher equivariant Chow groups of Edidin-Graham.[In loc.cit. they assume some properties on the groups and on X, as Edidin-Graham need these assumptions to compare higher Chow theory of stacks and equivariant higher Chow theory <cit.>. The assumptions in loc.cit. are needed as Bloch only shows the existence of a long exact sequence for higher Chow groups in the case where X is quasi-projective. This result was extended by Levine to all separated schemes (cf. <cit.>). Thus, the comparison of Edidin-Graham and hence also of Richarz-Scholbach go through in the case of the corollary.] § TORSORS UNDER SPLIT MAXIMAL TORI Let us fix a split reductive S-group scheme G and T a split maximal torus of rank r inside G. In this subsection, we want to understand the motivic homotopy theory of torsors under split maximal tori. More precisely, let us consider the following situation. Let X→ Y be a T-torsors of Artin stacks. We want to understand the relation between M_S(X) and M_S(Y).Let us recall the classical case of Chow theory and K-theory. For this paragraph let us assume that X and Y are smooth Artin stacks over (k), where k is a field. Then the group of characters of T acts on A^∙(Y) in the following way. Let χ∈ be a character and consider its associated 1-dimensional representation κ(χ). The quotient L_χ X×^Tκ(χ) is representable by a line bundle over Y. Multiplication with the first Chern class of L_χ yields an action of on A^∙(Y). Then for Chow rings it is known that A^∙(X)≅ A^∙(Y)/ A^∙(Y). Our goal is to extend this result to motivic homotopy theory, so that it generalizes the result about Chow theory and yields a similar statement for K-theory. Even though we work with Beilinson motives, we will remark in the end how to extend this to integral K-theory under some assumptions on X and Y. §.§ The motive of T-torsors Let us denote the character group of T with . Let X→ Y be a T-torsor of Artin stacks over S and χ∈ a character. Let _,S act via left multiplication in Å^1_S. Then T acts via χ on Å^1_S and thus, X×^TÅ^1_S→ X/T≅ Y yields a line bundle over Y. The action of the first Chern class of X×^TÅ^1_S on the motivic cohomology will be described by a Gysin sequence (cf. Proposition <ref>). In the following we want to split up the T-torsor f X→ Y into a sequence of _,S-torsors. Note that by splitness T≅_,S^r. Fixing a numbering of the _,S-components of T, we can embed for any 1≤ k≤ r the product _,S^k into T by 𝕀__,S^k× 1^r-k. Then _,S^k acts on X via this embedding. We get a sequence X→ X/_,S→ X/_,S^2→…→ X/T≅ Y of _,S-torsors. We denote the induced maps X/_,S^i→ Y with f_i. Let X→ Y be a T-torsor of smooth Artin stacks over S. Then, there exists a filtration M_Y(X)=M_0→ M_1→…→ M_r = 1_Y in (S), where M_i M_Y(X/_,S^i) such that the cofiber of M_Y(X_i-1)→ M_Y(X_i) is given by M_Y(X_i)⟨ 1⟩ and the map M_Y(X_i)→ M_Y(X_i)⟨ 1⟩ is induced by multiplication with c_1(X_i-1×^Å^1_S). This follows by successively using <cit.>. But for completion let us give a proof by recalling the argument.The morphism X_i-1→ X_i is a _m-torsor. In particular, the scheme _i X_i-1×_S^_,SÅ^1_S is a line bundle over X_i. Let s X_i↪ V_i denote the zero section. Certainly, the complement of the closed immersion s is isomorphic to X_i-1. Then the Gysin sequence of Lemma <ref> yields a fiber sequence M_Y(X_i-1)→ M_Y(V_i)≃ M_Y(X_i) M_Y(X_i)⟨ 1⟩ . By construction of the Gysin sequence φ is given by multiplication with c_1(_i). This concludes the proof. Let f X→ Y be a T-torsor of smooth Artin stacks over S. Then f is a Tate map. Proposition <ref> implies that M_Y(X) is a successive extension of 1_Y. Thus, the result follows from Remark <ref>. §.§ Motivic cohomology of T-torsors For any Artin stack X over S, we will denote its motivic cohomology with H^p(X,(n))_(S)(M_S(X),(n)[p]). If X is representable by a smooth S-scheme, then we have H^p(X,(n))≅ K_2n-p(X)^(n). If S=(k) is the spectrum of a field and X is smooth over S, we have H^p(X,(n)) ≅ A^n(X,2n-p). Note that for a smooth Artin stack X over S the motivic cohomology vanishes automatically in certain degrees by descent and vanishing of negative K-theory for regular schemes, i.e. we have that H^p(X,(n))≅ 0 for p>2n. Consider a χ∈. The associated line bundle L_χ X×_S^T(_χ) yields a map H^p+2(Y,(n+1))→ H^p(Y,(n)), by multiplication with the Chern class of L_χ. We denote the image of this map with c_1(L_χ)H^p(Y,(n)). In the setting of Proposition <ref> let us further fix an n∈. Then, we have H^2n(X,(n)) ≅ H^2n(Y,(n))/ H^2n(Y,(n)). First let us note that it is enough[Any character is generated by primitive characters and the corresponding 1-dimensional representation is given by the associated tensor product.] to show that H^2n(X,(n)) = H^2n(Y,(n))/⟨ c_1(X×_S^T(_χ_i))H^2n(Y,(n))⟩ , where χ_i is a primitive character in . Let X=X_0→ X_1→…→ X_r = Y be the sequence of Proposition <ref>. For each 0≤ i≤ r this yields a long exact sequence on motivic cohomology …→ H^2n+2(X_i,(n+1)) → H^2n(X_i,(n)) → H^2n(X_i-1,(n))→ H^2n+3(X_i,(n+1))→… . We have H^2n+3(X_i,(n+1))= 0 and thus get an exact sequence of the form H^2n+2(X_i,(n+1)) H^2n(X_i,(n)) H^2n(X_i-1,(n))→ 0. The map b is the usual pullback on motivic cohomology. The map a is induced by multiplication with the Chern class of the line bundle _i = X_i-1×^_SV(_χ_i). As X_r = Y, we have H^2n(X_r-1,(n)) ≅ H^2n(Y,(n))/c_1(_r) H^2n(Y,(n)). Hence, inductively we see that H^2n(X,(n)) ≅ H^2n(Y,(n))/⟨ c_1(_i) H^2n(Y,(n))⟩_1≤ i≤ r. We are left to show that c_1(_i) H^2n(Y,(n)) = c_1(X×_S^TV(_i))H^2n(X,(n)). For this let us start with i= r. Then by construction X×_S^T V(_χ_r) ≅ X_r-1×_S^ V(_χ_r). Inductively, we may replace Y by X_i, where the claim again follows by construction. Let S=(k) be the spectrum of a field and Let X→ Y be a T-torsor of smooth Artin stacks over S. Then A^∙(X)_≅ A^∙(Y)_/ A^∙(Y)_. This follows immediately from Proposition <ref>. Proposition <ref> and Proposition <ref> can be extended to other cohomology theories in the following way. Let us fix a T-torsor X→ Y of smooth Artin stacks. (1) (Rational étale localized cohomology theories) Let us fix a T-torsor X→ Y of smooth Artin stacks. Let M∈(S)_, be an oriented E_∞-ring spectrum and let us denote its pullback to any smooth Artin-stack Z with M_Z. The orientation of M yields a Chern class map c_1(Z)→ H^2(Z,M(1))_(Z)(1_Z,M_Z(1)[2]). In the same fashion as before, for any character χ∈, we can define c_1(L_χ)H^n(Y,M). Assume there exists a p∈ such that M satisfies H^m(Z,M)_(Z)(1_Z,M_Z[m])=0 for all m>p. Then H^p(X,M) ≅ H^p(Y,M)/⟨ c_1(X×_S^T(_χ_i))H^p(Y,M)⟩. If we take for example M=_,S, the rational K-theory spectrum in (S), then we get K_0(X)_^≅ K_0(Y)_^/⟨ c_1(X×_S^T(_χ_i))K_0(Y)_^⟩, where K_0(-)_^ denotes the étale localized rational K-theory. (2) (Integral K-theory) The extension to integral K-theory is more subtle, as we have to restrict ourselves to certain algebraic stacks[These are called scalloped stacks (cf. <cit.>).] to make sense of the stable homotopy category. For simplicity, we may assume that X=X'/H and Y=Y'/F are represented by quotients of quasi-projective schemes by diagonalizable group schemes. Then there is a well defined notion of a stable homotopy category (X) resp. (Y) together with a functorial E_∞-ring object that represents equivariant K-theory (cf. <cit.>). Further, Bott-periodicity yields an orientation on . In particular, using that (X) and (Y) are connective[This holds Nisnevich locally by loc.cit. and by descent for .] (cf. <cit.>), we see that Corollary <ref> holds for integral K_0 in this case, i.e. K_0(X)≅ K_0(Y)/ K_0(Y). This result can be glued to the class of so called scalloped stacks (cf. <cit.> for the notion of scalloped stacks and the construction of ). It is not hard to see that both Proposition <ref> and Corollary <ref> can be upgraded to integral coefficients and genuine K_0, i.e. not completed, if X and Y in the assumptions are assumed to be scalloped scalloped stacks. Brokemper has a result on Chow groups of split reductive groups module Frobenius conjugation by a Levi. This can be used to understand the Chow groups of the classifying stacks of finite groups of Lie-type. Remark <ref> enables us to get an analogous result for rational K_0. Let S=(k) be the spectrum of a field. Let G be a split reductive S-group scheme with split maximal torus T. Let φ L→ M be an isogeny, where L resp. M are Levi-components of parabolic subgroups P resp. Q of G. Assume that T⊆ L. Let g_0∈ G(S) such that φ(T) = g_0T. Let T→ T denote the isogeny φ followed by g_0^-1-conjugation. Further, let us denote the Weyl group of T in L with W_L = W(T,L) and of T in G with W_G. Then we have K_0(G/_φL)_≅ R(T)_^W_L/(f- f| f∈ R(T)_^W_G). For K_0, we could not produce an analogue of Corollary <ref> using Theorem <ref> and thus have to use a result of Krishna on equivariant G-theory[As we work with smooth stacks, G-theory and K-theory agree (cf. <cit.>).] (cf. <cit.>). For completion we recall the main argument.First, we may replace Q and M by g_0Q and g_0M and assume that φ(T) = T. In particular, -conjugation of G by T is just φ-conjugation. Now we embed T into T×_ST by t↦ (t,φ(t)). Let T×_ST act on G by (t,t').g tgt'^-1. This yields a morphism G/_φT→G/T×_ST. This is T≅ T×_ST/T-torsor. Thus, by Remark <ref>, we have K_0(G/_φ T)_ ≅ K_0( G/T×_ST)_/ K_0( G/T×_ST)_. By homotopy invariance, we have K_0(G/T×_ST)_≅ K_0^T(G/B)_. Therefore, we are reduced to classical statements about T-equivariant K-theory of flag varieties (cf. <cit.>) and get K_0(G/_φT)_≅ R(T)_/(f- f| f∈ R(T)_^W_G). It follows from <cit.> that K_0(G/_φL)_≅ K_0(G/_φT)_^W_L. Thus, it suffices to show that IR(T)_^W_L = IR(T)_∩ R(T)^W_L_, where I=(f- f| f∈ R(T)_^W_G), but this follows from faithfully flatness of R(T)^W_L↪ R(T) (cf. <cit.> - see also the proof of <cit.> resp. <cit.> for a detailed argument in the Chow group case). § MOTIVIC COHOMOLOGY OF QUOTIENTS UP TO ISOGENY In the following let =(k) be the spectrum of a field and assume S is geometrically regular and of finite type over . We will first prove our main question of the introduction (<ref>) in the case of split reductive groups. Afterwards, we will show how to extend these arguments to arbitrary reductive groups over k. In the end, we want to give some thoughts on generalizations of these results. §.§ The case of split reductive group schemes Let G be a split reductive S-group scheme, P resp. Q be parabolics inside G with Levi-components L resp. M. Let φ L→ M be an isogeny. Then L acts on G via (l,g)↦ lgφ(l)^-1. We are interested in the quotient of this action, which we denote by G/_φL or rather its motive. To do so, we follow the idea of Brokemper in the proof of <cit.>.Let T be a split maximal torus of G contained in L. As φ is an isogeny the image of T is again a split maximal torus. In particular, up to conjugation by an element g_0∈ G(S), we may identify T with φ(T). The g_0-conjugation of G induces an isomorphism G→ G that is L-equivariant, where L acts on the right hand side via (l,g)↦ lgg_0^-1φ(l)g_0. In particular, after replacing M resp. Q by their g_0^-1-conjugation, we may assume that φ(T) = T. Then we have the following embedding T↪ T× T, via t↦ (t,φ(t)). The quotient under this embedding is T× T/T≅ T. Thus, the naturally induced morphism G/_φT→G/T× T, where T× T acts on G via (t,t',g)↦ tgt'^-1, is a T-torsor. This leaves us with the following picture G/_φL [l,"a",swap] G/_φ T[r,"b"] G/T× T. For morphism a we note that T is a split maximal torus inside L and L is reductive. Thus, we can apply Theorem <ref> and see that a is Tate and further, we have RΓ_S(G/_φL,)≃ RΓ_S(G/_φT,)^W_L, where W_L denotes the Weyl group of T ind L.The morphism b is by the above a T-torsor. Therefore, we can use Corollary <ref> to see that b is also Tate. Further, we can compute the motive resp. the motivic cohomology of G/_φ T via the motive resp. the motivic cohomology of G/T× T using Proposition <ref> and Proposition <ref>. But by invariance under extensions of unipotent groups, we can identify (G/T× T) with (T G/B) (cf. Lemma <ref>). Therefore, with all of the above we see that the T-equivariant motivic cohomology resp. motive of the flag variety G/B yields results about the motivic cohomology resp. the motive of G_φ/L. But the author has shown[Even though in the cited article, we assume that S is affine, all the arguments go through in without this assumption.] in <cit.> that the motive of T G/B is computed by M_S(G/B)⊗ M_S(BT). If S=(k), we have seen that M_S(T)≅⊗_i=1^rM_S() = ⊗_i=1^r⊕_j≥ 0_S⟨ i⟩, which is completed Tate (cf. Example <ref>). As the motive of the flag variety G/B is also Tate (cf. <cit.>), we see that M_S(T G/B) is completed Tate. Summarizing the above yields the following theorem. We have the following diagram of Tate maps G/_φL [l,"a",swap] G/_φ T[r,"b"] G/T× T. Further, if S=(k) the motives RΓ_S(G/_φL,) and M_S(G/_φL) are completed Tate-motives in (S). And, we can compute the motivic cohomology of G/_φL as A^n(G/_φL)_≅( A^n_T(G/B)_/ A^n_T(G/B)_)^W_L The first assertion is the discussion above. the second resp. third assertion follows again from the discussion above and Remark <ref> resp. Corollary <ref>. The last isomorphism in Theorem <ref> is also valid in the case, where S is not the spectrum of a field, after replacing the Chow groups with the right motivic cohomology group as in Proposition <ref>. Brokemper has shown that one can give a more explicit computation of the Chow ring of G_φ/L using the computations of Brion <cit.> (cf. <cit.>). To be more precise, we can write the last isomorphism of Theorem <ref> as A^∙(G/_φL)_≅ S^W_L/(f-φ f| f∈ S_+^W_G), where S=_() ≅ A^∙_T(∗)_, S_+ are the elements of positive degree and W_G is the Weyl group of T in G. A more detailed computation can be found in the proof of <cit.>.In particular, our motivic result recovers, up to computations of loc. cit. resp. <cit.>, the rational version of Brokemper's result about A^∙_L(G). We want to give two motivating examples of quotients G/_φL as above, that appear naturally and apply Theorem <ref> to see that their motives are Tate. The classifying stack of finite groups of Lie-type and the stack of G-zips. Both are examples in characteristic p>0. In the last section, we want to use Theorem <ref> to give an idea how we want to approach geometric representation theory of finite groups of Lie-type G^F by relating it motivically to geometric representation of the Langlands dual of G (see below for the notation). Let us assume that S=(_q) be a finite field of characteristic p>0. We set φ G→ G to be the q-Frobenius. This is an isogeny and thus, we can apply Theorem <ref> in this setting. Further, the stack G/_φG is isomorphic to G^F, where G^F is the stabilizer group scheme of the neutral element (cf. <cit.>). It is well known that G^F(_q)≅ G(_q), where _q denotes an algebraic closure of _q. Thus, we see that the motive of the classifying stack of a finite group of Lie-type is Tate. Further, we are able to relate Tate-motives of T G/B with Tate-motives of G^F via the diagram in Theorem <ref>. One of Brokemper's applications of his computations is the computation of the Chow ring of G-zips. In a similar fashion we will apply the above results and show that the motive of the stack of G-zips over a field is completed Tate. Let k be a field of characteristic p>0 and let S=(k). Let G, P,Q be as above. Let us denote the unipotent radical of P resp. Q with R_u(P) resp. R_u(Q). Further, let φ P/R_u(P)→ Q/R_u(Q) be an isogeny. The datum (G,P,Q,φ) is called algebraic zip-datum. To every algebraic zip-datum like above, we can associate the group E_{ (p,q)∈ P× Q|φ() = }. The group E_ acts on G via conjugation (p,q).g pgq^-1. The quotient stack G/E_ is called the stack of G-zips. There are also alternative constructions using a Tannakian formalism on the stack of F-zips (cf. <cit.>). In op.cit. there is also an explicit description of the points of . Let L⊆ P be a Levi-component of P. Then as seen in the proof of <cit.> there is a split exact sequence 1→ R_u(P)× R_u(Q)→ E_→ L→ 1, where the splitting is induced by L↪ E_, l↦ (l,φ(l)). Therefore, by homotopy invariance, we have M_S() ≃ M_S(G/_φ L) which is completed Tate by Theorem <ref> and the discussion before the theorem. §.§ Arbitrary reductive groups over fields For this section, we will assume that S=(k) is the spectrum of a field, G is a reductive k-group scheme with maximal torus T. Let denote an algebraic closure of k. The Weyl group _G(T)/T is a finite étale group scheme and W() is a finite group, called the absolute Weyl group. Note that if G is split reductive, then is the constant k-group scheme associated to W.Any reductive group becomes split after passing to a finite Galois extension of k (cf. <cit.>). Let us set H G⊗_k K, where K/k is a finite Galois extension such that M the maximal torus T_K T⊗_k K is split. Let us denote by (K/k) the Galois group of K/k . We want to extend our results of Section <ref> to G. To do so, we use that torsors under finite groups behave nicely (in the sense of Section <ref>) and that the natural projection H→ G is a (K/k)-torsor.Now let us consider the setting of Theorem <ref>, i.e. we let P,Q be parabolics in G with respective Levi parts L,M. Further, let φ L→ M be an isogeny. The base change of P,Q resp. L,M to K stays parabolic resp. the corresponding Levi components in H. Also, the base change of an isogeny is an isogeny and thus we have an action of L_K on H via φ_Kφ⊗𝕀_K-conjugation. This yields the following pullback diagram H/_φ_KL_K[r,""][d,"p"] (K)[d,""] G/_φL[r,""] (k). The natural map pH/_φ_KL_K→G/_φL is a (K/k)-torsor. By Lemma <ref>, we thus have that the map 𝕀→ (p_*p^*)^(K/k) is an equivalence. Therefore, we have RΓ_S(G/_φL,)≃ RΓ_S(H/_φ_KL_K,)^(K/k) and Theorem <ref> yields that RΓ_S(G/_φL,) is Tate. By smoothness of G/_φL over S dualizing yields, that also M_S(G/_φL) is Tate (cf. Remark <ref>). We can even go further. As in Theorem <ref>, we have a commutative diagram with pullback squares H/_φ_KL_K[d,"f"] [l,"a'",swap] H/_φ_KT_K[d,"g"][r,"b'"] H/T_K× T_K[d,"h"] G/_φL [l,"a",swap] G/_φT[r,"b"] G/T× T. Again by Lemma <ref> the identity in (G/_φT) is equivalent to (g_*g^*)^(K/k). As *-push/pull commutes with limits, we can compute a_* resp. b_* as (f_*a'_*)^(K/k) resp. (h_*b'_*)^(K/k). By Theorem <ref> a' resp. b' is Tate, by Lemma <ref> f and h are also Tate and as taking invariants under a finite group is given by extensions, we see that a and b are both Tate maps.Finally, let us summarize our discussion above in the following Theorem. In the situation above, we have the following diagram of Tate maps G/_φL [l,"a",swap] G/_φ T[r,"b"] G/T× T. Further, the motives RΓ_S(G/_φL,) and M_S(G/_φL) are completed Tate-motives in (S). See discussion above. §.§ Generalizations In this section, we want to give an overview on the integral version of Theorem <ref> and Theorem <ref>. We want to mention three questions that came up naturally during the work on this article that we want to address in the future. (1) Under what assumptions can we transport all of these results to motives defined via Spitzweck's motivic cohomology ring spectrum? (2) Is it enough to invert the residue characteristics of the base for all of our results? (3) Can the results of this article be extended to other cohomology theories? (4) Is it possible to extend Theorem <ref> to arbitrary reductive group schemes over a k-scheme S, where k is a field. Let us go a bit more into detail. So, let S=(k) be the spectrum of a field and X be a finite type S-scheme. further, let G be a split reductive group over k with split maximal torus T and associated Weyl group W. Assume G acts on X. Question (1) is rather straightforward. For Chow groups one can see that if A^∙_G(X)≅ A^∙_T(X)^W holds integrally if any only if G is special. But if we assume that G is special, then any G-torsor is trivial Zariski-locally. If we define integral motives _ on Prestacks via right Kan extension of Spitzweck motives (cf. <cit.>), we see that _ satisfies Nisnevich and in particular Zariski descent. Thus, up to technicalities, we expect that all the arguments after Section <ref> go through. Crucially, Section <ref> has to be worked out in this context, as both Ayoub and Cisinski-Déglise use rational coefficients. This is not surprising, as one can see that the particular motivic behavior of torsors under finite groups should yield étale descent. We hope that in the case of special group there is a work around. Question (2) is addressed in a similar fashion, instead of Spitzweck motives, we can use étale motive (cf. <cit.>). As these still satisfy étale descent again all the arguments after Section <ref> should go through. We expect that inverting the residue characteristics should be enough to recover the statements of Section <ref> in this case. But still one needs to prove the necessary results, which we expect to be rather straightforward. Question (3) needs more careful treatment. The results about T-torsors of Section <ref> can be extended to other oriented cohomology theories, as we have seen. Section <ref> is more difficult as it boils down to vanishing results on cohomology theories. The key part is Proposition <ref>. We expect that this proposition still holds for modules over the étale K-theory ring spectrum but we did not check this thoroughly. If one wants to work with genuine K-theory, then this statement can not be proven via étale descent and thus needs a more careful treatment. For other cohomology theories one can possibly give a precise vanishing assumption to extend the results of this article. For Question (4) let G be a reductive group scheme over a k-scheme S. Assume for now that G admits a maximal torus. Then all of our constructions make sense and as any maximal torus becomes split after passage to a Galois cover of S, we can use the same argumentation as in Section <ref> to extend Theorem <ref> to arbitrary reductive group schemes over S. But the existence of maximal tori is not guaranteed (cf. <cit.> for examples of non-split reductive group schemes over ()). But in op.cit. there are only examples for non-classical reductive groups and the author is not aware of any other examples.Another approach that one could follow is to use the scheme of maximal tori. If a maximal torus T exists inside G, then the scheme of maximal tori is isomorphic to G/T. But again, we did not follows this approach any further. tocsectionReferences halpha-abbrv
http://arxiv.org/abs/2306.03780v3
20230606153450
Notes on conformal anomaly, nonlocal effective action and the metamorphosis of the running scale
[ "A. O. Barvinsky", "W. Wachowski" ]
hep-th
[ "hep-th", "gr-qc" ]
[email protected] Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, Moscow 119991, Russia Institute for Theoretical and Mathematical Physics, Moscow State University, Leninskie Gory, GSP-1, Moscow, 119991, Russia [email protected] Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, Moscow 119991, Russia We discuss the structure of nonlocal effective action generating the conformal anomaly in classically Weyl invariant theories in curved spacetime. By the procedure of conformal gauge fixing, selecting the metric representative on a conformal group orbit, we split the renormalized effective action into anomalous and Weyl invariant parts. A wide family of thus obtained anomalous actions is shown to include two special cases of Riegert–Fradkin–Tseytlin and Fradkin–Vilkovisky actions. Both actions are shown to be contained in the first three orders of the curvature expansion for a generic one-loop effective action obtained by covariant perturbation theory. The complementary Weyl invariant part of the action is given by the “conformization” of the full effective action—restricting its argument to the conformally invariant representative of the orbit of the conformal group. This is likely to resolve a long-standing debate between the proponents of the Riegert action and adherents of the perturbation expansion for the effective action with typical nonlocal logarithmic form factors. We derive the relation between quantum stress tensors on conformally related metric backgrounds, which generalizes the known Brown-Cassidy equation to the case of nonzero Weyl tensor, and discuss applications of this relation in the cosmological model driven by conformal field theory. We also discuss the issue of renormalization group running for the cosmological and gravitational coupling constants and show that it exhibits a kind of a metamorphosis to the nonlocal form factors of the so-called partners of the cosmological and Einstein terms—nonlocal curvature squared terms of the effective action. Notes on conformal anomaly, nonlocal effective action and the metamorphosis of the running scale W. Wachowski July 31, 2023 ================================================================================================ To the memory of Stanley Deser § INTRODUCTION The status of local Weyl anomalies is widely considered to be fully settled in current literature. However, the issue of their relevance to concrete physical effects, as opposed to a mere criterion of consistency at the quantum level of the classically Weyl invariant theories, often remains a subject of the debate. The manifestation of the conformal anomaly in physical applications usually occurs within the effective action formalism, and there is extending over years debate on the structure of this action, taking place between the pioneers of the conformal anomaly and adherents of perturbation theory. The nature of this debate consists in a seemingly contradictory difference between the known expression for the anomaly action and the form of the nonlocal effective action obtained by Feynman diagrammatic technique. As is well known, the one-loop conformal anomaly for classically Weyl invariant 4-dimensional theory having in Euclidean curved spacetime the covariantly renormalized effective action [ g_μν] reads as <cit.> ⟨ T^μ_μ ⟩≡2 g_μν/√(g)δ/δ g_μν = 1/16π^2(α C^2 + β E +γ R), E= R_μναγR^μναγ - 4R_μνR^μν + R^2, where √(g)E denotes the Gauss–Bonnet density, C_μναβ is the Weyl tensor, C^2 = C_μναβC^μναβ, and α, β and γ are the numerical coefficients depending on the spin of the quantum field.[We work in Euclidean signature spacetime, and our notations are R^α_βμν=∂_μ^α_νβ - ⋯, R_μν=R^α_μαν, =g^μν∇_μ∇_ν. For simplicity we do not include in the anomaly the contribution F_μν^2 of the vector gauge field and φ^4-contribution of the self-interacting conformal scalar field.] The anomalous action _A[ g_μν] generating this anomaly was first derived in the nonlocal form by Riegert <cit.> and by Fradkin and Tseytlin <cit.> in the local form of the conformal Wess-Zumino action involving an auxiliary scalar field—the dilaton responsible for intetwining two conformally related metrics. The nonlocal form of the Riegert–Fradkin–Tseytlin (RFT) action reads as _A[ g ] = 1/64π^2∫ d^4x √(g) (α C^2 + β/2_4) 1/Δ_4_4 -1/32π^2(γ/6+β/9) ∫ d^4x √(g)R^2, where E_4 ≡ E - 23 R, Δ_4 denotes the so-called Paneitz operator <cit.> Δ_4 = ^2 + 2R^μν∇_μ∇_ν - 2/3 R + 1/3(∇^μR) ∇_μ and 1/Δ_4 implies its inverse—the notation for the operation of acting by its Green's function G(x,y) on a generic test function ψ(y), Δ_4 G(x,y)= δ(x,y), 1Δ_4 ψ(x)= ∫ d^4y G(x,y) ψ(y). Some time after the invention of the RFT action the attention to it was drawn by Antoniadis, Mazur and Mottola due to several applications in gravity theory <cit.>, but this caused a serious criticism <cit.> of the expression (<ref>) in view of its drastic structural difference from the renormalized effective action built within perturbation theory in powers of spacetime curvature. This expansion begins with <cit.> _ ren = 1/32π^2∫ dx √(g)[-α C_μναβln(-/μ^2)C^μναβ -γ/6 R^2 ] + O(^3), collectively denoting here the Riemann, Ricci and scalar curvature, and does not at all resemble the form of (<ref>). This criticism was maintained by objections against short distance behavior of stress tensor correlation functions generated by the RFT action, which were shown to contradict the conformal Ward identities for these correlator <cit.>. Another criticism was associated with the objections against the double pole structure of the Green's function of the operator (<ref>), ∼ 1/^2 <cit.>. Although these objections were disclaimed in <cit.> by explicit calculations of ⟨ TTT⟩-correlators, the question might still be hovering unsettled in the literature <cit.>. The goal of this paper will be to discuss the status of the effective action responsible for the generation of the Weyl anomaly. To begin with we will focus on a wide variety of nonlocal anomalous actions by including the RFT action in their functional family. The idea of this construction is similar to gauge fixing applied to the ambiguity of the conformal split of the metric argument of the action functional, which was suggested rather long ago in <cit.>. The resulting class of anomaly actions will be parameterized by the conformal gauge selecting the representative on the orbit of the local conformal group. We will explicitly demonstrate that the difference between the members of this class is a Weyl invariant functional—a point of departure between various suggestions for the anomalous action. Two particular gauges will be considered, one of them exactly corresponding to the RFT action (<ref>) and another associated with the Weyl invariant nonlocal rescaling of the metric field suggested by Fradkin and Vilkovisky. This rescaling, which is directly applicable in asymptotically flat spacetimes, was designed as a remedy against the trace anomaly <cit.>—the analogue of the Yamabe problem of a local Weyl transformation to the metric with a vanishing scalar curvature. Then we show how the Fradkin–Vilkovisky version of the anomaly action arises in the first three orders of the covariant curvature expansion for a generic one-loop effective action. We discuss the associated mechanism of partial summation of scalar curvature terms of this expansion <cit.> along with the double pole problem for the Green function of the Paneitz operator (<ref>). Lack of uniqueness of the anomaly action defined only up to a Weyl invariant functional raises, of course, the question of its incompleteness in concrete applications. This also poses the question of whether the RFT action or its modifications within the above class provides an optimal description of the physical problem in question. For example, it is well known that in two dimensions the stress tensor trace anomaly and the associated nonlocal Polyakov action are fully responsible for the Hawking radiation of the two dimensional black holes <cit.>. On the contrary, in higher dimensions the anomaly action is insufficient to describe this phenomenon. Still there is a strong belief <cit.> that at distances of the horizon scale gravity theory is essentially modified due to large infrared effects of the conformal mode described by the action (<ref>). These effects might dominate macroscopic physics at such scales, like for instance the near black hole horizon behavior of quantum stress tensor <cit.>, the contribution to the scalar sector of gravitational waves <cit.> or dynamical vacuum energy in effective theory of gravity <cit.>. Though it is not entirely clear how complete is the setup in these problems, there are physical situations when the conformal mode really runs the whole show, and we consider as a direct application of (<ref>) two examples of such a situation. These are the calculation of the metric stress tensor in a generic conformally flat spacetime <cit.> and the Friedmann metric cosmology driven by the trace anomaly of conformal invariant fields <cit.>, the latter playing important role in the model of initial conditions for inflationary cosmology <cit.>. A related issue in the problem of nonlocal effective action is the question of renormalization group (RG) running of the cosmological and gravitational constants. Though the issue of running scale and its relation to the cosmological constant problem has already become a byword in current literature, it becomes increasingly clearer that this running should not be interpreted in the usual sense of RG theory <cit.>. The notion of “scale” is so ambiguous in physics that its running nature actually looses universality when addressing various physical setups, like for example associating cosmological inflation with RG running <cit.>. Serious arguments against running nature of the cosmological and gravitational couplings in <cit.> have led to the notion of cosmological constant partners <cit.> interpreted in <cit.> in terms of separation of scales or decoupling of heavy modes <cit.>. Still, it is customary to have nontrivial solutions of RG equations in renormalizable gravity models <cit.> with running scale dependent and G. Therefore a natural question arises how these solutions have to be interpreted when the tadpole structure of the covariant cosmological and Einstein terms preclude them from their actual dependence on the momentum <cit.>. So one of the goals of this paper is an attempt to clarify this issue within a special version of the notion of the “scale”. Looking forward to the final conclusion, we might formulate the suggestion for the notions of running and G couplings as their conversion or metamorphosis into their nonlocal partners similar to those introduced by J. Donoghue in <cit.>. Within perturbation scheme the cosmological and Einstein terms start manifesting themselves as nonlocal curvature squared terms very different from their original form. The paper is organized as follows. In Sect. <ref> we decompose the quantum effective action into anomalous and Weyl invariant parts by imposing the conformal gauge for the choice of the representative on the orbit of the conformal group. This allows one to build the whole class of nonlocal anomalous actions, functionally parameterized by the choice of this gauge and including the RFT action (<ref>) and the Fradkin–Vilkovisky action suggested in <cit.>. Sect. <ref> contains the discussion of the covariant curvature expansion of <cit.> and the way how it contains the anomalous action in the lowest orders of this expansion. In particular, it is shown that the Fradkin–Vilkovisky version of this action performs a resummation of the covariant curvature series in powers of the Ricci scalar <cit.>. In Sect. <ref> we give a direct and, apparently, not very well known derivation from the RFT action of the vacuum stress-tensor behavior at the orbit of the conformal group—a good example of direct applicability of (<ref>). Here we also comment on the application of the anomalous conformal Wess-Zumino action to the a-theorem <cit.> and present the generalization of the Brown–Cassidy formula <cit.> for the stress tensor to the case of a nonzero Weyl tensor, see Eq.(<ref>). Applications of the anomaly action in conformally flat spacetime are presented in Sect. <ref>. It is shown how this action underlies the construction of the inflation scenario starting from the cosmological initial state in the form of the mircocanonical density matrix <cit.>, recently reviewed in <cit.>. Important feature of this application is the value of the Casimir vacuum energy which is also determined by the coefficients of the anomalous trace (<ref>) <cit.>. In Sect.6 we discuss the problem of scale dependence of the gravitational and cosmological constants related to the ideas of <cit.> and <cit.>. Here we show that in the UV regime the RG analysis of the cosmological and Einstein terms strongly points out to the conversion of their scale dependence into the nonlocal form factors of their UV partners represented by curvature squared terms with dimensionless nonlocal coefficients. We call this phenomenon a metamorphosis of the running scale, which we derive by using a special scaling operator. In IR domain the same analysis leads to the low energy partners depending on mass scale of the theory. These nonlocal partners were suggested in <cit.> by J. Donoghue for the cosmological constant term and blueprinted for the Einstein term in <cit.> in the form of the long distance modification of Einstein gravity. In the concluding section we briefly recapitulate the above observations and dwell on related potential problems and applications. We start by discussing the role of Weyl anomaly in the problem of cosmological initial conditions for the inflation scenario driven by a conformal field theory <cit.>. This scenario motivates introduction of numerous conformal higher spin (CHS) fields whose Weyl anomaly is generated only in the one-loop approximation and, thus, acquires a kind of nonperturbative status. Then we discuss the uniqueness for the nonlocal scaling operator used for the derivation of the above metamorphosis phenomenon. In particular, we show that in the curvature squared terms of the action it is nearly uniquely determined due to general covariance of the theory, though in Lorentz symmetry violating models like Hořava gravity <cit.> it may be rather ambiguous. § CONFORMAL GAUGE FIXING The splitting of the renormalized effective action of a classically conformally invariant theory into the anomaly part _A generating the trace anomaly (<ref>) and the Weyl invariant part ^ conf, g_μνδ^ conf/δ g_μν=0, _ ren=_A+^ conf, is obviously not unique and admits the freedom _A→_A+W^ conf, ^ conf→^ conf-W^ conf, with an arbitrary conformally invariant functional W^ conf, g_μνδ W^ conf/δ g_μν=0. The freedom in the choice of W^ conf[ g_μν] arises as a functional integration constant for the first order variational equation that can be written down for _A[ g_μν] or for the renormalized effective action [ g_μν]≡_ ren[ g_μν]. At the orbit of the conformal group passing through the metric g_μν—the argument of the effective action—and parameterized by the local conformal parameter σ=σ(x), g_μν = e^2σg̅_μν, the renormalized action _ ren[ e^σg̅ ] satisfies the equation δ_ ren[e^2σg̅]/δσ = √(g)/16π^2(α C^2 + β E +γ R) |_g_μν = e^σg̅_μν, which can be integrated to give conformal Wess-Zumino action <cit.> Δ[ g̅, σ ] ≡_ ren[ g ] - _ ren[ g̅ ] = 1/16π^2∫ d^4x √(g̅){[αC̅^2 + β_4] σ + 2βσΔ̅_4σ} - 1/32π^2(γ/6 + β/9) ∫ d^4x (√(g)R^2 - √(g̅)R̅^2), where the two metrics g_μν and g̅_μν are related by the equation (<ref>), all barred quantities are built in terms of g̅_μν and Δ̅_4 is the barred version of the fourth-order Paneitz operator (<ref>). This expression _ ren-_ ren=_A -_A can also be rewritten in the other form _A[ g ]-_A[ g̅ ] = 1/16π^2∫ d^4x √(g) { [ α C^2 + β_4] σ -2β σΔ_4σ } - 1/32π^2(γ/6 + β/9) ∫ d^4x (√(g)R^2 - √(g̅)R̅^2), if one takes into account two important properties of the Paneitz operator—Weyl invariance of its densitized form, √(g̅) Δ̅_4 = √(g) Δ_4, and the finite conformal transformation of E_4—the Gauss-Bonnet density modified by √(g) R term (<ref>), √(g) _4 = √(g̅) _4 + 4√(g̅) Δ̅_4σ. These two properties are consistent with each other because the last equation should obviously remain valid under the interchange of g_μν and g̅_μν accompanied by flipping the sign of σ. There is also the third form of the Wess-Zumino action, which will be given below in Eq.(<ref>). It exists for a special renormalization converting to zero the coefficient γ of the R term in (<ref>), and underlies the proof of the so-called a-theorem for the monotonic RG flow of the coefficient a=β/16π^2 of the topological term in the trace anomaly <cit.>. Modulo a nonvanishing conformal anomaly all points on the orbit of the conformal group (<ref>) are physically equivalent, and this typical situation of a broken local gauge invariance can be managed by introducing the gauge condition which uniquely selects g̅_μν as the representative of the equivalence class of metrics (<ref>). If we denote this gauge condition as χ[g̅]=0 then this representative should be uniquely selected by the solution of the equation for the conformal parameter σ, χ[ g̅ ] = χ[ g e^-2σ] = 0, this solution being a functional of the metric _χ[ g̅ ], labelled by the gauge symbol χ, σ = _χ[ g̅ ]. The representative of the conformal orbit g̅_μν[g] as a functional of a given metric g_μν (through which the orbit is passing) becomes Weyl invariant, g̅_μν[ g ]≡ g_μν e^-2_χ[ g̅ ], g_αβδg̅_μν[ g̅ ]/δ g_αβ = 0, because under any local Weyl rescaling g_μν→ e^2σ g_μν the conformal parameter transforms as _χ[g]→_χ[g] + σ in view of the identity χ[g e^-_χ[g]] ≡ 0, so that δ_σ_χ[ g̅ ]=σ, where δ_σ is the operator of the conformal variation δ_σ≡ 2∫ d^4x σ(x) g_μν(x)δ/δ g_μν(x). For the uniqueness of such conformal gauge fixing procedure (in spacetime and at least in some finite domain of the space of metrics) the Faddeev–Popov operator Q_χ=Q_χ(x,y), corresponding to the gauge χ[g], δ_ωχ(x)=∫ d^4y Q_χ(x,y) ω(y), should be nondegenerate. Thus, the terms of (<ref>) W^ conf[ g ] = _A[ g̅ ] + 1/32π^2(γ/6 + β/9)∫ d^4x √(g̅) R̅^2 taken at g̅_μν[ g ] can be considered as an irrelevant Weyl invariant integration “constant”, while the rest of the terms can be identified with the anomaly action after the substitution of σ=_χ[ g ]. This set of anomaly actions _A[ g ]≡_χ[ g ] parameterized and labelled by conformal gauge conditions χ reads as _χ[ g ]= 1/16π^2∫ d^4x √(g) {(α C^2 + β_4) _χ - 2β_χΔ_4_χ} - 1/32π^2(γ/6 + β/9)∫ d^4x √(g)R^2. The difference between various members of this set is, of course, a Weyl invariant functional. For two arbitrary conformal gauges one has _χ_1 - _χ_2 = 1/16π^2∫ d^4x √(g) (_χ_1-_χ_2) ×[α C^2 +β_4 -2βΔ_4(_χ_1 +_χ_2)]. Conformal variation of this expression is vanishing, because of the transformation law (<ref>) for _1,2, Weyl invariance of the density √(g) C^2 and the relation (<ref>) which in the infinitesimal form reads as δ_σ[√(g) _4] = 4√(g) Δ_4σ, so that using all the above properties δ_σ(_χ_1 -_χ_2)=0. Note that with our definition of the anomaly action (<ref>) the way it enters the full quantum action can be represented as [ g ] = _χ[ g ] +[ g̅ ] +1/32π^2(γ/6 + β/9) ∫ d^4x√(g̅) R̅^2, where g̅_μν[ g ] = e^-2_χ[ g ]g_μν §.§ Riegert–Fradkin–Tseytlin gauge An obvious choice of the conformal gauge associated with the Gauss–Bonnet density and the Branson curvature is the Riegert–Fradkin–Tseytlin gauge χ__ RFT[ g̅ ] ≡_4 = 0. It can be imposed for topologically simple spacetime manifolds with a vanishing bulk part of the Euler characteristics (see Eq.(<ref>) and footnote <ref> below), to which in particular belongs asymptotically flat spacetime to be mainly considered throughout the paper. The advantage of this gauge is that it is exactly solvable due to the transformation law for the Branson curvature (<ref>). Applying this gauge and using Eq. (<ref>) we obtain a linear equation on __ RFT which has a solution in terms of the inverse Paneitz operator __ RFT=1/4 1/Δ_4_4. Formally substituting this expression to (<ref>) we obtain exactly the RFT action (<ref>). This RFT action and the inverse Paneitz operator are well defined and exist in asymptotically flat spacetime under Dirichlet boundary conditions at infinity when treated within perturbation theory in powers of the curvatures whose collection is denoted below as . Indeed, in this case 1/Δ_4=1/^2+O(), and this operator works well when it is applied to the functions of the Branson curvature type ∼_4. Because of the double-pole nature of the operator 1/^2 its action on generic functions may be badly defined due to infrared divergences, but when the function is represented by the total derivative structure it generates, when acted upon by 1/^2, well defined multipole expansion valid in four dimensions at spacetime infinity <cit.>.[ As discussed in <cit.>, the operator 1/^n in D-dimensional space with D<2n is ill defined unless the functions it acts upon are of the form ∂_α_1...∂_α_mj(x), m=2n-D+1 with the function j(x) having an asymptotic behavior j(x)=O(1/|x|^D), |x|→∞. This property can be explained by the fact that in the multipole expansion of 1∂_α_1... ∂_α_mj(x) the first few multipoles vanish, which improves the fall-off properties of the result at infinity and makes possible a repeated action by 1/.] But the Gauss–Bonnet density and √(g) R are both locally a total derivative which makes 1/Δ_4 well defined in the expression (<ref>) for __ RFT. This in fact implies the invertibility of the Faddeev–Popov operator in this gauge, which up to coefficient coincides with the Paneitz operator, Q__ RFT=4Δ_4, and thus guarantees local uniqueness of conformal gauge fixing procedure. Moreover, the above observation serves as a repudiation of the harmful role of double poles in the RFT action that was claimed in <cit.>. Absence of infrared dangerous double poles is explicit in the lowest order of the curvature expansion for __ RFT which reads __ RFT=-1/6 R+O(^2), in view of the fact that the Gauss–Bonnet density is quadratic in the curvature √(g)E = O(^2). Higher orders of this expansion are also safe because of the total derivative nature of √(g) E. Regarding the lowest order quadratic in curvature part, with the above approximation for __ RFT it equals __ RFT[ g ] = -γ/192π^2∫ d^4x√(g) R^2+O(^3), because all the terms depending on the parameter β completely cancel out, and what remains coincides with the last quadratic term of (<ref>). This coincidence fully matches with the linear in curvature part of the trace anomaly (<ref>) (its γ-term) generated by the quadratic action (<ref>). Indeed, the conformal transformation of its nonlocal Weyl term contributes only to O(^2)-part of the anomaly due to the fact that only its form factor ln(-/μ^2) is not Weyl invariant, and the whole γ-term of the anomaly entirely comes from the R^2-part of (<ref>). §.§ Fradkin-Vilkovisky gauge Another conformal gauge arises in context of conformal off-shell extension of Einstein gravity suggested in <cit.> and corresponds to the 4-dimensional version of the Yamabe problem. The representative of the conformal group orbit is chosen to be the metric with a vanishing scalar curvature χ__ FV[ g̅ ] = R̅, which implies a nonlinear but still explicitly solvable equation for __ FV , R[e^-2__ FVg_μν] = e^3__ FV(R-6 ) e^-__ FV = 0. This solution reads __ FV=-ln(1+1/61/-R/6R), lim_|x|→∞ e^-__ FV = 1 in terms of the inverse of the conformal second order operator -16 R subject to zero boundary conditions at infinity. This inverse operator also admits covariant curvature expansion and in the lowest order yields the function __ FV coinciding with that of the RFT gauge (<ref>), __ FV= __ RFT + O(^2), and, therefore, generates in the quadratic order the same expression for the anomaly action __ FV= __ RFT + O(^3). Using Eqs. (<ref>) and (<ref>) it is easy to see that the difference between RFT and FV actions is given by the exact expression __ RFT - __ FV = 1/16π^2∫ d^4x √(g)(__ RFT-__ FV) ×[ α C^2+2β Δ_4 (__ RFT-__ FV)], bilinear in the local Weyl squared term and conformally invariant nonlocal functional __ RFT - __ FV = 1/4 1/Δ_4_4 + ln(1+1/61/-R/6R) = O(^2). Therefore within perturbation theory these two actions remain coinciding even in the cubic order and become different only starting from the fourth order in the curvature. Perturbatively both terms of (<ref>) produce similar nonlocal structures of tree-like nature, that is the terms characteristic of the tree-level approximation in field theory. Such terms are composed of the powers of inverse d'Alembertians acting on the curvature tensor structures or on the products of similar nonlocal tensor structures built according to the same pattern. However, taken separately as exact entities they have essentially different types of nonlocality. RFT action formalism involves the Green's function of the fourth order Paneitz operator, whereas the FV version of the action is based on the Green's function of the second order operator -16 R. Both operators are conformally covariant, but the Weyl transformation of -16 R is different from (<ref>) -16 R= e^-3σ (-16R̅) e^σ, g_μν=e^2σg̅_μν. Moreover, FV action formalism involves a special logarithmic nonlinearity absent in RFT gauge fixing. The action of the Paneitz operator derivatives in (<ref>) can destroy this logarithmic structure, but the __ FVC^2-term in __ FV still contains it intact. A further comparison of the RFT and FV actions can be done along the lines of their “naturalness”. RFT gauge (<ref>) is based on structures organically belonging to the conformal anomaly formalism in the sense that it involves the same fundamental objects—the Branson curvature _4 and the relevant Paneitz operator Δ_4 which are immanently present in the flow of the anomalous action along the conformal group orbit (<ref>). One could even interpret this gauge as the one providing the extremum of β-terms in this expression with respect to the variation of the orbit parameter σ. This interpretation is, however, erroneous because g_μν, g̅_μν and σ cannot be treated as independent variables in Eq. (<ref>). On the contrary, FV gauge (<ref>) uses a somewhat extraneous entity—the scalar curvature—which is singled out only by the fact that it turns out to be the bearer of the metric conformal mode. As the result the advantage of FV gauge is that it does not involve higher than second order derivatives and does not produce double pole nonlocalities. Another advantage is that the equation (<ref>) disentangling the FV anomaly action from the full effective action becomes in view of R̅=0 much simpler [ g ] = __ FV[ g ] + [ g̅ ] |_g̅_μν[ g ], where g̅_μν[ g ] = e^-2__ FV[ g ]g_μν, which is obviously consistent with the fact that __ FV[ g̅ ]=0 because __ FV[ g̅ ]≡0. As compared to the FV version, among technical disadvantages of the RFT gauge and the action is the presence of fourth order derivatives of the Paneitz operator. Due to this the RFT version turns out to be vulnerable from the viewpoint of possible generalizations. For example, a modification of the gauge (<ref>) by the additional Weyl squared term, χ__ RFT→χ__ RFT+aC^2 would not work, because the relevant modification __ RFT→__ RFT+a (2Δ_4)^-1C^2 is badly defined for the reasons described above in the footnote <ref>—the additional term should have a total derivative structure. The generalization to spacetimes with nontrivial topology is also not straightforward, because the condition (<ref>) should not contradict nonvanishing Euler number of the manifold, which for compact manifolds without a boundary reads e_E=132π^2∫ d^4x√(g) E(x). Say, for a compact manifold of a finite volume V=∫ d^4x √(g) the gauge (<ref>) can be chosen to be χ(g̅) = √(g̅) (E̅ -2/3R̅ -32π^2e_E/V̅), but this leads to a nonlinear integro-differential equation for the relevant 4√(g)Δ_4 = √(g)(E-2/3 R-32π^2 e^-4/⟨ e^-4 ⟩e_E/V), ⟨ e^-4 ⟩ ≡1/V∫ d^4x √(g) e^-4, which apparently can be solved analytically only by perturbations in e_E/V. Unless stated otherwise, below we consider asymptotically flat spacetime with a trivial topology, whose Euler characteristics should be modified by the boundary term. For generic 4-dimensional manifolds with a smooth boundary it reads e_E=132π^2(∫_ M d^4x√(g) E(x)+∫_∂ M d^3x√(γ)(x)), where γ=γ_ab and γ_ab is the induced metric on ∂ M. For asymptotically flat case due to the contribution of ∂ M at infinity | x |→∞ it equals 1, so that everywhere in what follows the bulk part of the Euler characteristics is 132π^2∫ d^4x√(g) E(x)≡ e'_E=e_E-1=0.[I am grateful for this observation to M.Duff. Explicit and simple expression for the boundary term of the Euler characteristics in the 4-dimensional case can be found in <cit.>, =14R_a⊥ b⊥K^ab+16 K^a_b, where K_ab=∇_a n_b is the extrinsic curvature of the boundary, and ⊥ denotes the projection on the outward pointing normal vector n^μ. The last term in exactly reproduces the value of the Euler number e_E=1 for flat and asymptotically flat spaces <cit.>.] § CONFORMAL ANOMALY AND COVARIANT CURVATURE EXPANSION Despite the diversity of nonlocal structures of RFT and FV versions of anomaly action, neither of them seem to appear in conventional perturbation theory for quantum effective action. The covariant form of this perturbation theory in curved spacetime (<ref>) was pioneered in <cit.>, but its logarithmic nonlocal formfactor did not resemble the nonlocal operators of the RFT action (<ref>). Here we show how in spite of these discrepancies the anomaly action originates from covariant perturbation theory of <cit.>. This perturbation theory arose as a concrete implementation of the ideas of <cit.> as an expansion in powers of covariant tensors of spacetime and fibre bundle curvatures and other covariant background field objects. This expansion is completely equivalent to standard Feynman diagrammatic technique and represents its resummation converting the original perturbation series in noncovariant odjects, like matter and metric field perturbations on top of flat and empty spacetime background, into the series in powers of covariant fields strengths denoted collectively below by and including spacetime and fibre bundle curvature. To be more specific, consider the theory with the inverse propagator on top of the nontrivial field background F̂(∇)=F^A_B(∇), hat denoting the matrix structure of the operator acting in the space of fields φ=φ^A(x) with a generic spin-tensor index A and ∇=∇_μ denoting the covariant derivative with respect to the corresponding fibre bundle connection, F̂(∇)=+P̂-1̂/6 R, =g^μν∇_μ∇_ν. This operator is characterized by the “curvatures”—metric Riemann tensor with its Ricci contractions, fibre bundle curvature R̂_μν determining the commutator of covariant derivatives, [∇_μ,∇_ν] φ =R̂_μν φ, and the potential term P̂ (the term -1̂6 R is disentangled from the operator potential for reasons of convenience), =(R^μ_ναβ, R_μν, R, R̂_μν, P̂). In covariant perturbation theory the one-loop effective action gets expanded in powers of these curvatures = 1/2 Trln F(∇) = local power div_0+_1 + _2+_3 + O(^4), where _n∼^n. Within dimensional regularization of 2ω-dimensional spacetime, ω→ 2, the zeroth and first order terms of the expansion represent pure power divergences (note that we consider the case of a massless theory, or the theory where the mass matrix is included in the potential term P̂ and treated by perturbations), so that these two terms are annihilated by the regularization, while the second order term is given by the expression <cit.> ^(2)_ dim reg = -Γ(2-ω)Γ(ω+1)Γ(ω-1)/2(4π)^ωΓ(2ω+2) μ^4-2ω ×∫ dx √(g) tr {R_μν(-)^ω-2R^μν1̂ -1/18(4-ω)(ω+1) R(-)^ω-2R 1̂ -2/3(2-ω)(2ω+1) P̂(-)^ω-2R +2(4ω^2-1) P̂(-)^ω-2P̂ +(2ω+1) R̂_μν(-)^ω-2R̂^μν}, where ω=d2→ 2. Here tr denotes the matrix trace and the concrete coefficients implement the originally conjectured structure of dimensionally regularized effective action Lagrangian, (-)^ω-2, that was blueprinted in <cit.>. What is important and should be especially emphasized is that =g^μν∇_μ∇_ν means here the full covariant d'Alembertian acting on a respective scalar R, tensor R_μν or spintensor R̂_μν and P̂ objects. For brevity we will consider the case of a single conformal scalar field with 1̂=1, P̂=0, R̂_μν=0 and the following values of the trace anomaly coefficients[The coefficients have the opposite sign to those of b=-α/16π^2 and b'=-β/16π^2 in <cit.>, because in our case the stress tensor is defined with respect to the Euclidean effective action =-i_L in contrast to the definition of T^μν=2g^-1/2δ_L/δ g_μν in the Lorentzian signature spacetime of <cit.>. Comparison with <cit.> should also take into account another sign of the stress tensor defined by the variation with respect to the contravariant metric.] α=-1/120, β=1/360, γ=-1/180, for which the action (<ref>) takes the form—a particular case of (<ref>), ^(2)_ ren = 1/32π^2∫ dx √(g) {1/60[R_μνγ(-)R^μν -1/3Rγ(-)R]+R^2/1080} =1/32π^2∫ dx √(g) {1/120 C_μναβγ(-)C^μναβ +R^2/1080}+O(^3). Here γ(-) is the nonlocal formfactor (in minimal subtraction scheme with ln(4π) and Euler constants absorbed in μ) γ(-)=ln(-/μ^2)-16/15, and the transition to the last line is valid up to the higher order terms in curvature and based on the nonlocal generalization of the identity ∫ d^4x √(g) C^2=2∫ d^4x √(g) (R_μνR^μν-13 R^2) derived in <cit.> by integration by parts and use of the nonlocal representation of the Riemann tensor in terms of the Ricci one (see footnote <ref> below). The first term of this action is obviously conformal invariant in quadratic order, so that the linear in curvature part of the anomaly originates from the last term which is the RFT (or FV) action (<ref>) in the quadratic approximation with γ=-1/180. Thus, the RFT or FV action is fully recovered in this approximation from perturbation theory and, as expected, turns out to be local. §.§ Cubic order Quadratic order of the covariant curvature expansion is, in fact, a trivial generalization of the flat space expressions for self-energy operators of Feynman diagrammatic technique, because ln(-/μ^2) is just a straightforward replacement of the typical momentum space formfactor ln(p^2/μ^2) by its position space version. At higher orders the situation becomes much more complicated and usually represented in terms of correlators of stress-tensor and other observables, written down in momentum space representation, see <cit.> for the treatment of generic conformal field theories. These correlators are, of course, contained in the effective action expanded in curvatures which, for reasons of general covariance, we prefer to consider in coordinate representation. In this representation the effective action becomes for each order N in the curvature a sum of nonlocal monomials ∫ d^4x_1⋯ d^4x_N F(x_1,…,x_N)∇...∇(x_1)...(x_N) with nonlocal multiple-point coefficients and covariant derivatives somehow acting on the product of curvatures at their various points. The absence of convenient and generally covariant momentum space representation makes us to work in coordinate representation and invent a special language which would simplify the formalism and make it manageable <cit.>. This language is based on the operator representation of nonlocal formfactors, F(x_1,…,x_N) = (∇_1,…,∇_N) ×δ(x_1,x_2) δ(x_1,x_2)⋯δ(x_1,x_N), where (∇_1,…,∇_N) is the operator valued function of N independent covariant derivatives such that each ∇_i is acting on its own x_i. This allows one to write the orders of perturbation theory as ^(N) = 1/2(4π)^2∫ d^4x √(g)∑_M _M(∇_1,…,∇_N) × I_M(x_1,…, x_N) |_{x}=x, where summation runs over all invariant monomials in curvatures of a given n-th order I_M(x_1,…,x_N) ∼∇⋯∇(x_1)⋯(x_N) and after the action of all independent derivatives on their arguments all these arguments {x}=(x_1,… x_N) have to be identified. In the cubic order for the full set of curvatures (<ref>) there are 29 such invariant structures built of these curvatures and their covariant derivatives with all indices fully contracted with each other. Moreover, in view of the scalar (no free indices) nature of the formfactors and the formal identity ∇_1+∇_2+∇_3=0 (reflecting the possibility of integration by parts without surface terms, which is a counterpart to the momentum conservation in Feynman diagrams) the formfactors of ^(3) can be written down as functions of three d'Alembertians _1, _2 and _3 independently acting on three arguments of I_M(x_1,x_2,x_3). Thus, cubic order reads as ^(3) = 1/2(4π)^2∫ dx √(g)∑^29_M=1_M(_1,_2,_3) × I_M(x_1,x_2,x_3) |_{x}=x. The list of cubic invariants and their formfactors is presented in <cit.>. It is very long and, as its details are not necessary for our purposes, we will not fully present it here. We only give the general structure of the nonlocal formfactors of these invariants. It reads as a sum of three different groups of terms _M(_1,_2,_3) = A_M (_1,_2,_3) +∑_1≤ i<k^3D^ik_M/(_i-_k)ln_i/_k + B_M. Here (_1,_2,_3) is the fundamental cubic formfactor corresponding to the triangular Feynman graph of massless theory with unit vertices <cit.>, (_1,_2,_3) = ∫_α≥ 0d^3α δ(1-α_1-α_2-α_3)/α_1α_2(-_3) + α_1α_3(-_2) + α_2α_3(-_1), which cannot be reduced to an elementary function. The operator-valued coefficients A_M, B_M and D_M^ik are rational functions of three -arguments with a polynomial numerator P(_1,_2,_3) and the denominator containing together with the product _1_2_3 also the powers of a special quadratic form of these arguments D, A_M, D^ik_M, B_M ∼P(_1,_2,_3)/_1_2_3 D^L, L≤ 6, D = _1^2+_2^2+_3^2 - 2_1_2 - 2_1_3 - 2_2_3. In this cubic order of the curvature expansion the conformal anomaly (<ref>), which is quadratic in curvatures, was explicitly derived by the direct variation of the metric in <cit.>. Though this derivation has demonstrated nontrivial localization of the nonlocal terms under straightforward tracing the metric variational derivative, it still remained rather technical and not very illuminating because it has not revealed the anomalous part of the action. It turns out, however, that the transition to another basis of curvature invariants, suggested in <cit.>, explicitly disentangles this part. §.§ Conformal resummation: Fradkin–Vilkovisky anomaly action The recovery of the anomaly part of the action and its conformal invariant part is based on a simple idea that the latter should consist of the series of Weyl invariant structures. The construction of Weyl invariants can be done by the gauge fixing procedure of the above type—choosing the representative metric on the group orbit by imposing the conformal gauge. Obviously the set of invariants surviving after imposing this gauge will be minimal if the gauge would explicitly annihilate the maximum number of invariants in their original full set. For this reason the FV gauge (<ref>) is much easier to use for the separation of the total set of invariants into the Weyl type ones and those which vanish when the gauge is enforced. As R is one of the curvatures in the set of the FV gauge is more useful for the purpose of such a separation than the RFT gauge (<ref>) which nonlinearly intertwines all the curvatures. Intuitively it is also clear because R, in contrast to C^α_βμν, is a bearer of the conformal mode. In the purely metric sector such a separation is attained by the transition to the new curvature basis <cit.>, = ( R^μ_ ναβ,R_μν,R ) →= ( C^α_ βμν,R ), via expressing Ricci tensor in terms of the Weyl tensor and the Ricci scalar[In fact, the original basis and the curvature expansion of <cit.> consisted of R_μν and R because in asymptotically flat Euclidean spacetime Riemann tensor can be expressed as nonlocal power series in the Ricci tensor, R_αβμν=1/(∇_μ∇_α R_νβ-∇_ν∇_α R_μβ) -(α↔β)+O(^2),—the corollary of contracted Bianchi identity.]. This expression follows from the contracted Bianchi identity which for the Weyl tensor reads as ∇^β∇^α C_αμβν=1/2 R_μν-1/6∇_μ∇_ν R -g_μν/12 R + O(^2). This equation can be solved by iterations for Ricci tensor in terms of nonlocal series in powers of two objects—Ricci scalar R and the new traceless (and up to quadratic order transverse) tensor C_μν which is itself a nonlocal derivative of Weyl, C_μν = 2/∇^β∇_α C^α_ μβν. The resulting series begins with R_μν = C_μν + 1/3∇_μ∇_ν1/R + 1/6 g_μνR + O(^2). Effective action reexpansion imples the transition from I_M(x_1,…, x_n) to a new basis of invariants Ĩ_M(x_1,...x_n) ∼∇...∇(x_1)...(x_n), which can be separated in the set of monomials I_C(x_1,…, x_n) involving only C_μν and the set of monomials I_R(x_1,…, x_n) containing at least one scalar curvature factor, I_C(x_1,...x_n) ∼ ∇...∇ C(x_1)... C(x_n), I_R(x_1,...x_n) ∼ ∇...∇ R(x_1)C(x_2)... C(x_n), ∇...∇ R(x_1)R(x_2)C(x_3)... C(x_n), ... Expansion in the new basis of invariants implies, of course, the transition to a new set of their relevant formfactors _M(∇_1,...∇_n)→_C(∇_1,...∇_n), _R(∇_1,...∇_n), and the new expansion takes the form =W+_R, where W is the Weyl and _R is the mixed Weyl–Ricci scalar parts of the whole expansion, which we write in abbreviated form (omitting multiple spacetime arguments and the operation of equating them) W = 1/32π^2∫ d^4x √(g)∑_n,C_C^(n)I_C^(n), _R = 1/32π^2∫ d^4x √(g)∑_n,R_R^(n)I_R^(n). Note that W and its Weyl basis invariants are not Weyl invariant, because apart from Weyl tensors they contain covariant derivatives and nontrivial formfactors which do not possess conformal invariance properties. The main statement on the conformal decomposition of the effective action of <cit.> is that [ g ] = __ FV[ g ] + W[ g̅ ] |_ g̅_μν = e^-2__ FV[ g ] g_μν, where __ FV[ g ] is exactly the FV anomaly action introduced above[One can check that the last four lines of Eq. (24) in <cit.> form exact expression for __ FV[ g ] by taking into account that the function Z in this equation coincides with -__ FV and satisfies the equation Z + 12(∇ Z)^2 = 13 R.]. Conformally invariant part is obtained by the “conformization” of W, while the rest of the effective action is exhausted by the Fradkin-Vilkovisky anomaly action. Invariant meaning of this representation is that the Ricci part of the full action is not independent, but fully determined by the anomaly and Weyl parts of the action. This representation looks as the realization of Eq. (<ref>) within perturbation theory in curvatures. This result is likely to resolve a long-standing debate between the proponents of the Riegert action and adherents of the flat space perturbation expansion for the effective action with typical nonlocal logarithmic form factors of the form (<ref>). Note that these form factors do not contribute to the anomaly even though their coefficients are directly related to its expression (<ref>). Rather they become Weyl invariant under the substitution of g̅_μν as their functional argument. Validity of the representation (<ref>) was checked in the cubic order approximation for the effective action in <cit.>. The transition to the new basis of invariants in the second order leads to (see the second line of Eq. (<ref>)), W^(2)[ g ] = 1/32π^2∫ dx √(g)1/120 C_μναβ γ(-)C^μναβ, ^(2)_R[ g ] = 1/32π^2∫ dx √(g)1/1080R^2, whereas in the third order it results in a great simplification of the “Ricci scalar” formfactors _R^(3) as compared to the original ones—they become much simpler and, moreover, in their expressions of the form (<ref>) the coefficients A,D^ik_M,B_M of (<ref>) completely loose powers of the function D in the denominator. Thus, modulo the contributions of ln(_i/_k)/(_i-_k) the formfactors _R^(3) acquire the tree-level structure. The terms with these factors get, however, completely absorbed with accuracy O(^4) by the replacement W^(2)[ g_μν ]→ W^(2)[ g̅_μν ] in view of the following relation <cit.> W^(2)[ g ] - W^(2)[ g̅ ] ∼∫ dx √(g) C_μναβ[ln(-)-ln(-)]C^μναβ = ∫ dx √(g)ln(_1/_2)/_1-_2 [_2-_2] C_1 μναβ C^μναβ_2 + O(^4), _2-_2∼_3+ O(^2), where the right hand side is the set of relevant cubic order terms with the above factor acting on two Weyl tensors out of three curvatures in RCC-type invariants. What remains in the sector of cubic I^(3)_R-invariants is the set of tree-like nonlocal form factors which comprise the curvature expansion of FV action up to ^3 order inclusive. This observation done in <cit.> can be formalized as the following sequence of identical transformations [ g ] = W^(2+3)[ g ] + ^(2+3)_R[ g ] + O(^4)= W^(2+3)[ g̅ ] + ^(2+3)_R[ g ] +(W^(2)[ g ] - W^(2)[ g̅ ] )_^(2+3)__ FV + O(^4) + O(^4), where the group of the last three terms forms Fradkin–Vilkovisky anomaly action expanded with ^3-accuracy. Explicitly the cubic part of __ FV for the model of a single conformal scalar field with (<ref>) reads <cit.> ^(3)__ FV = -1/32π^2∫ dx √(g){1/19440(2/_3 - _1/_2 _3) R_1 R_2 R_3 + 1/1620 _2_3 C_1^αβ∇_α R_2 ∇_β R_3 +1/540(4/_2 - 1/_3 - 2 _1/_2_3 - _3/_1_2) C_1^μν C_2 μν R_3 +1/135(1/_1_2 - 2/_2_3) ∇^μ C_1^να∇_ν C_2 μαR_3 - 1/135 _1 _2_3∇_α∇_β C_1^μν∇_μ∇_ν C_2^αβ R_3}|_ {x}=x, where C_μν is the “Weyl” part (<ref>) of Ricci tensor (<ref>) §.§ The problem of double poles and global conformal transformations The expression (<ref>) shows that in the cubic order the anomalous effective action is free from double pole nonlocal terms. For the FV action this is obviously true to all orders of the curvature expansion, since all its tree type nonlocalities originate from the Green's function of the conformal scalar operator -16 R. However, for the RFT action double poles formally appear starting from the fourth order in the curvature because the metric variation of _χ=_ RTF in (<ref>) leads to the action of the inverse Paneitz operator upon the square of the Weyl tensor C^2 = C_μναβC^μναβ due to a formal variational rule ∫ d^4x √(g) C^2δ_ RFT = ∫ d^4x √(g) (Δ_4^-1C^2)δ(…). This operation is not well defined, because C^2 is not a total derivative and the repeated action of 1/ upon generic test functions in four dimensions leads to IR divergent integrals—see footnote <ref>. In the cubic order of _ RFT this problem does not arise because of the extra factor in R, as it was checked in <cit.> by explicit calculations of ⟨ TTT ⟩ correlators, but one is not granted to be free from this difficulty for higher order correlators. In fact this is a typical situation of IR divergences in two dimensions, where the kernel of 1/ has a logarithmic dependence at infinity, and the correlators of undifferentiated conformal fields ϕ are UV divergent, while the correlators ⟨∂ϕ(x)∂ϕ(y)⋯⟩ stay well defined. Apparently, the same property in four dimensions also underlies absence of unitarity in dipole theories with 1/^2-type propagators recently discussed in <cit.>. The mechanism of transition from operators to their derivatives in shift symmetric theories actually helps to justify the RFT action as a source of well defined stress tensor correlators and extend the validity of results in <cit.> to all higher orders. This follows from the observation that the Paneitz operator reads √(g)Δ_4 = ∂_μ[√(g)(∇^μ∇_ν + 2R^μν - 2/3Rg^μν)]∂_ν and, therefore, perturbatively on the flat space background can be represented as √(g)Δ_4 = ^2 + V, = δ^μν∂_μ∂_ν, V = ∂_μ V^μν∂_ν, where the perturbation V = O() has a special form—another differential operator V^μν sandwiched between two derivatives with all derivatives acting to the right (which is indicated by the arrow). Within perturbation theory in powers of V the action of the inverse operator on a generic test function ψ—scalar density—could have been understood as the expansion ϕ = 1/√(g)Δ_4ψ = ∑_n=0^∞(-1)^n/^2(V1/^2)^nψ = ∑_n=0^∞(-1)^n/^2(∂_μ V^μν1/^2∂_ν)^nψ, where we deliberately permuted the factors of ∂_ν and 1/^2 using their formal commutativity in order to provide the action of 1/^2 on the total derivative function. Thus all terms of this expansion except the first one become infrared finite. The first term (1/^2)ψ, however, makes this function ϕ ill defined. On the contrary, its derivative ∂_αϕ becomes consistent if one understands the first term of the expansion as (1/^2)∂_αψ, so that the prescription for the operation of ∂_α(1/√(g)Δ_4) on a generic non-derivative type test function reads as ∂_α1/√(g)Δ_4ψ= ∑_n=0^∞(-1)^n/^2∂_α(∂_μ V^μν1/^2∂_ν)^nψ. With this prescription the term C^2_ RFT in the RFT action becomes perturbatively well defined to all orders of expansion. Indeed, this term with _ RFT given by (<ref>) and on account of total derivative structure √(g)(E-23 R)=∂_α E^α can be rewritten by integration by parts as 4∫ d^4x√(g) C^2_ RFT=-∫ d^4x √(g)E^α ∂_α1/√(g)Δ_4(√(g) C^2) with the above prescription (<ref>). This confirms a well defined nature of all multiple point correlators of stress tensor generated by RFT action. Finally, it is worth discussing the effective action behavior under global conformal transformations with σ_0= const. Higher order curvature terms of the effective action scale as negative powers of e^σ_0 and therefore are irrelevant in the IR limit. In <cit.> this was a main argument in favor of a dominant role of the Wess–Zumino action (<ref>) in this limit because Δ[g, σ] behaves linearly in σ_0 (or logarithmically in the distance). Indeed, Δ[g, σ + σ_0] = Δ[g, σ] +σ_0(γ/32π^2∫ d^4x √(g) C^2 + β e'_E), where e'_E is the Euler characteristics of the manifold modulo its boundary contribution (see footnote <ref>). Note, however, that this behavior cannot be captured within the nonlocal RTF form of the anomaly action (<ref>) because it is valid only under Dirichlet boundary conditions for the Green's function of Δ_4 (which would be violated by the σ_0-shift). In other words, the expression (<ref>) lacks the contribution of the zero mode of the Paneitz operator, which on the contrary is explicitly featuring in (<ref>). For compact manifolds with possibly nontrivial topology global Weyl transformations would not contradict boundary conditions, and these transformations will obviously show up in the generalized RFT gauge (<ref>) as an ambiguity of the solution for Eq. (<ref>), →+σ_0. § STRESS TENSOR IN CONFORMALLY RELATED SPACETIMES Equations (<ref>) and (<ref>) show that the anomalous action makes sense as an object specifying the difference of effective actions on conformally related metrics and other fields. Outside of this context this action, being a subject of shifting by an arbitrary conformal invariant functional W^ conf[ g ], as in Eq. (<ref>), is not very instructive because such a shift can include essential physical information on conformally invariant degrees of freedom. Anomaly action _χ, or it would be better to say, the Wess–Zumino type action (<ref>)—the generating functional of _χ—is really useful in situations when the physics of a conformally related spacetime with the metric g̅_μν is fully known. Then the effective action at g_μν can be completely recovered from the knowledge of the Weyl anomaly. The simplest situation belongs to the class of conformally flat spacetimes when g̅_μν can be associated with flat metric for which all the metric field invariants are vanishing and [ g̅ ] is either exactly zero or calculable for quantum matter fields in flat spacetime. In particular, the fundamental observable which can then be obtained is the UV renormalized expectation value of the stress tensor of classically conformally invariant fields, √(g) ⟨ T^αβ⟩=2 δ_ ren/δ g_αβ provided ⟨ T̅^αβ⟩=0 or known from flat space physics. Here we derive from (<ref>) the expression for the difference of (densitized) stress tensors √(g) ⟨ T^α_β⟩ - √(g̅) ⟨ T̅^α_β⟩, which for a conformally flat spacetime coincides with a well-known Brown–Cassidy expression <cit.> and generalizes it to the case of a nonvanishing Weyl tensor. §.§ Conformal anomaly from the divergent part of the effective action To derive the behavior of the renormalized stress tensor on the conformal group orbit we, first, have to trace the origin of conformal anomaly as the result of subtracting UV divergences from covariantly regularized effective action, _ ren=_ reg-_∞. In dimensional regularization, _ reg=^(d), these divergences are given by _∞ = -1/16π^2ϵ∫ d^d x √(g) a_2 = 1/16π^2ϵ∫ d^d x √(g) (α ^(4)C^2+β ^(4)E ), where ϵ = 4-d, ^(4)C^2 and ^(4)E are the four-dimensional invariants formally continued to d-dimensions and a_2 is the relevant second Schwinger–DeWitt coefficient of the corresponding heat kernel expansion for the inverse propagator of the theory <cit.>, a_2 = -(α ^(4)C^2 +β ^(4)E+γ R), ^(4)C^2 = R_μναβ^2 - 2R_μν^2 + 1/3 R^2, ^(4)E = R_μναβ^2 - 4R_μν^2 + R^2. This structure of a_2 follows from the local conformal invariance of the pole residue of _∞ at d=4 and associated with the integrability (or conformal Wess–Zumino) condition for a conformal anomaly. It includes the topological Gauss–Bonnet density √(g)E, Weyl tensor squared and the total derivative R terms. Conformal anomaly arises as a contribution of the conformal transformation of the one-loop counterterm (<ref>) subtracted from the regularized effective action √(g) ⟨ T^α_α⟩=-2g_αβδ_∞/δ g_αβ, because the regularized (but not yet renormalized by counterterm subtracting) action _ reg is assumed to be conformally invariant[Or the Weyl invariance violation of dimensionally regularized _ reg is proportional to (d-4)^2 as it happens for spin one case <cit.>, so that it does not contribute to the residue of the simple pole in dimensionality.]. The R term does not contribute to the divergences but it appears in the conformal anomaly in view of the conformal transformation of the Weyl squared term continued to d dimensions. Moreover, within the above subtraction scheme its coefficient γ in the anomaly turns out to be determined by the coefficient α of the Weyl term <cit.>. Indeed, introduce conformally covariant Weyl tensor in d dimensions ^(d)C_μναβ = R_μναβ +2P_β[μg_ν]α - 2P_α[μg_ν]β, ^(d) C^μ_ναβ = ^(d)C̅^μ_ναβ, which is written down in terms of the Schouten tensor P_μν≡1/d-2(R_μν - Rg_μν/2(d-1)). In view of the relation between the square of Weyl tensors ^(d)C^2≡^(d)C_μναβ^2 and C^2≡^(4)C^2_μναβ (both formally continued to d dimensions) <cit.> ^(4)C^2 = ^(d)C^2 - ϵ/2(E-C^2-19 R^2) + O(ϵ^2) one has δ/δ g_μν∫ d^dx √(g) C^2 = δ/δ g_μν∫ d^dx √(g) ^(d)C^2 +ϵ/2δ/δ g_μν∫ d^4 x √(g) (C^2+19 R^2) + O(ϵ^2). Then, since the tensor ^(d)C_μναβ is conformally covariant in any dimension, g_μν(δ/δ g_μν) ∫ d^dx √(g) ^(d)C^2 = -ϵ2√(g) ^(d)C^2, we have 1/ϵg_μνδ/δ g_μν∫ d^dx √(g)C^2 = -1/2√(g)(C^2+2/3 R) +O(ϵ). Using this in (<ref>) one recovers the C^2 and the R terms in the expression for the anomaly √(g) ⟨ T^α_α⟩ = -1/16π^2√(g) a_2, with the parameter γ related to the coefficient α of the Weyl squared term <cit.> γ=2/3α. This simple expression for the trace anomaly in terms of the second Schwinger–DeWitt coefficient also follows from the zeta-function regularization <cit.>. The Gauss–Bonnet part of the anomaly follows from the conformal variation of the ^(4)E-term in the divergent part of the action. Just like R, as the residue of the pole in _∞ the integral of √(g)^(4)E at least naively does not contribute to the stress tensor, because in four dimensions this integral is a constant Euler characteristics of the manifold. But in a covariant renormalization procedure the coefficient of 1/ϵ in _∞ cannot be treated other than as a d-dimensional object, so that ∫ d^dx√(g)^(4)E is no longer a topological invariant, and its metric variation is nontrivial. Therefore, rewriting, similarly to (<ref>), the dimensionally continued Gauss–Bonnet density in terms of ^(d)C^2, ^(4)E = R_μναβ^2 - 4R_μν^2 + R^2 = ^(d)C^2 - (2-3ϵ)(R_μν^2 - 13 R^2)+O(ϵ^2), one has 1/ϵδ/δ g_αβ∫ d^dx √(g) ^(4)E = -√(g)( 12W^αβ+^(3)H^αβ +2R_μνC^μανβ) + O(ϵ), where the two new tensors arise ^(3)H^αβ = R^αμR^β_μ - 2/3RR^αβ - 1/2g^αβR_μν^2 + 1/4g^αβR^2, W^αβ = lim_ϵ→ 01/ϵ(4 ^(d)C^α_μνλ^(d)C^βμνλ - g^αβ ^(d)C^2 ). The limit to d=4 for the tensor W^αβ is regular here because at d=4 there is the important identity 4 ^(4)C^α_μνλ^(4)C^βμνλ= g^αβ^(4)C^2 —it can be proven by antisymmetrization over five indices in the four-dimensional spacetime <cit.>. Tensors ^(3)H^αβ and W^αβ have the following traces ^(3)H^α_α=13R^2 -R_μν^2=12(E-C^2), W_α^α = C^2. Thus from (<ref>) and (<ref>) we have the relation 2/ϵg_αβδ/δ g_αβ∫ d^dx √(g) ^(4)E = -√(g)^(4)E + O(ϵ), which recovers the contribution of E-term in the conformal anomaly (<ref>) with the expression (<ref>) for a_2. §.§ Minimal form of Wess-Zumino action and a-theorem Of course there is a big ambiguity in the above analytic continuation of the coefficients relating 4-dimensional objects to their d-dimensional counterparts. This ambiguity reduces to the renormalization by finite 4-dimensional counterterms ∫ d^4x√(g) R_μναβ^2, ∫ d^4x√(g) R_μν^2 and ∫ d^4x√(g) R^2 among which in view of the total-derivative nature of the Gauss-Bonnet density only one counterterm can additionally break Weyl invariance and change the coefficient γ of the R term in the conformal anomaly. This is because the combination ∫ d^4x √(g)(C^2-E)=2∫ d^4x √(g)(R_μν^2-13 R^2) is Weyl invariant, and such a counterterm can be chosen as the square of the curvature scalar, satisfying g_μνδ/δ g_μν∫ d^4x√(g) R^2 = -6√(g) R. Therefore this finite local counterterm can be used to alter the coefficient γ and, in particular, put it to zero by a special finite renormalization which we will denote by a subscript Ren, _ ren[ g ]→_ Ren[ g ] ≡_ ren[ g ] + γ/192π^2∫ d^4x √(g) R^2. Regularization and subtraction scheme dependence of γ-coefficient manifests itself in the violation of the relation (<ref>) for the dimensionally regularized electromagnetic vector field <cit.>, but ultimately does not change the physics of the theory because of the locality of the covariant counterterm ∫ d^4x√(g) R^2, whose subtraction point should be determined from the comparison with the observable value of its coupling constant. In the cosmological example considered below the above renormalization (<ref>) corresponds to fixing the coupling constant in the Starobinsky R^2-model <cit.>. The renormalization (<ref>) has an important consequence – with γ=0 the terms with quartic derivatives of σ, contained in the combination β16π^2∫ d^4x (4√(g̅) σΔ̅_4σ-1/9√(g) R^2) of (<ref>), completely cancel out, and the resulting minimal Wess-Zumino action does not acquire extra hihger-derivative degrees of freedom, _ Ren[ g ]-_ Ren[ g̅ ] =α/16π^2∫ d^4x √(g̅) C̅_μναβ^2σ +β/16π^2∫ d^4x √(g̅) {E̅ σ-4 (R̅^μν -12g̅^μνR̅ ) ∂_μσ ∂_νσ -4 σ (∇̅^μσ ∇̅_μσ) -2 (∇̅^μσ ∇̅_μσ)^2}. This minimal version of the action for the dilaton field σ was discussed in <cit.> and used in the derivation of the a-theorem in <cit.> – monotonically decreasing coefficient a=β/16π^2 in the RG flow of the theory from UV to IR domains. This theorem is based on the sign of the last quartic interaction term for this field, related to the cross section of the forward 2→ 2 dilaton scattering which should be positive in unitary theory, its unitarity being related to the absence of higher-derivative ghosts in (<ref>). §.§ Renormalized stress tensors The behavior of the stress tensor on the orbit of the conformal group can be obtained by using the commutativity of the following functional variations [ g_μν(y)δ/δ g_μν(y), g_βγ(x)δ/δ g_αγ(x)]=0, which allows one to write δ/δσ(y)√(g)⟨ T^α_β(x) ⟩ = 2g_βγ(x)δ/δ g_αγ(x)δ_ ren/δσ(y)|_g_μν = e^2σg̅_μν = g_βγ(x)δ/δ g_αγ(x)√(g)(y)⟨ T^μ_μ(y)⟩|_g_μν = e^2σg̅_μν. Bearing in mind that g_βγδ/δ g_αγ= g̅_βγδ/δg̅_αγ at fixed σ and functionally integrating this relation over σ one has √(g) ⟨ T^α_β⟩ - √(g̅) ⟨T̅^α_β⟩ = 2g̅_βγδ/δg̅_αγΔ[ g̅,σ ], where Δ[ g̅,σ ] = _ ren-_ ren is given by (<ref>). Before calculating this difference by the metric variation of Δ[g̅, σ] it is instructive to obtain it directly from the divergent part of the action as it was done in <cit.>. Note that _ ren-_ ren = -(_∞-_∞) because _ reg does not contribute to the anomaly (see footnote <ref>). Therefore, √(g) ⟨ T^α_β⟩ |_ g̅^ g = -2 g_βγδ_∞/δ g_αγ |_ g̅^ g To calculate the contribution of the ^(4)C^2-term in _∞ we rewrite it in terms of ^(d)C^2 and use Eq. (<ref>). This leads to the contribution of the first term of this equation δ/δ g_μν∫ d^dx √(g) ^(d)C^2 = -ϵ/2√(g) W^μν-4√(g) ^(d)B^μν, ^(d)B^μν=(1/d-2R_αβ +∇_(α∇_β))C^μανβ, where the tensor W^μν is defined by Eq. (<ref>) and ^(d)B^μν is the d-dimensional Bach tensor. Assembling this with the second term of Eq. (<ref>) we get on the orbit of the conformal group 1/ϵg_βγδ/δ g_αγ∫ d^dx √(g) ^(4)C^2 |_ g̅^ g = -√(g)[ 4/ϵ^(d)B^α_β +1/18^(1)H^α_β ]_ g̅^ g + O(ϵ), where the tensor ^(1)H^α_β is given by the equation ^(1)H^α_β = 1/√(g)g^αγδ/δ g^βγ∫ d^4x √(g)R^2 = -1/2δ^α_β R^2 + 2RR^α_β +2δ^α_β R - 2∇^α∇_β R, and we took into account that the both tensor densities √(g) W^α_β and √(g) B^α_β in four dimensions are invariant on the conformal orbit. Outside of four dimensions the Bach tensor density transforms on this orbit as (here as above g_μν=e^2σg̅_μν) √(g) ^(d)B^α_β |_g̅^g = -ϵ/2√(g̅)(R̅^μν+2∇̅^(μ∇̅^ν))(σC̅^α_μβν) +O(ϵ^2), which makes the first term on the right hand side of (<ref>) well defined at d→ 4. Note that the expression √(g̅)(R̅^μν+2∇̅^(μ∇̅^ν))(σC̅^α_μβν) treated as a functional of independent g̅_μν and σ is Weyl invariant under local conformal transformations of the barred metric. This can be easily inferred from the invariance of Eq.(<ref>) under the interchange g_μν↔g̅_μν and σ→ -σ or directly checking the conformal transformation of g̅_μν (with a fixed scalar σ). The contribution of Gauss–Bonnet term to the stress tensor behavior on the conformal orbit is obtained from using (<ref>)–(<ref>). Collecting this contribution with the contribution (<ref>) of the Weyl tensor squared part we finally have √(g) ⟨ T^α_β⟩|_ g̅^ g = -α/4π^2√(g̅) (R̅^μν +2∇̅^(μ∇̅^ν))(σC̅^α_μβν) +1/8π^2√(g) [ β ^(3)H^α_β +α/18 ^(1)H^α_β+2β R^μνC^α_μβν ]_ g̅^ g. This is a generalization of the Brown–Cassidy formula to the case of a nonzero Weyl tensor. The first term of this expression is Weyl invariant in view of the above remark and can be represented by its unbarred version. The check of consistency of this formula with the original expression for the conformal anomaly is trivial in view of ^(3)H^α_α=(E-C^2)/2, ^(1)H^α_α=6 R and tracelessness of the Weyl tensor, √(g) ⟨ T^α_α⟩|_ g̅^ g = √(g)/16π^2[β E-β C^2+2α3 R]_g̅^g = -√(g) a_2/16π^2|_ g̅^ g, where the last equality follows from the conformal invariance of the density √(g) C^2 and from the relation (<ref>) between the coefficients γ and α, α=32γ. The recovery of (<ref>) from the direct variation of the Wess–Zumino action (<ref>) goes as follows. We use metric variational formulae δ/δ g_αβ∫ d^4x √(g) C^2σ = -2√(g)(R_μν+2∇_(μ∇_ν)) (σ C^αμβν), δ/δ g_αβ∫ d^4x √(g) _4σ = √(g) Δ^αβσ, δ/δ g_αβ∫ d^4x √(g) φΔ_4σ=-√(g)/2 D^αβ[φ,σ], which hold for generic scalar test functions σ and φ with the differential operator Δ^αβ acting on σ, Δ_αβ = 1/3(g_αβ-∇_α∇_β) +[ 2(g_αβP_μν - g_αμP_βν - g_ανP_βμ)+ 8/3g_μνP_αβ. .+ 2Pg_αμg_βν - 5/3Pg_αβg_μν - 2W_αμβν]∇^μ∇^ν + ( g_αβg_μν - g_αμg_βν - g_ανg_βμ) (∇^μ P)∇^ν, and the bilinear form D^αβ(φ,σ), D_αβ[φ, σ] = -1/2g_αβφ σ - 2σ_αβφ + 2σ_αφ_β - 1/3 g_αβσ_μφ^μ - 2/3φ_μ(αβ)σ^μ + [ 2W_αμβν + 1/3(g_μνR_αβ - g_αμg_βνR)]φ^(μσ^ν) + 1/3( 4φ_αμσ^μ_β - g_αβφ_μνσ^μν) + ( φ⇔σ ), where φ_α≡∇_αφ, σ_αβ≡∇_β∇_ασ, φ_αβγ≡∇_γ∇_β∇_αφ, etc. Note that the trace of Δ^αβ coincides with the Paneitz operator, g_αβΔ^αβ=Δ_4, which matches with the conformal variation (<ref>), and the bilinear form D^αβ(φ, σ) is traceless in view of conformal invariance of √(g)Δ_4. Using these relations we get from (<ref>) and (<ref>) √(g) ⟨ T^α_β⟩ |_ g̅^ g = -α/4π^2√(g)(R^μν +2∇^(μ∇^ν))(σ C^α_μβν) +√(g)/8π^2(2βΔ^α_βσ +β D^α_β[σ,σ]) + √(g)(γ12 +β18) ^(1)H^α_β |_ g̅^ g. The term in the first line here coincides with its barred version in (<ref>)—this easily follows from the relation (<ref>) where the integrand can be identically replaced by the barred one. The γ/12^(1)H^α_β term here matches with the α/18^(1)H^α_β term of (<ref>) in view of the relation α=32γ. And finally, the identity holds √(g) [^(3)H^α_β +118^(1)H^α_β +2 R^μνC^α_μβν]_ g̅^ g =√(g)( 2Δ^α_βσ + D^α_β[σ,σ]), which completely reconciles the two expressions (<ref>) and (<ref>) for the stress tensor behavior on the orbit of the conformal group. § CONFORMALLY FLAT SPACETIME The generalization (<ref>) of Brown-Cassidy formula to the case of a nonvanishing Weyl tensor might be not very useful, because in the general case not much can be said about ⟨ T^α_β⟩ |_g̅. Therefore we will restrict ourselves with the case of the conformally flat spacetime for which the conformal transformation of the metric can lead to the metric g̅_μν of flat spacetime, where ⟨ T̅^α_β⟩ is either zero or can be obtained from flat space physics. Interestingly, in this case the parameter of the conformal transformation σ making this transition satisfies the equation Δ_4 σ = 1/4_4 and in asymptotically flat case with Dirichle boundary conditions has a unique solution (<ref>), σ=_ RFT. This, apparently not very well known fact, can be proven by using the equation for the conformal transformation of the four-dimensional Schouten tensor (<ref>) (g_μν=e^2σg̅_μν) P_μν-P̅_μν = -σ_μν -σ_μσ_ν + 1/2σ_ασ^α g_μν, where σ_μ≡∇_μσ and σ_μν≡∇_ν∇_μσ. Assuming that g̅_μν is flat space metric with P̅_μν=0, differentiating twice and again using this relation to express P_μν in terms of the derivatives of σ one has ∇^μ∇^ν(P_μν+σ_μν +σ_μσ_ν- 1/2σ_ασ^α g_μν) =Δ_4σ-1/4_4 = 0, whence it follows that the conformal invariant metric (<ref>) in the RFT gauge (<ref>) is actually the flat space one when the Weyl tensor is zero R̅^α_ βμν=0, g̅_μν=e^-2_ RFT[ g ]g_μν|_ C_αβμν=0. Note that g̅_μν here is not automatically diagonal unit matrix δ_μν, because this is the invariant statement which is valid in any coordinate system. §.§ Anomaly driven cosmology Applications of the conformal anomaly in the cosmological context have a long history, see for example <cit.>. In particular, cosmology with the Friedman–Robertson–Walker (FRW) metric represents the situation when the anomalous action Δ[g̅,σ] entirely determines the physics of the field model and via effective equations of motion produces a nontrivial back reaction of quantum matter on the dynamical metric background. The most interesting example is, perhaps, the case when [ g̅ ] in (<ref>) nontrivially contributes to this back reaction effect rather than just serves as an inert flat space background. This is the spatially closed cosmology driven by a conformal field theory (CFT) from the initial state in the form of a special microcanonical density matrix, which was orginally suggested in <cit.> and recently reviewed in <cit.>. With the density matrix defined as the projector on the space of solutions of the Wheeler–DeWitt equations <cit.> the statistical sum in this model has a representation of the Euclidean quantum gravity (EQG) path integral Z = ∫ D[ g_μν,ϕ ] e^-S[ g_μν,ϕ ], where integration runs over the metric g_μν and matter fields ϕ which are periodic on the Euclidean spacetime of topology S^1× S^3 with the time τ compactified to a circle S^1. When the classical action S[ g_μν,ϕ ] is dominated by numerous CFT fields with their action S_CFT[ g_μν, ], the statistical sum can be approximated by the contribution of the saddle point of this integral. This is the extremum of the total action including the tree-level gravitational Einstein–Hilbert action S_EH[ g_μν] and the effective action [ g_μν] of these CFT fields[Disregarding the graviton loops can be justified by the domination of conformal fields outnumbering the metric, and retaining the Einstein–Hilbert term obviously follows from the fact that this term with renormalized gravitational and cosmological constants is anyway induced from the quantum conformal sector.], _ tot[ g_μν] = S_EH[ g_μν] +[ g_μν], e^-[ g_μν] = ∫ D e^-S_CFT[ g_μν, ]. Choosing as g_μν the FRW metric with the scale factor a(τ) and the lapse function N (^2_(3) is the metric of the 3-dimensional sphere of a unit radius), ds^2=N^2dτ^2+a^2d^2_(3)= a^2(τ)(dη^2+d^2_(3)), one immediately finds that in terms of the conformal time variable η, related to the Euclidean time τ by the relation dη=dτ/a(τ), this metric is conformally equivalent to the metric g̅_μν≡ g_μν^ EU of the Einstein static universe with spatial sections—the 3-dimensional spheres of some constant radius a_0, ds̅^2=a_0^2 (dη^2+d^2_(3))≡ g_μν^ EUdx^μ dx^ν, ds^2=e^2σ ds̅^2, g_μν=e^2σ g_μν^ EU, σ=lna/a_0. Therefore the CFT effective action expresses in terms of the same action on a static Einstein universe [ g_μν^ EU]≡_ EU and Wess–Zumino action (<ref>) with the above conformal parameter σ [ g_μν]= Δ[ g_μν^ EU,σ ] +_EU. The calculation of _EU is strongly facilitated by the static nature of the background, but it still yields a nontrivial result in view of compactification of time on S^1. To begin with, note that although g_μν^ EU explicitly depends on the size a_0 of S^3, the value of _EU is a_0-independent for a fixed period of the conformal time η=∮ dη. This follows from the invariance of the effective action under global conformal transformations (<ref>) for conformally flat spacetimes with zero bulk part of the Euler characteristics (which is the case of S^1× S^3). This also can be confirmed by using scaling properties of the conformal fields. Indeed, the energies of conformal quanta on a static spacetime scale as 1/a_0 and their Hamiltonian reads, Ĥ=∑_ωω/a_0(â^†_ωâ_ω±1/2), where summation runs over all quantum numbers (and spins) of the energies ω/a_0 of all field oscillator modes on a static 3-dimensional sphere of the radius a_0 and â^†_ω and â_ω are the relevant creation-annihilation operators (± signs correspond to bosons or fermions). The path integral over (anti)periodic conformal (fermion) boson fields with a period T=∮ dτ N on a static metric background is exactly calculable and equals the equilibrium statistical sum at the temperature 1/ T which expresses as a function of the conformal time period η= T/a_0 e^-_ EU=∫ D e^-S_CFT[ g_μν^ EU, ] = Tr e^- TĤ =exp(-η E_ vac-F(η)). Here F(η) is the free energy of the gas of conformal particles and E_ vac is a UV divergent Casimir energy which should be covariantly renormalized F(η) = ∑_ω[± ln(1∓ e^-ωη) ], E_ vac = (∑_ω± ω/2)_ ren. Thus, the dependence on a_0 is absorbed into the dependence on η which should be fixed under the rescaling of a_0. Note that it is η that should be kept fixed under the global conformal transformation which simultaneously rescales the lapse function N and a_0 in the definition of the conformally invariant η=∮ dτ N/a_0. Remarkably, the covariant renormalization of the vacuum Casimir energy E_ vac also follows from the behavior of the effective action on the orbit of the conformal group. The Einstein universe extending from -∞ to +∞ in η is mapped to flat space by the transition to the radial coordinate ρ η↦ρ = a_0 e^η, -∞<η<+∞, 0≤ρ<∞, with the conformal relation between the two metrics ds^2_EU=e^2σ ds_ flat^2, σ = -η=lna_0/ρ, ds_ flat^2=dρ^2+ρ^2 d^2_(3). For the vacuum state (the limit η→∞ and F(η)→ 0 in Eq. (<ref>)) _ EU→ E_ vacη. On the other hand, from Eq. (<ref>) with the above expression for σ Δ[ g_ flat,σ ]=β/8π^2∫ d^4x√(g_ flat)(_ flatσ)^2 -1/32π^2(γ/6 + β/9) ∫ d^4x√(g_EU)R^2_EU. Bearing in mind that _ flatσ=-2/ρ^2, ∫ d^4x√(g_ flat)↦ 2π^2∫ dρ ρ^3, R_EU=6/a_0^2 and ∫ d^4x√(g_EU)↦ 2π^2a_0^4∫ dη, one has _ EU-_ flat=Δ[ g_ flat,σ ] =β∫dρ/ρ- (3/8γ + β/4)∫ dη =3/4 (β-γ/2)∫ dη. Therefore, under an obvious assumption that _ flat=0 one has E_ vac=3/4 (β-γ/2). In other words, after covariant renormalization by covariant counterterms the Casimir energy gets the value compatible with the behavior of the renormalized effective action on the conformal group orbit (or with the Brown–Cassidy formula for the vacuum stress tensor). This compatibility was indeed checked by direct renormalization of the UV divergent sum over field modes in (<ref>) <cit.>. Let us now turn to the contribution of the conformal transformation from the generic FRW metric to that of the static Einstein universe in (<ref>). To begin with we use the freedom of finite renormalization (<ref>) which reduces the theory to the case of anomaly (<ref>) with γ=0 and, in particular, renders E_ vac=34β. In the cosmological context this freedom corresponds to the adjustment of the coupling constant of the Starobinsky R^2-action <cit.> which plays an important role in inflation theory and the dark energy model. Then, with γ=0 and σ given by (<ref>) the Wess–Zumino term in (<ref>) takes the form <cit.> _ Ren[ g ]-_ Ren[ g_ EU] = 3β/2∮ dτ N (a'^2/a - a'^4/6 a), when written down in terms of the original FRW coordinates with the notation for the invariant time derivative a'=da/Ndτ. Note that the result is again independent of the constant a_0 because it contains only differentiated σ and, moreover, it does not involve higher order derivatives of a(τ). The last property is entirely due to the fact of γ being renormalized to zero and due to the cancellation of higher derivative terms in the minimal form of Wess-Zumino action (<ref>). Now we assemble together the Einstein-Hilbert action (with the reduced Planck mass M_ P=1/√(8π G) and the cosmological constant ), the action on the Einstein universe space (<ref>) and (<ref>). This leads to the total effective action on the generic Euclidean FRW background periodic in Euclidean time with the period η measured in units of the conformal time _ tot[ a,N ] = 6π^2 M_P^2∮ dτ N {-aa'^2 -a+/3 a^3 +β/4π^2 M_P^2(a'^2/a -a'^4/6 a +1/2a)} + F(η), η=∮dτ N/a. Here the contribution of the conformal anomaly and Casimir energy (<ref>) (with γ=0) are both weighted by the parameter β of the topological term in the conformal anomaly. The free energy of the gas of conformal particles F(η) is a function of the effective (“comoving”) temperature of this gas – the inverse of the circumference η of the cosmological instanton (<ref>). Despite essentially non-stationary metric background this gas stays in equilibrium state because of scaling properties of its particles and produces back reaction on the Friedmann metric background. Applications of the action (<ref>) have been considered in the number of papers <cit.> and recently reviewed in <cit.>. Physics of the CFT driven cosmology is entirely determined by this effective action and the effective (Euclidean) Friedmann equation. The latter follows from the action by varying the lapse N(τ) and expressing the Hubble factor in terms of the energy density. In cosmic type gauge N=1, ȧ=da/dτ, it reads 1/a^2-ȧ^2/a^2=ε/3M_±^2(ε), ε=M_P^2 +1/2π^2 a^4∑_ωω/e^ηω-1, M_±^2(ε)=M_P^2/2(1±√(1 -βε/6π^2M_P^4) ), where the total energy density ε includes the cosmological constant contribution and the radiation density of conformal field modes distributed over Planckian spectrum with the comoving temperature 1/η. The nonlinear effect of the Weyl anomaly manifests itself in the effective Planck mass squared explicitly depending on ε which takes two possible values M_±^2(ε).[To avoid mixup of the signs in M_±^2 and sign factors associated with the statistics of conformal ω-modes we present here the radiation spectrum only for bosonic case.] These equations should be amended by the expression for the conformal time period that interpolates between the turning points of the solution with ȧ(τ)=0. Note that the right hand side of the Friedmann equation does not contain Casimir energy density – it turns out to be fully screened due to the dynamical effect of the Weyl anomaly. This is the result of the finite renormalization (<ref>) leading to a particular value of the anomaly coefficient of R, γ=0. For the choice of + sign in M_±^2 the solutions of this quantum Friedmann equation turn out to be the so-called garlands – the cosmological instantons of S^1× S^3 topology, which have the periodic scale factor a(τ) oscillating on S^1 between maximal and minimal values a_± <cit.>. These instantons serve as initial conditions for the cosmological evolution in the physical Lorentzian spacetime. This evolution follows from a(τ) by the analytic continuation a_L(t)=a(τ_++it), (da_L/dt)^2=-ȧ^2, to the complex plane of the Euclidean time at the turning point with the maximal scale factor a_+=a(τ_+). It can incorporate a finite inflationary stage if the model is generalized to the case when a primordial cosmological constant is replaced by the potential of the inflaton field ϕ, → V(ϕ)/M_P^2, staying in the slow-roll regime during the inflationary stage[Alternatively, the role of inflaton can be played by Ricci curvature in the Starobinsky R^2-model, the coupling of the R^2 term being subject to the renormalization respecting the zero value of α in the total Weyl anomaly <cit.>.] and decaying in the end of inflation by a usual exit scenario <cit.>. The energy scale of inflation – its Hubble parameter H∼√(/3) turns out to be bounded from above by √(2)π M_P/√(β), so that to solve the problem of hierarchy between the Planck and inflation scales one needs β≫ 1 which matches with the previously adopted assumption that numerous conformal fields drastically outnumber all other fields and dominate over their loop corrections. For the negative sign in M_±^2 the solutions represent vacuum S^4-instantons of the no-boundary type with the vanishing minimal value of the scale factor a_-=0. They correspond to the diverging η∼∫_0^a_+da/aȧ→∞ or zero temperature. These solutions, however, do not contribute to the statistical sum because of their infinitely positive action _ tot→+∞ — the quantum effect of the trace anomaly which flips the sign of the negative tree-level action of the Hartle-Hawking instantons <cit.> and sends it to +∞ <cit.>. Thus the CFT cosmology scenario is free from the infrared catastrophe of the no-boundary quantum state which would imply that the origin of an infinitely big Universe is infinitely more probable than that of a finite one. § RENORMALIZATION GROUP AND THE METAMORPHOSIS OF THE RUNNING SCALE This section has essentially discussion nature and is associated with the covariant perturbation theory of the above type. One of the motivations for this discussion is that, in spite of a widespread concept of running cosmological and gravitational constants, which is especially popular within the asymptotic safety approach, there is a very profound and persuading criticism of this concept <cit.>. It is based on numerous arguments of the tadpole structure of the cosmological and Einstein terms, on concrete results for graviton scattering amplitudes <cit.> which cannot be interpreted in terms of a universal scaling of and G, etc. At the same time in renormalizable gravity models with multiple couplings the solution of the full set of RG equations includes running cosmological and gravitational constants <cit.>. So the question arises how to interpret their running scale. Here is the attempt to do this in terms of the covariant curvature expansion developed in <cit.>. We start with the classical action which is the sum of local curvature invariants of growing dimensionality (4+m) in units of the mass S[ g_μν]=∑_m,N^(m)_N∫ d^4x √(g) ^(4+m)_N(x). They are monomials of N-th order in curvature tensors which are acted upon by covariant derivatives ^(m)_N(x)=∇...∇_m-2N(x)...(x)^N, dim ^(m)_N(x) ≡[ ^(m)_N(x) ]=m. The curvature monomials enter the action with coupling constants ^(m)_N of the decreasing (with growing m) dimensionality [ ^(m)_N ]=d-m, m=0,1, … . Summation in (<ref>) can run over finite set of terms providing the renormalizability of the theory, or formally extended to the infinite set in the framework of generalized RG theory with infinite set of couplings {}=^(m)_N. Within covariant perturbation theory the full metric is decomposed as a sum of the flat spacetime metric g̃_μν and the perturbation h_μν g_μν=g̃_μν+h_μν, so that each curvature invariant becomes expanded as an infinite series in powers of h_μν forming a new set of h-monomials on the flat space background ∫ d^4x √(g) ^(m)_N=∑_M=N^∞∫ d^4x √(g̃) I_M^(m)(h), I_M^(m)(h)∝∇̃...∇̃_mh(x)...h(x)^M. Then in the notations of the covariant perturbation theory the calculation of the renormalized effective action leads to the same sequence of monomials acted upon by the operator form factors _n^(i)({}, ∇̃_1,...∇̃_1) which make them nonlocal, {} denoting the full set of couplings (<ref>). Within dimensional regularization these renormalized coupling constants get rescaled by the normalization parameter μ and expressed in terms of their dimensionless analogues λ^(m)_N(μ) ^(m)_N=μ^d-mλ^(m)_N(μ), and the perturbation theory form factors also express as the functions of dimensionless arguments _M^(m)({},∇̃_1,...∇̃_M)= μ^d-mγ_M^(m)({λ(μ)},∇̃_1μ,... ∇̃_Mμ) Correspondingly the effective action becomes [ g_μν]=∑_(m)μ^d-m∑_M=0^∞∫ d^dx √(g̃) ×γ_M^(m)({λ(μ)},∇̃_1μ,... ∇̃_Mμ) I_M^(m)(h_1,h_2,...h_M) |_ {x}=x, where I_M^(m)(h_1,h_2,...h_M) is the analogue of the invariant (<ref>) with split spacetime arguments. A typical assumption of the RG theory that the renormalized action is independent of the running scale then leads to the set of equations for λ^(m)_N(μ) with the beta functions following from the residues of spacetime dimension poles in the formfactors _M^(m)({λ(μ)},{∇̃/μ}), μd/dμ[ g_μν]=0 →μd/dμλ^(m)_N(μ)=β^(m)_N(μ) ({λ(μ)}). A critical step now consists in the choice of the running scale which could probe the high energy limit of the theory and embrace a simultaneous scaling of all formfactors and invariant monomials of (<ref>). Then the replacement of the parameter μ by this scale will identically bring the effective action to the form explicitly revealing its UV limit. The choice of this scaling object can be very different depending on the concrete physical setup. If the theory has a dimensional scalar field ϕ with a nonvanishing and slowly varying mean value it would be natural to identify RG normalization μ with ϕ. This would lead to the nontrivially “running” in ϕ of the cosmological and Einstein terms, →(ϕ) and G→ G(ϕ), (amended of course by a gradient expansion series in derivatives of ϕ), but of course these terms acquire the interpretation of the Coleman-Weinberg type potential and nonminimal coupling of ϕ to the scalar curvature. We, however, are interested in the UV scaling of all derivatives ∇̃→∞, which in momentum space representation of scattering amplitudes is conventionally represented by the high energy Mandelstam invariants or some other combinations of external momenta. In the coordinate representation of the covariant perturbation theory of <cit.> the role of this scale should be played by some operator. So we suggest as a candidate for this object the following nonlocal operator D̃ which also formally tends to infinity in the limit of ∇̃→∞ and in fact embraces a simultaneous scaling of all invariant monomials in (<ref>), D̃≡(-∑_N=1^∞_N)^1/2, _N≡g̃^μν∇̃_μ∇̃_ν. Though being very formal, this operator is well defined in each N-th monomial order because it becomes truncated to the finite sum when acting on the monomial of N perturbations h_1,...h_N, and for N=0 it is just zero because of its action on an independent of x constant, D̃_N≡(-∑_M=1^N _M)^1/2, D̃_0=0. In the UV domain ∇̃_n→∞, when ∇̃_n/D̃_N=O(1), n≤ N, the formfactors in each N-th order become after the replacement μ→D̃ the functions of a single operator variable D̃_N, μ^4-mγ_N^(m)(λ(μ) | ∇̃_1μ,... ∇̃_Nμ) |_ μ→D̃_N → (D̃_N)^4-mγ_N^(m)(λ(D̃_N) | O(1))≡ (D̃_N)^4-mλ_N^(m)(D̃_N), and the expansion of the formally independent of μ action takes the form [ g_μν] |_ μ→D̃→ ∑_m∑_N=0^∞∫ d^4x √(g̃) × (D̃_N)^4-mλ_N^(m)(D̃_N) I_N^(m)(h_1,h_2,...h_N) |_ {x}=x. The next step consists in the recovery of the covariant form of the expansion in terms of the original spacetime curvature. Curiously, despite the fact that the covariant perturbation theory of <cit.> is rather often being referred to in literature, subtle details of this step are usually disregarded which leads to confusing statements on the ambiguity of this procedure, dependence on the gauge by which the metric perturbation h_μν is related to the curvature <cit.>, etc. At the same time, this procedure is unique, provided that one does not treat g̃_μν and ∇̃_μ as Cartesian δ_μν and ∂_μ, but rather proceeds in generic coordinate system and uses the only invariant statements that the curvature of the tilded metric is vanishing R̃^α_ βμν=0. This is the covariant equation for g̃_μν in terms of the curved metric g_μν and its curvature R^α_ βμν, whose solution exists as perturbation expansion in R^α_ βμν and also requires imposing the gauge <cit.>. But the result of substituting this solution back into manifestly noncovariant (double field) series (<ref>) is gauge independent because of the implicit invariance of the left hand side of (<ref>). In the convenient DeWitt type gauge ∇̃^ν h_μν-12∇_μ h=O[ h^2], h≡g̃^αβh_αβ, the solution for h_μν and ∇̃_μ in terms of g_μν and ∇_μ reads in the lowest order as <cit.> h_μν=-2/R_μν +O[ ^2], ∇̃_μ=∇_μ+O[ ]. Using this in (<ref>) we get the replacement of h-monomials by the covariant curvature monomials along with the replacement of D̃_N by D_N, I_N^(m)(h_1,h_2,...h_N)→ 1/_1..._N^(m+2N)_N(x_1,...x_N)+O[ ^N+1], D̃_N→ D_N+O[ ], where D_N is obviously defined by (<ref>) in terms of full-fledged covariant d'Alembertians =g^μν∇_μ∇_ν, and we reabsorb the coefficient (-2)^n into the symbolic definition of the N-th order covariant monomial – the analogue of the local ^(m)_N(x), see Eq. (<ref>), with split N spacetime arguments ^(m)_N(x_1,...x_N)= ∇...∇_m-2N(x_1)...(x_N), N≥ 1. For N=0 this monomial can be defined as an irrelevant constant bringing no contribution in the UV limit. Thus the UV limit of the effective action takes the form [ g_μν]→∫ d^4x √(g)∑_m,N≥ 0^∞λ_N^(m)(D_N)(D_N)^4-m/_1..._N × ^(m+2N)_N(x_1,⋯ x_N) |_ {x}=x, where we remind that the dimensionless formfactors λ_N^(m)(D_N) follow from the running RG couplings of the theory λ_N^(m)(μ) by the replacement of μ with the operator D_N. Let us consider application of this result to the cosmological constant sector involving the metric invariants of dimensionality m=0 and ^(4)_0=/16π G. This classical cosmological term gives rise to the infinite set of zero dimension invariants ∫ d^4x √(g)=∑_n=0^∞∫ d^4x √(g̃) I_n^(0)(g̃,h), I_0^(0)(g̃,h)=1, I_1^(0)(g̃,h)=-12h, I_2^(0)(g̃,h)=14h^2-12 h_μν^2, … (indices are contracted by the flat metric and h=g̃^μνh_μν), whereas at the quantum level they generate the sequence of high energy m=0 structures of (<ref>) ∫ d^4x√(g)∑_N=2^∞λ_N^(0)(D_N) (D_N)^4/_1..._N ^(2N)_N(x_1,... x_N)|_{x}=x, where the zeroth order term is zero in view of D_0=0 (see Eq.(<ref>) and the first order term is also absent due to its tadpole (total derivative) nature – remember that D_1=(-_1)^1/2 and D_1^4/_1=_1 is acting on ^(2)_1(x_1).[Important caveat is necessary here concerning the annihilation of the total derivative terms. The surface terms at infinity should be vanishing, which is equivalent to a good IR behavior of the nonlocal form factor λ_1^(0)(D_1) at → 0. We will assume this property basing on the maximum logarithmic singularity of λ_1^(0)(D_1) which is a function of log(-) solving the RG equation. The same also applies to integrations by parts considered in what follows. Otherwise, the procedure of subtracting the boundary terms, like the Gibbons-Hawking surface action at asymptotically flat infinity, will be needed, which we briefly discuss below.] The expansion starts at N=2 with the term which has the following structure 4∑∫ d^4x √(g) ^(2)(x) λ_2^(0)(√(-2)) ^(2)(x) =∫ d^4x√(g)(R_μνF_1()R^μν+ RF_2()R)+O[ ^3]. Here we took into account that the set of invariants ^(4)_2(x_1,x_2) can be represented as a sum of terms factored out into the products of Ricci tensors and Ricci scalars with some coefficients[Bilinear in Riemann curvature terms under the integration sign also reduce to bilinear combinations of R_μν and R by using the expression for Riemann tensor in terms of the Ricci one <cit.>, see footnote <ref>.] a and b, ^(4)_2(x_1,x_2)=aR_μν(x_1) R^μν(x_2)+b R(x_1)R(x_2), and also used an obvious corollary of integration by parts ∫ d^4x√(g) F(_1,_2)(x_1)(x_2) |_ {x}=x =∫ d^4x√(g) (x)F(,)(x). Remarkable feature of the expression (<ref>) is that the power-law operator factors in (D_N)^4/_1..._N at N=2 completely cancelled out to give the dimensionless formfactors F_1() and F_2() which originate as linear combinations of relevant running λ_2^(0)(√(-2)) obtained by solving the RG equation. Even more remarkable is the fact that this is a nonlocal term which is quadratic in the curvature even though it has originated from the sector of cosmological term expanded in the series of zero dimension invariants. This is what can be called as metamorphosis to high-energy partners of the cosmological constant suggested by J.Donoghue in <cit.>. Their structure is a direct corollary of the dimensionality arguments within RG approach. The arising form factors of the curvature squared terms are the descendants of RG running couplings of the zero dimension invariants which participate in the decomposition of the cosmological constant term. In fact, the same structure (<ref>) gets reproduced for the contribution of any dimension m in the expansion (<ref>). For even dimensionality[For the set of 2-dimensional curvatures only even dimensions m enter the expansion (<ref>), but this can always be generalized to the case of odd-dimensional “curvatures”, like for example the extrinsic curvature in Hořava gravity models.], m→ 2m, this can be easily demonstrated by decomposing any (2m+4)-dimensional quadratic invariant as this was done above ^(2m+4)_2(x_1,x_2)=∑_m_1+m_2=2m^(m_1+2)_1(x_1)^(m_2+2)_1(x_2). Using this in (<ref>) one has complete cancellation of the dimensional factor (D_2)^4-2m/^2∼^-m in the expression ∫ d^4x √(g)∑_m_1+m_2=2m^(m_1+2)_1(x) ×λ_2^(2m)(D_2)(D_2)^4-m_1-m_2/^2 ^(m_2+2)_1(x) =∫ d^4x√(g)(R_μνF_1()R^μν+ RF_2()R)+O[ ^3]. Noting that with ^(m+2)_1=∇⋯∇^m^(2)_1 this follows from integration by parts and the use of various corollaries of contracted Bianchi identity (∇^ν R_μν=12∇_μ R, etc.), ∫ d^4x √(g)∑_m_1+m_2=2m∇⋯∇_m_1(x)F()∇⋯∇_m_2(x) =∫ d^4x √(g)(R_μν ^m F_1()R^μν+ R ^m F_2() R) +O[ ^3]. Here the operators F_1() and F_2() have the same dimension as F() and originate from F() by the algebra of contracting the indices of covariant derivatives. Using this relation in the left hand side of (<ref>) one gets the right hand side with completely cancelled powers of . Thus, Eq.(<ref>) with m=2 implies the conversion of the gravitational coupling constant into the dimensionless formfactors of the Einstein term partners. These partners have the same structure as the cosmological term partners quadratic in curvatures. This is again the metamorphosis of RG running of the form 1/16π G(μ)=μ^2λ^(2)_2(μ)→ F_1,2(). Note that all this takes place in the UV limit where all curvatures in their monomials are rapidly varying in spacetime with their derivatives ∇→∞. At intermediate energies, when the mass scale M surfaces up, the scaling (<ref>) ceases to make sense and roughly should be replaced with D∼ M, and instead of (<ref>) one gets exactly the cosmological constant partners of Donoghue <cit.> which have the structure of M^4∫ d^4x √(g)(R_μνF_1^ part()/^2R^μν+ R F_2^ part()/^2R). The dimensionless form factors F^ part_1,2() here are accumulating loop corrections with nonlocal logarithmic structures of the form F^ part()∼lnM^2-/M^2. Note that these partners are still in high-energy domain -≥ M^2, but they are subdominant as compared to the leading contribution (<ref>) with dimensionless form factors which incorporate the logarithmically running solutions of RG equations. This is because the partners (<ref>) are suppressed by power law factors M^4/^2. Exact form of these formfactors at intermediate scales was derived at one-loop order in <cit.> for rather generic theory of massive fields by using the heat kernel technique of <cit.>. In IR domain -≪ M^2 they are of course expandable in local gradient series reflecting the decoupling phenomenon <cit.>. Similarly, the gravitational constant partner in IR reads as M^2∫ d^4x √(g)(R_μνF_1()/R^μν+ R F_2()/R), which reminds the construction of the nonlocal action for long-distance modifications of gravity theory in <cit.>. This differs from the cosmological constant partner by another powers of M and the power of in the denominator. One should be more careful at this point – while the case of (<ref>) is well defined in asymptotically flat spacetime, the cosmological constant partner (<ref>) is IR divergent for the reasons discussed above. The action of 1^2 is not well defined in four dimensions (or, equivalently, ∫ d^4x√(g)(1)^2 is IR divergent), so that the perturbation expansion in the dimension zero sector should be critically reconsidered. To trace the origin of this difficulty note that the first three terms of the cosmological term expansion (<ref>) are divergent, whereas a similar expansion for the Einstein term becomes well defined only after the subtraction of the Gibbons-Hawking surface term ∫_∞ d^3σ^μ(∂_μ h-∂^ν h_μν) at the infinity of asymptotically flat spacetime. Due to this subtraction we can write for the integral of the invariant ^(2)_1(x)=-R(x), weighted in the Einstein action by ^(2)_1=1/16π G, a legitimate expansion (<ref>) starting with the quadratic order in h_μν, ∫ d^4x√(g)(-R)-∫_∞ d^3σ^μ(∂_μ h-∂^ν h_μν) =∑_M=2^∞∫ d^4x√(g̃) I_M^(2)(g̃,h), I_2^(2)(g̃,h)=-14 h_μνh^μν+18 hh-12(∇̃^ν h_μν-12∇̃_μh̃)^2. Then the above calculational strategy leads to the effective action (<ref>) whose tree level IR limit should match low energy physics with the Planck mass cutoff M^2 and the form factors F_1(0)=1 and F_2(0)=1/2. This tree level answer up to ^3-corrections directly corresponds to the above expression for I_2^(2)(g̃,h) with h_μν given by Eq. (<ref>) in terms of the curved space metric g_μν <cit.>. To the best of our knowledge, no such subtraction is known for cosmological term expansion (<ref>), so that its rigorous treatment is still to be done. It is interesting if new structures can be generated by the regularization of this IR behavior. Apparently, this should be based on the analogue of the Graham-Fefferman construction for asymptotically AdS spaces <cit.> and deserves further studies. In any case, the UV behavior of both cosmological and gravitational constant partners, which should be not sensitive to IR problems, is determined by curvature squared terms (<ref>) with running dimensionless “couplings”. Their formfactors F_1() and F_2() follow from the RG running of the relevant constants λ^(0)_2(μ) and λ^(2)_2(μ), but the transition λ^(0,2)_2(μ)→ F_1,2() is not straightforward and is mediated by Eqs.(<ref>) and (<ref>)-(<ref>). § CONCLUSIONS To summarize our notes on conformal anomaly, nonlocal effective action and running scales let us briefly dwell on possible applications of our results and related issues. As it is clear from the above considerations, the conformal anomaly action is a carrier of the effective rather than fundamental conformal degree of freedom. Either in the nonlocal or the Wess-Zumino form, it is the difference of action functionals of two configurations belonging to the orbit of the conformal group. So unless one of these actions is known the corresponding physical setup is not complete. In this respect, our approach is very different from the works which endow the conformal factor e^2σ the nature of the fundamental field <cit.> or, for example, ready to sacrifice the fundamental nature of the Higgs boson in favor of 36 fundamental zero dimension scalars σ for the sake of a complete eradication of Weyl anomaly and justification of the primordial cosmological perturbations spectra <cit.>. The CFT driven cosmology of Sect.<ref> seems to present such an example where the physical setup is complete within a certain approximation scheme. This approximation is associated with the dominance of conformal invariant matter fields over the loop effects of gravity and other types of matter and simultaneously puts the model in the subplanckian domain of energies below the cutoff M_P/√(β) when the coefficient of the topological conformal anomaly β≫ 1 <cit.>. To match with the widely accepted bounds on the energy scale of inflation ∼ 10^-6M_P one needs β∼ 10^13, which cannot be attained by a contribution of low spin conformal fields β=(1/360)(N_0+11N_1/2+62N_1) unless the numbers N_s of fields of spin s are tremendously high. On the contrary, this bound can be reached by appealing to the idea of conformal higher spin (CHS) fields <cit.>. A relatively low tower of higher spins will be needed, because a partial contribution of spin s to β grows as s^6. These partial contributions β_s for CHS totally symmetric tensors and Dirac spin-tensors read in terms of ν_s – their respective numbers of polarizations (negative for fermions) <cit.>, β_s=ν_s^2(3+14ν_s)/720, ν_s=s(s+1), s=1,2,3,... , β_s=ν_s(12+45ν_s+14ν_s^2)/1440, ν_s=-2(s+12)^2, s=12,32,52,... . The solution of hierarchy problem thus becomes a playground of 1/N-expansion theory for large number N of conformal species. Moreover, with the inclusion of CHS fields the status of conformal anomaly essentially changes and becomes similar to that of the chiral anomaly. Chiral anomaly has phenomenological confirmation within chiral symmetry breaking theory, it also has important implications in lepton physics, physics of early Universe, its baryon asymmetry theory, etc. It has a topological nature and is generated in virtue of Adler-Bardeen theorem only at the one-loop level. Local Weyl anomaly also has topological (a-type) contribution <cit.>, but for low spins it is contributed by all orders of loop expansion. CHS spins, however, have their inverse propagators ∼^s+... and, therefore, for high s are UV finite beyond one loop approximation. So their Weyl anomaly is also exhausted by the one-loop contribution, and there is a hope that their effect in the CFT driven cosmology is nonperturbative. As this effect intrinsically, by a dynamical mechanism of effective equations of motion <cit.>, provides the upper bound on the energy range of inflation M_P/√(β)≪ M_P, this also justifies omission of graviton loops and quantum effects of other (non-conformal) types of matter. There are, however, serious problems on the road to the realization of this model. To begin with, CHS fields in curved spacetime are not explicitly known yet, except conformal gravitino with s=3/2 and Weyl graviton with s=2. Recent progress in generalizing these models to arbitrary s on the Einstein-space background allowed one to compute their 1-loop Weyl anomaly coefficients (<ref>)-(<ref>) (by indirect AdS/CFT method in <cit.> and directly in <cit.>). This result, however, leaves the issue of unitarity violation caused by inevitable higher derivatives in wave operators of these fields. Moreover, these fields should form a hidden sector not observable at present, which implies the necessity of their eradication in the course of cosmological expansion. What might be useful for this purpose is the idea of renormalization group flow from UV to IR decreasing the value of β (the so called a-theorem of <cit.>) or Weyl symmetry breaking which would generate masses of CHS fields and thus shorten their massless tower. Finally and most importantly, the fundamental theory of these interacting CHS fields should necessarily be organized within a special higher spin symmetry <cit.>. A complete version of this theory is still missing, not to say about its constructive extension to curved spacetime. Thus, the progress here strongly depends on advancing theory of CHS fields <cit.>. The issue of RG running constants and G has, as it is shown, a rather unexpected resolution. The manifestation of this UV running actually takes place in the nonlocal formfactors of the quadratic curvature (dimension four) terms rather than in the sector of low dimension operators. This metamorphosis originates from establishing a rather nontrivial scaling operator (<ref>) embracing all powers of the curvature expansion and exploiting a conventional RG assumption that the renormalized theory does not depend on the choice of normalization (or subtraction) point. Then simple, though somewhat tedious, dimensionality considerations lead to this result. Dimension zero and dimension two cosmological and Einstein terms do not run themselves but still contribute to the running of the dimension four terms which can be considered as UV partners of and G. This metamorphosis of RG running couplings into the formfactors of the curvature squared terms sounds important, because it is the quadratic term in the effective action that mainly determines either the asymptotic freedom of the model or its cutoff beyond which effective field theory breaks down. In the IR domain these partners, due to the presence of mass scale M, also start from the quadratic order in the curvature, but they have essential nonlocality – of the type M^4∫ d^4x √(g)(1)^2 coming from cosmological constant sector <cit.> and of the form M^2∫ d^4x √(g)1 originating from the gravitational constant one. While the latter is well defined in IR limit due to the subtraction from the IR divergent bulk Einstein action of the Gibbons-Hawking surface term <cit.>, for the IR cosmological partner <cit.> the situation is trickier – in view of IR divergences it requires additional subtraction procedure. Perhaps even more radical changes will be needed to circumvent this problem like the curvature expansion on top of the homogeneous (dS or AdS) background with nonzero curvature. Of course, there can be other choices of the running scale D different from (<ref>. Nothing prevents from replacing it, say, with (∑_N (-_N)^k)^1/2k or other combinations of contracted derivatives. However, for curvature squared terms of the action all such choices (satisfying the homogeneity property with respect to derivative rescalings) lead to one and the same operator ∼(-)^1/2 because for the second order of the curvature expansion all d'Alembertians reduce to the single one, _1=_2, in view of integration by parts (<ref>). The only ambiguity is the choice of the d'Alembertian itself, but it is fixed by the requirement of general covariance. Alterations in the choice of D certainly affect higher orders in the curvature, but the curvature squared part, which is most important for UV asymptotic freedom or determination of the effective field theory cutoff, stays uniquely defined. Ambiguity in the choice of D can arise in the class of theories which have a more or less conventional RG running of the gravitational coupling G – renormalizable Hořava gravity models <cit.>. In these Lorentz symmetry violating models a possible covariant curvature expansion undergoes (3+1)-splitting – the set of basic curvatures includes the extrinsic curvature K_ij, i,j=1,2,3, of spatial slices of constant time τ. The Einstein term of general relativity is replaced by the sum of the kinetic term ∼(16π G)^-1∫ d^4x √(g)K_ij^2 and the potential term built as a polynomial in 3-dimensional curvature and its spatial derivatives. The RG running of G in the kinetic term proceeds as the insertion of the form factor G^-1(D) between two factors of K_ij(x), 1/G∫ d^4x√(g) K_ij^2 →∫ d^4x√(g) K_ij(x)1/G(D)K^ij(x). Thus no tadpole problem for the RG running of G takes place here – just like in Yang-Mills type theories this occurs without forming a total derivative structure. However the relevant scaling operator D of a unit anisotropic scaling dimension, which replaces the spacetime covariant square root of (-), turns out to be ambiguous. Point is that in Lorentz violating models the notion of physical scaling dimension is replaced by the anisotropic one which in (3+1)-dimensional Hořava gravity is -3 for the time coordinate and -1 for spatial coordinates. Correspondingly the dimension 6 wave operator of the theory is of the second order in time derivatives and of the sixth order in spatial derivatives. Therefore, D∼ (-∂_τ^2-Δ^3/M^4)^1/6 where Δ is the spatial covariant Laplacian and M is a physical mass scale parameter. This parameter may be different in various (scalar and transverse-traceless) sectors of the metric field <cit.>, and this is a source of ambiguity in the running scale of Hořava models. Modulo this problem RG running in renormalizable non-projectable Hořava gravity is well defined and in (3+1)-dimensional case has a legitimate interpretation of asymptotic freedom <cit.>. § ACKNOWLEDGEMENTS Essential part of this paper was inspired during the workshop “Quantum Effective Field Theory and Black Hole Tests of Einstein Gravity” (IFPU, Miramare, Trieste, Italy, September 12-16, 2022), and A.O.B. is very grateful to organizers and participants of this workshop. The authors deeply appreciate the efforts by J.Donoghue, M.Duff, E.Mottola and H.Osborn of critically reading our manuscript. A.O.B. is also grateful for fruitful discussions and correspondence with John Donoghue, Michael Duff, Alexander Kamenshchik, Emil Mottola, Roberto Percacci, Hugh Osborn, Ilya Shapiro, Kostas Skenderis, Arkady Tseytlin, Alex Vikman, Richard Woodard and especially to G. A. Vilkovisky for long term collaboration on covariant perturbation theory for quantum effective action. This work was supported by the Russian Science Foundation grant No 23-12-00051. unsrturl
http://arxiv.org/abs/2306.01499v1
20230602124745
Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today
[ "Zhuo Wang", "Rongzhen Li", "Bowen Dong", "Jie Wang", "Xiuxing Li", "Ning Liu", "Chenhui Mao", "Wei Zhang", "Liling Dong", "Jing Gao", "Jianyong Wang" ]
cs.CL
[ "cs.CL", "cs.LG" ]
A Feature Reuse Framework with Texture-adaptive Aggregation for Reference-based Super-Resolution Xiaoyong Mei,  Yi Yang,  Ming Li*, Member, IEEE, Changqin Huang, Member, IEEE, Kai Zhang, Member, IEEE, Pietro Lió X. Mei, Y. Yang, M. Li, and C. Huang are with the Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). K. Zhang is with the Computer Vision Lab, ETH Zurich, Switzerland (e-mail: [email protected]). P. Lió is with the Department of Computer Science and Technology, University of Cambridge, UK (e-mail: [email protected]). *Corresponding author Submitted to IEEE TIP Shell et al.: Received: date / Accepted: date ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Recent investigations show that large language models (LLMs), specifically GPT-4, not only have remarkable capabilities in common Natural Language Processing (NLP) tasks but also exhibit human-level performance on various professional and academic benchmarks. However, whether GPT-4 can be directly used in practical applications and replace traditional artificial intelligence (AI) tools in specialized domains requires further experimental validation. In this paper, we explore the potential of LLMs such as GPT-4 to outperform traditional AI tools in dementia diagnosis. Comprehensive comparisons between GPT-4 and traditional AI tools are conducted to examine their diagnostic accuracy in a clinical setting. Experimental results on two real clinical datasets show that, although LLMs like GPT-4 demonstrate potential for future advancements in dementia diagnosis, they currently do not surpass the performance of traditional AI tools. The interpretability and faithfulness of GPT-4 are also evaluated by comparison with real doctors. We discuss the limitations of GPT-4 in its current state and propose future research directions to enhance GPT-4 in dementia diagnosis. § INTRODUCTION In recent years, Large Language Models (LLMs), powered by advanced deep learning techniques and massive cross-disciplinary corpora, have significantly impacted the field of Natural Language Processing (NLP) and achieved great success in a wide range of NLP tasks <cit.>. As one of the most powerful LLMs, GPT-4 <cit.>, a transformer-based language model, has advanced the field of NLP even further. With its remarkable ability to comprehend and generate coherent and contextually relevant text, GPT-4 has become a powerful tool for various tasks, including machine translation, sentiment analysis, and question-answering systems <cit.>. Recent investigations show that, besides the above common NLP tasks, GPT-4 also exhibit human-level performance on various professional and academic benchmarks <cit.>. For example, GPT-4 achieves a score that falls in the top 10% of test takers on a simulated bar exam and exceeds the passing score on the United States Medical Licensing Examination (USMLE) by over 20 points without any specialized prompt crafting. The impressive performance of GPT-4 on various professional and academic benchmarks has prompted practitioners to explore the potential of GPT-4 (or its predecessor, e.g., GPT-3.5) in practical applications in specialized domains <cit.>, e.g., clinical medicine. However, given the complexity and specificity of such practical applications, it remains uncertain how effective GPT-4 could be in these contexts. Therefore, further experimentation is warranted to verify the capacity and potential impact of GPT-4. In this paper, we focus on the task of dementia diagnosis and seek to answer the following question: Can GPT-4 fulfil the requirements of dementia diagnosis and replace traditional AI tools in this task? Dementia, as one of the major causes of disability and dependency among the elderly population, has garnered increasing attention and concern <cit.>. With no cure for dementia currently available, early intervention is the most effective approach. However, early diagnosis and progression prediction remain challenging, with low accuracy, leading to most patients being diagnosed after having severe symptoms, i.e., in the later stage of dementia, when the best time for the interventions has already passed <cit.>. Traditional AI tools, such as supervised learning models, have been proposed to improve the performance of dementia early diagnosis and prediction <cit.>. The advantage of these models is that they can extract implicit new knowledge from the data, which may not appear in the existing literature. However, these models also have their shortcomings. Firstly, they require the collection of training datasets, which can be time-consuming and labour-intensive, with the quality of the training data having a significant impact on the final performance. Secondly, most of the effective machine learning models, e.g., deep learning and ensemble models, are black-box models <cit.>. Since we can hardly understand their decision mechanism, potential biases and risks could hide in these models. It is also challenging for these black-box models to assist doctors in diagnosis. Lastly, these conventional AI tools lack the capacity to leverage the knowledge contained in other data sources or corpora like LLMs. Unlike traditional machine learning methods that require a specific training dataset, LLMs like GPT-4 leverage their extensive knowledge acquired from massive cross-disciplinary corpora, which may enable them to obtain promising results even in zero-shot or few-shot settings <cit.>. This advantage eliminates the need to collect specialized training sets, significantly reducing the time and resources required for diagnostic model development. Furthermore, GPT-4 has the ability to provide interpretable explanations for its decisions, allowing doctors to gain insights into the underlying reasoning process <cit.>. Despite these promising aspects, there are still several questions to be answered when utilizing GPT-4 for dementia diagnosis. The first question is how to design a simple but effective prompt template for dementia diagnosis. In addition, without fine-tuning, the performance of zero-shot and few-shot learning for dementia diagnosis is unknown. In this paper, we aim to answer the above questions and explore the potential of GPT-4 in dementia diagnosis. We summarize the key contributions as follows: * We design simple but effective prompt templates of GPT-4 for dementia diagnosis. * We investigate the capabilities of GPT-4 on dementia diagnosis by comprehensively comparing GPT-4 with traditional AI tools and doctors on two real clinical datasets. * We identify the limitations and challenges faced by GPT-4 in the context of dementia diagnosis and discuss possible directions for future work. § MATERIALS AND METHODS §.§ Data origin and acquisition This study utilizes two distinct datasets. The first dataset is sourced from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), which includes ADNI 1, 2/GO, and 3 <cit.>. The ADNI is a longitudinal multicenter study designed to develop clinical, imaging, genetic, and biochemical biomarkers for the early detection and tracking of Alzheimer’s disease (AD). The primary objective of using the ADNI dataset is to distinguish between patients with mild cognitive impairment (MCI) who develop AD, i.e., MCI converters (MCI-C), and those with MCI who do not develop AD, i.e., MCI non-converters (MCI-NC). Subjects are included consecutively. After pre-processing the original data and eliminating invalid records, 606 participants remain, with 253 (41.7%) MCI-NC and 353 (58.3%) MCI-C. It is confirmed that all MCI-NC patients do not progress to AD after at least 48 months of follow-up. Each subject has 51 features, including the demographic information, the results of selected cognitive tests (e.g., MMSE score), and other biomarkers (e.g., APOE4, AV45, and pTau). Considering that the training dataset of GPT-4 may contain information from the publicly available ADNI dataset, potentially leading to information leakage issues, the second dataset used in this study is a private dataset. The second dataset was collected by the Peking Union Medical College Hospital (PUMCH) from May 2009 to April 2021 <cit.>. Inclusion criteria require subjects to have a normal MMSE score (≥ 26) and the capability to complete all required neuropsychological assessments. Subjects are included consecutively. Diagnoses are determined using clinical history, neuropsychological tests, laboratory tests, and head CT or MRI scans. A total of 375 subjects are included, among which 67 (17.9%) subjects are diagnosed with cognitively normal (CN), 174 (46.4%) are diagnosed with MCI, and 134 (35.7%) are diagnosed with dementia. CN and MCI are collectively referred to as non-dementia. We use PUMCH-B to represent the binary classification tasks (Non-Dementia vs. Dementia), and PUMCH-T to represent the ternary classification task (CN vs. MCI vs. Dementia). The demographic information and the results of selected cognitive tests in each record are converted into 64 features after data preprocessing. The demographic characteristics of the ADNI and PUMCH datasets are shown in Table <ref>. §.§ Prompting A prompt is an input query or context that guides the LLMs to generate relevant and meaningful responses. Prompts are essential for directing the LLM towards the desired information or output, leveraging the vast knowledge of LLM effectively. To obtain accurate and contextually appropriate responses from LLMs by developing and optimizing prompts, prompt engineering is needed. Considering that the demographic information, the results of selected cognitive tests, and other biomarkers are all provided for the diagnosis of dementia, we employ the prompt template shown in Figure <ref>. This prompt template converts the dementia diagnosis (prediction) into a multiple-choice question format. Therefore, we can obtain the answer by creating a completion for the provided prompt using LLMs. The content in the dashed box is optional, and it enables us to seek better performance by few-shot learning. For different datasets (tasks), we also design different templates of questions and answer choices. Figure <ref> and <ref> are the templates (i.e., {{question}} {{answer_choice}} in Figure <ref>) for the ADNI and PUMCH-T, respectively. The templates for PUMCH-B and PUMCH-T are similar. In the template, we list the feature name and the corresponding feature value one by one. Since one test may correspond to different standards, to prevent confusion, we not only have doctors standardize feature names but also provide information such as total scores. Figure <ref> and <ref> show two examples of the PUMCH-T template. Furthermore, when using OpenAI's API, we can also set some parameters to constrain the responses of GPT-4 and GPT-3.5. For instance, we can set the parameter max_tokens to 1 to make GPT-4 respond only options. Additionally, we can use the parameter logit_bias to modify the likelihood of specified tokens appearing in the completion, thus making GPT-4 respond with only options A, B, and C. We can also adjust the parameter temperature to make GPT-4 more focused and deterministic. It should be noted that each data set we used is actually a table, and each row represents one subject (instance) while each column represents one feature in those tables. Although GPT-4 supports the inputs in a table format, its capability of handling table data input is quite poor compared with handling the input using our prompt template, especially under the few-shot learning settings. Therefore, although a table format input is simpler and shorter than an input using our template, we still use the proposed template to help GPT-4 understand and obtain a better performance. Our experiments also verify that the proposed template can help GPT-4 remember the few-shot examples. §.§ Evaluation We adopt accuracy to evaluate the classification performance. For each dataset, we randomly select 90% as the training set and use the remaining 10% as the test set. We split the dataset according to the roster ID. Therefore, no patient is included in both the training and test sets, and the risk of data leakage in supervised learning is avoided <cit.>. §.§ Model Comparison The performance of GPT-4 and its predecessor model, GPT-3.5, is compared with five representative supervised machine learning models, including interpretable models and complex models (black-box models) that are hard to interpret and understand. CART <cit.> is a rule-based model that builds a decision tree. Logistic Regression (LR) <cit.> is a linear model. Rule-based Representation Learner (RRL) <cit.> uses neural networks to learn interpretable rules. These three models are considered interpretable models. Random Forest (RF) <cit.> and eXtreme Gradient Boosting (XGBoost) <cit.> are considered complex models since they are ensemble models consisting of hundreds of decision trees. RF and XGBoost are hard to interpret due to their complex inner structures. § RESULTS §.§ Classification Performance We compare the classification accuracy of GPT-4 and GPT-3.5 with five representative machine learning models. The results are shown in Table <ref>. We can observe that the supervised model RRL consistently outperforms all other models across all datasets, verifying its good capability in dementia diagnosis. Although GPT-4 shows promising results, it still has a noticeable performance gap compared to RRL, particularly on the PUMCH-T dataset. Although GPT-4 outperforms simple models like LR and DT in accuracy in some cases, it could not entirely replace them, possibly due to the limitations of zero-shot or few-shot learning. Consequently, GPT-4 is far from replacing more effective models like RRL in dementia diagnosis tasks. We can also see that GPT-4 exhibits a significant improvement over GPT-3.5, particularly on the ADNI dataset. However, we cannot definitively rule out the possibility of information leakage in GPT-4 (i.e., the whole or part of the ADNI dataset is included in GPT-4). Considering the substantial improvements observed in the private datasets (i.e., PUMCH-B and PUMCH-T), GPT-4 is indeed more suitable and powerful for dementia diagnosis than GPT-3.5. For both GPT-4 and GPT-3.5, few-shot learning settings could have better results compared to zero-shot learning settings in some cases, indicating the potential benefits of providing additional context. §.§ Case Study We show how the diagnosis generated by GPT-4 looks like by case studies. By comparing GPT-4 with professional doctors, we can not only intuitively understand the differences between them, but also qualitatively evaluate the interpretability and faithfulness of GPT-4. In addition, due to the inability of doctors to perform dementia prediction tasks, we only compared them on the PUMCH dataset. Figure <ref> shows the first example of a comparison between GPT-4 and a doctor's diagnosis. It is important to emphasize that PUMCH is a Chinese dataset, and Figure <ref> shows the content translated from the Chinese original text. The Chinese original text is shown in Figure <ref> in the appendix. The first part of Figure <ref> shows an example of input using the template shown in Figure <ref>. The blue-highlighted option is the ground truth label for this example. This part provides a detailed display of the subject's basic information and cognitive test results (i.e., features). For ease of presentation, we only show abbreviations for feature names in the English translation version, while the detailed names are used in the Chinese original version. The detailed description of each feature can be found in Table <ref> in the appendix. The second part of the figure shows the diagnostic results of GPT-4. The blue-highlighted part is GPT-4's final conclusion, and the red-highlighted part is the cognitive function that GPT-4 believes may be related to dementia and have potential issues. To make GPT-4 explain its decision, we can add a sentence like "Please provide a detailed explanation" at the end of the input. The third part of the figure shows the doctor's diagnostic results. Similarly, the blue-highlighted part is the doctor's final conclusion, and the red-highlighted part is the cognitive function that the doctor believes has issues. We can see that for the first example, both GPT-4 and the doctor diagnose the subject as having dementia, but the explanations for the diagnosis are different. First, we can see that both GPT-4 and the doctor explain their decisions using natural language. Since these sentences are easy to read and understand, the interpretability of GPT-4 is good. Second, we can observe that GPT-4 analyzes the input sequentially (as the explanation shows) and then summarizes the results, while the doctor analyzes the input according to the cognitive functions and then integrates the results. In comparison, the doctor's diagnostic approach is more in line with human understanding, and its readability and interpretability are better. In addition, both GPT-4 and the doctor point out that the subject has problems in executive function, visuospatial function, memory, and calculation. GPT-4 also emphasizes the presence of depressive emotions. This indicates the consistency between GPT-4 and the doctor is relatively high, indirectly verifying the good faithfulness of the explanation provided by GPT-4 in this case. The second example is shown in Figure <ref>, with the format of each part being the same as that of Figure <ref>. We can see that there is a disagreement between GPT-4 and the doctor in terms of diagnostic results for the subject in Figure <ref>. GPT-4 misdiagnoses the CN subject as MCI, while the doctor correctly diagnoses the subject as CN. The reason for GPT-4's diagnosis is that the subject may have some degree of anxiety and depression, and his performance in memory, abstract thinking, and calculation abilities is slightly below the normal range. Although the subject has not reached the level of dementia, GPT-4 tends to diagnose him as MCI. On the other hand, the doctor believes that all the subject's test results are normal and suggests adding MMSE and ADL. If the ADL score is less than 23, the subject is considered completely normal. Comparing GPT-4 with doctors, we find that GPT-4 has different criteria for determining whether each test is abnormal. Doctors generally use the cut-off values corresponding to each test as the basis for judgment, while GPT-4 may not be able to fully match the test with its corresponding cut-off values, resulting in different judgments on individual tests compared to the doctor. In addition, GPT-4's preference for sequential analysis leads to a less accurate assessment of the subject's overall condition compared to the doctor. Finally, we can also find that doctors can expand their professional knowledge, such as seeking the ADL result that is not in the input, while GPT-4 is more limited to the existing input. In the experiment, we also observe that GPT-4's diagnosis is greatly influenced by the input, which puts higher demands on the quality of the input. For example, if we input the wrong total score for a test, its judgment on that test will be severely affected, and such errors may not be directly detected and corrected from GPT-4's results. There are also some tests that have a maximum score, but a higher score is not necessarily better, and GPT-4 may mistakenly think the subject's cognitive function is impaired due to the low score. § DISCUSSION The present study finds that although some research claims that large language models like GPT-4 exhibit human-level performance on various professional and academic benchmarks<cit.>, they still cannot outperform traditional AI tools in dementia diagnosis and prediction tasks. This finding contradicts some current research findings<cit.>, mainly due to our use of private datasets and more challenging tasks in our study. We conduct experiments on two real clinical datasets and use the private dataset PUMCH to avoid information leakage (leakage effects)<cit.>, thereby more accurately measuring GPT-4's ability in dementia diagnosis and prediction. Although many related works have investigated the capabilities of LLMs like GPT-4 in specialized domains, most of them use public datasets <cit.>, making it difficult to avoid inflated results due to information leakage. For example, compared with other traditional AI tools, GPT-4's performance on the PUMCH-T dataset is far worse than its performance on the ADNI dataset. Moreover, such information leakage is generally not intentional but introduced during the process of collecting corpora, making it difficult to avoid. Furthermore, since we select dementia diagnosis and prediction problems in real-world settings, the tasks involve a large number of test results from different cognitive domains and require handling numerous numerical features, making the tasks themselves more challenging and better able to test the model's capabilities. Additionally, the diagnostic and prediction tasks we select differ from typical settings. For example, the task of the PUMCH dataset is to diagnose subjects with MMSE scores greater than or equal to 26, i.e., early diagnosis of dementia in a population considered cognitively normal by MMSE, which is much more difficult than general dementia diagnosis tasks. The advantages and disadvantages of GPT-4 are exposed during the dementia diagnosis and prediction tasks, we summarize them as follows: GPT-4 Advantages. Despite GPT-4 not yet surpassing traditional AI tools, it has many promising advantages. Firstly, we find that GPT-4 performs much better than expected in our experiments and could already match or outperform supervised learning models like Logistic Regression and Decision Tree in some scenarios under zero-shot or few-shot settings. This indicates that GPT-4 may be able to replace traditional machine learning models in tasks with limited training data in the future. Secondly, GPT-4 can utilize existing medical expertise for diagnosis. For example, we only need to tell GPT-4 the name of a cognitive test, and it will know which cognitive function the test corresponds to. On the contrary, traditional models can hardly obtain useful information just from the feature names. Another advantage of GPT-4 is its ability to provide explanations for its decisions, a capability that most black-box models lack. In our case study, we conducted a qualitative analysis of GPT-4's interpretability and faithfulness by comparing its diagnostic basis with that of professional doctors. We find that the explanations provided by GPT-4 are highly readable and easy to understand. Moreover, for some correctly diagnosed cases, its diagnostic basis is not significantly different from that of doctors. GPT-4 Disadvantages. GPT-4 also has some notable drawbacks. The first issue is that GPT-4 currently cannot be fine-tuned, making it difficult to fully utilize existing data in complex tasks like early dementia diagnosis, resulting in poor performance. The second issue is that GPT-4 has high requirements for input quality. In addition to designing specific prompt templates, feature names must be described and constrained appropriately. Otherwise, GPT-4 may misinterpret the input content, significantly affecting its performance. The outputs of GPT-4 are also sensitive to the prompt template in some cases, making minor modifications to the template may result in quite different results. The third issue is that GPT-4's ability to handle tabular data is still insufficient. This limitation prevents us from using table formats to save input length, thereby limiting the number of few-shot examples. The fourth issue is that although GPT-4 lists many reasons in its explanations, we cannot determine how these reasons contribute to the final diagnostic conclusion. In practice, we find that GPT-4's reasons and conclusions might be inconsistent, indicating that faithfulness cannot be guaranteed. Future Directions. Future research could focus on addressing GPT-4's limitations, such as enabling fine-tuning for complex tasks, improving input requirements, enhancing tabular data handling, and ensuring faithfulness in explanations. Additionally, exploring the integration of GPT-4 with traditional AI tools to leverage their respective strengths could be a promising direction. It is also essential to investigate GPT-4's performance in other medical domains and tasks to better understand its potential in healthcare applications. Limitations. One limitation of our work is that we only used two datasets, which may not fully represent the diversity of dementia diagnosis and prediction tasks. Moreover, our study focused on GPT-4 and GPT-3.5, and the findings may not generalize to other large language models. Further research should consider using a wider population and more diverse datasets and comparing the performance of different LLMs in similar tasks. § CONCLUSION Our study provides valuable insights into the capabilities of large language models, specifically GPT-4, in the context of dementia diagnosis and prediction. To test GPT-4 accurately and fairly, we first design simple and effective prompt templates according to the tasks. Our experimental results on two real clinical datasets indicate that, although GPT-4 has shown remarkable performance in some professional benchmarks, it does not currently outperform traditional AI tools in dementia diagnosis and prediction tasks. We also evaluate the interpretability and faithfulness of GPT-4 by comparing it with professional doctors. Based on all the experimental results, we summarize the advantages and disadvantages of GPT-4 and propose future research directions. § ACKNOWLEDGEMENTS Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. This work was supported in part by National Key Research and Development Program of China under Grant No. 2020YFA0804503, 2020YFA0804501, National Natural Science Foundation of China under Grant No. 62272264, 61521002, and Beijing Academy of Artificial Intelligence (BAAI). § ETHICS APPROVAL AND CONSENT TO PARTICIPATE All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of PUMCH (No. JS1836). ACM-Reference-Format § FEATURE DESCRIPTION Table <ref> lists all the features used in the PUMCH dataset, including their Chinese names, English names, and detailed descriptions. UTF8gbsn § CASE STUDY Figures <ref> and <ref> show the first and second examples of a comparison between GPT-4 and a doctor's diagnosis on the PUMCH dataset (Chinese original text), respectively.
http://arxiv.org/abs/2306.03429v1
20230606060649
New class of Gibbs measures for two state Hard-Core model on a Cayley tree
[ "R. M. Khakimov", "M T. Makhammadaliev", "F. H. Haydarov" ]
math.PR
[ "math.PR", "math.FA" ]
= 24truecm = 16truecm = -2truecm = -2truecm
http://arxiv.org/abs/2306.05681v1
20230609055003
Finite-temperature second-order perturbation analysis of magnetocrystalline anisotropy energy of L10-type ordered alloys
[ "Shogo Yamashita", "Akimasa Sakuma" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
[email protected] Department of Applied Physics, Tohoku University, Sendai 980-8579, Japan We present a novel finite-temperature second-order perturbation method incorporating spin-orbit coupling to investigate the temperature-dependent site-resolved contributions to the magnetocrystalline anisotropy energy (MAE), specifically K_1(T), in FePt, MnAl, and FeNi alloys. Our developed method successfully reproduces the results obtained using the force theorem from our previous work. By employing this method, we identify the key sites responsible for the distinctive behaviors of MAE in these alloys, shedding light on the inadequacy of the spin model in capturing the temperature dependence of MAE in itinerant magnets. Moreover, we explore the lattice expansion effect on the temperature dependence of on-site contributions to K_1(T) in FeNi. Our results not only provide insights into the limitations of the spin model in explaining the temperature dependence of MAE in itinerant ferromagnets but also highlight the need for further investigations. These findings contribute to a deeper understanding of the complex nature of MAE in itinerant magnetic systems. Finite-temperature second-order perturbation analysis of magnetocrystalline anisotropy energy of L1_0-type ordered alloys Akimasa Sakuma July 31, 2023 =========================================================================================================================== § INTRODUCTION The magnetocrystalline anisotropy energy (MAE) is an important characters of magnetic materials because it governs the coercivity. Rare-earth permanent magnets are examples of magnets with the high coercivity and are used for many modern applications. Recently, however, the development of rare-earth-free magnets has been accelerating to avoid the use of expensive rare-earth elements. L1_0-type transition metal alloys such as FeNi are examples of rare-earth-free high-performance permanent magnets<cit.>. Generally, coercivity has strong temperature dependence; thus, we need to understand the temperature dependence of the MAE for the development of high-performance transition-metal magnets at finite temperatures. The MAE for the uniaxial crystal E_MAE(T,θ) is usually expressed as follows: E_MAE(T,θ)=K_1(T) sin^2θ +K_2(T) sin^4θ+⋯, where K_1(T) and K_2(T) are the anisotropy constants. However, the theoretical description of the temperature dependence of the MAE remains controversial. In localized electron systems, for instance, 4f electron systems such as permanent magnets, theories based on the localized spin model combined with the crystal field theory have been well-established <cit.>. The Callen–Callen power law<cit.> is a line of these theories that can describe the temperature dependence of K_1(T) and K_2(T). In contrast, the cases of transition-metal magnets, which are treated as itinerant electron systems, are debatable. At 0 K, the mechanism of the MAE in the itinerant electrons systems, particularly K_1(0), can be explained by the second-order perturbation formula in terms of the spin -orbit coupling (SOC) according to the tight-binding model<cit.>. However, finite-temperature expressions for the MAE based on the band theory are not available yet. This is because the band theory is based on the mean field theory and cannot describe the spin-transverse fluctuations directly. One of the ways to describe the spin fluctuation based on the itinerant electron theory is the functional integral method<cit.>. This method is usually combined with coherent potential approximation (CPA). In this approach, the spin fluctuation can be expressed by random spin states with respect to its direction, which are called disordered local moment (DLM) states. First-principles calculations based on this scheme have been performed by several authors<cit.> to investigate the finite-temperature magnetic properties of magnetic materials as pioneering works. Subsequently, the temperature dependences of the MAE, transport properties, and Gilbert damping constants in the itinerant electron systems were investigated via the DLM-CPA method and the density functional theory<cit.>, along with model calculations<cit.>. In particular, the temperature dependence of the MAE for L1_0-type alloys has been calculated by several authors<cit.>. Recently, we calculated the temperature dependence of the MAE for L1_0-type FePt, MnAl, and FeNi using the DLM-CPA method<cit.>. The calculation results for FePt and MnAl indicated that the MAE decreases with an increase in the temperature. However, the calculated MAE for FeNi exhibited a unique behavior. It did not decrease monotonically with an increase in the temperature; rather, it exhibited plateau-like behavior in the low-temperature region. This behavior is similar to that of Y_2Fe_14B<cit.>, for which the mechanism of the temperature dependence of the MAE has been controversial. In the present study, to analyze these behaviors, we decompose the MAE of these alloys at finite temperatures into onsite and pair contributions using the second-order perturbation (SOP) method in terms of the SOC. In this method, we can extend the formula to describe K_1(0) in the tight-binding model<cit.> to the finite-temperature expression K_1(T). § CALCULATION DETAILS To develop the finite-temperature SOP formula, we use the tight-binding linearized muffin-tin orbital (TB-LMTO) method<cit.> with the atomic sphere approximations combined with the DLM-CPA method. First, in the DLM-CPA method, we need to calculate a distribution function ω({e},T) representing the probability with which the spin vectors are directed to {e} at temperature T. In this work, we adopt the single-site approximation; thus, ω({e},T) is decoupled into the simple product of the probability at each site ω_i(e_i,T) as follows: ω({e},T)=∏_iω_i(e_i,T). In a previous work, ω_i(e_i,T) was evaluated using the analogy of the Weiss field<cit.>. However, in this work, we determine ω_i(e_i,T) by evaluating the effective grand potential Ω_eff({e},T) of electrons. Here, we explain the calculation of Ω_eff({e},T) and ω_i(e_i,T). First, we introduce the Green function including the spin-transverse fluctuation at finite temperatures in the TB-LMTO method G(z, { e})<cit.> as follows: G_ij(z;{ e})=λ^β_i(z; e_i) δ_ij+ μ^β_i (z; e_i) g^β_ij(z;{ e}) μ̅^β_j (z; e_j), where z and g^β_ij(z;{ e}) are E+iδ and an auxiliary green function including spin-fluctuation, respectively. λ^β_i(z; e_i) and μ^β_i (z; e_i ) are given as follows: λ^β_i(z; e_i )=(Δ_i(e_i))^-1/2(1+(γ_i( e_i)-β)P_i^γ(z;e_i))(Δ_i(e_i))^-1/2, μ^β_i(z; e_i )=(Δ_i(e_i))^-1/2(P_i^γ(z;e_i))^-1P^β_i(z; e_i), μ̅^β_i(z; e_i )=P^β_i(z; e_i)(P_i^γ(z;e_i))^-1(Δ_i(e_i))^-1/2, where Δ_i^-1/2(e_i)=U^†(e_i)( Δ_i)^-1/2U(e_i), (P_i^γ(z;e_i))^-1=U^†(e_i)(P_i^γ(z))^-1U(e_i), γ_i(e_i)=U^†(e_i)( γ_i) U(e_i), P_i^β(z;e_i)=U^†( e_i)P^β_i(z) U( e_i), P^β_i(z)=P^γ_i(z){1-[β-γ_i]P^γ_i(z)}^-1, P^γ_i(z)=(Δ_i)^-1/2[z-C_i](Δ_i)^-1/2. Here, γ_i, Δ_i, and C_i are called potential parameters in the TB-LMTO method. The β values are summarized in several papers<cit.>. In this work, we neglect the SOC to calculate the Ω_eff({e},T) and ω_i(e_i,T). The effective grand potential of electronic part is expressed as Ω_eff({e},T)∼1/π∫dϵ f(ϵ,T,μ) ∫_-∞^ϵd E ImTr G (z;{ e}) =-1/π∫dϵ f(ϵ,T,μ)Im[Tr log λ^β(ϵ^+;{ e}) + Tr log g^β(ϵ^+;{ e}) ], where f and μ represent the Fermi-Dirac function and the chemical potential, respectively. The trace is taken over with respect to sites i, orbitals L, and spin indices σ. From here, we expand g^β(ϵ^+;{ e}) with the auxiliary coherent Green function g̅^β(z), which is defined as follows: g̅^β(z)=(P̅(z)-S^β)^-1, where S^β is given as S^β=S(1-β S)^-1. S is a bare structure constant matrix<cit.>. The auxiliary Green function g^β(ϵ^+;{ e}) is expanded as follows: g^β(z;{ e})=g̅^β(z) (1+Δ P (z;{ e})g̅^β (z))^-1, where Δ P(z;{ e})=P^β(z; { e})-P̅(z), and P̅ is a coherent potential function. We also need to obtain P̅ in a self-consistent manner (explained later). Using Eq. (<ref>), Eq.(<ref>) can be rewritten as follows: Ω_eff({e},T)=-1/π∫dϵ f(ϵ,T,μ) Im [Tr log λ^β(ϵ^+;{ e}) +Tr log g̅^β(z)-Tr log(1+Δ P(z;{ e})g̅^β(z))]. By taking the trace with respect to site i, the grand potential can be expressed as follows<cit.>: Ω_eff({e},T)=Ω_0+∑_iΔΩ_i ( e_i,T), ΔΩ_i (e_i,T)=1/πIm∫d E f(E,T,μ) Tr_Lσ log(1+Δ P_i(z; e_i)g̅^β_ii(z)). Here, we used the fact that { e } dependence of λ^β(ϵ^+;{ e}) vanishes in our case. Therefore, ω_i(e_i,T) can be expressed as follows: ω_i(e_i,T) =exp(-ΔΩ_i(e_i,T)/k_BT )/ ∫d e_iexp(-ΔΩ_i (e'_i,T)/k_BT), where k_B is the Boltzmann constant. Finally, we need to determine the converged ω_i(e_i,T) and P̅(z) self-consistently. The CPA condition to determine P̅(z) is given as: ∫d e_i ω_i(e_i,T) Δ P_i(z; e_i) [1+ Δ P_i(z; e_i)g̅^β_ii(z)]^-1 =0. We use Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>) to obtain P̅_i and the converged ω_i(e_i,T) in a self-consistent manner. Once we obtain the converged ω_i(e_i,T), we can calculate the SOP formula at finite temperatures as follows: δ E^2nd (T, n) = -1/2π∑_ijImTr_Lσ∫_-∞^∞d E f(E,μ,T) ×⟨ G_ij(z;{ e}, n) H^soc_jG_ji(z;{ e}, n)H^soc_i⟩_{ω_i(e_i,T)}, where H^soc and n represent the spin -orbit Hamiltonian and the magnetization direction, respectively. Similar expressions were used in several works<cit.>. ⟨⋯⟩ denotes the average over { e} with a weight of ω_i(e_i,T), which is given as follows: ⟨⋯⟩_{ω_i(e_i,T)}= ∏_i∫d e_i ω_i(e_i,T) (⋯). The rotation of the magnetization direction is expressed with the SO(3) rotation matrices R( n) as follows<cit.>: S^l_1l_2_m_1m_2( n)=∑_m_3m_4 R^l_1*_m_3m_1( n) S^l_1l_2_m_3m_4 R^l_2_m_4m_2( n), where * denotes the complex conjugate. We substitute Eq. (<ref>) into Eq. (<ref>) to express the rotation of the direction of magnetization. We can decompose Eq. (<ref>) into onsite E_ii^2nd and pair E_ij^2nd contributions as follows: E^2nd_ii(T, n) =-1/2πImTr_Lσ∫d e_i ω_i( e_i,T) ∫_-∞^∞d E f(E,T,μ) ×{ H^soc_i (λ^β_i(z; e_i )+ μ^β_i (z; e_i) g̅^β_ii(z, n) ξ_i(z; e_i, n)μ̅^β_i (z; e_i )) × H^soc_i(λ^β_i(z; e_i)+ μ^β_i (z; e_i) g̅^β_ii(z, n) ξ_i(z; e_i, n) μ̅^β_i (z; e_i)) }, E^2nd_ij(T, n) =-1/2π∑_kImTr_Lσ∫d e_i ω_i( e_i,T) ∫d e'_jω_j( e'_j,T) ×∫_-∞^∞d E f(E,T,μ) {H̃^soc_i χ_ik (1-Γχ)^-1_kjH̃^soc_j}. Here, we introduce ξ_i(z; e_i, n), ξ̃_i(z; e_i, n), H̃_i^soc, Γ, and χ, which are given as follows: ξ_i(z; e_i, n)=[1+Δ P_i(z; e_i)g̅^β_ii (z, n)]^-1, ξ̃_i(z; e_i, n)=[1+g̅^β_ii (z, n)Δ P_i(z; e_i)]^-1. H̃^soc_i=ξ_i(z; e_i, n) μ_i^β (z; e_i) H^soc_iμ̅_i^β (z; e_i) ξ̃_i (z; e_i, n), Γ_i(z,T, n) =∫d e_i ω_i( e_i,T) [Δ P^β_i(z; e_i)ξ̃_i (z; e_i, n)] [Δ P^β_i(z; e_i)ξ̃_i(z; e_i, n)], χ_ij(z, n)=g̅^β_ij(z, n)g̅^β_ji(z, n) (1-δ_ij). For evaluating pair contributions, we expand the Green function including the spin-fluctuation with the T-matrix to include the vertex correction terms. Details regarding the derivation of the vertex correction terms are provided in several papers<cit.>. In practical calculations, we neglect the Fermi–Dirac distribution function in Eqs. (<ref>) and (<ref>). This does not cause serious numerical errors. In the present study, the K_1(T) part of the MAE at finite temperatures is defined as follows: ϵ_MAE(T) =∑_ij( E^2nd_ij(T,θ=π/2)-E^2nd_ij(T,θ=0) )∼ K_1(T). Using Eqs. (<ref>) and (<ref>), we can investigate the site-resolved contributions in K_1(T) and its temperature dependences. For calculation details, the lattice constants of each alloy are set to the same values used in a previous work<cit.>. The number of k-points for each calculation is also the same as that in the previous work. § RESULTS AND DISCUSSIONS To examine the accuracy of the developed method, let us first investigate how the SOP method can reproduce the MAE obtained via the force theorem (FT) in our previous work<cit.>. Figure <ref> shows the MAE calculated using the FT and the SOP method for FePt, MnAl, and FeNi. Small differences between the results of the two methods are observed for FePt and FeNi, whereas little difference is observed for in MnAl. For MnAl, the good agreement is reasonable, considering that the SOC of this system is far weaker than those of FePt and FeNi. From this view-point, the origin of the larger difference for FeNi compared with FePt is not simple, because the SOC in FeNi is weaker than that in FePt. This may suggest the peculiar K_1(T) behavior of FeNi, which will be discussed later. In total summary, the qualitative behaviors of the temperature dependence of the MAE in the previous work can well be reproduced by the SOP method focusing on K_1(T). The difference between the FT and the SOP method may arise from the higher-order perturbation term, which contributes to K_2(T). The total onsite and pair contributions to K_1(T) are also shown in Fig. <ref>. For FePt, onsite and pair contribution have opposite signs for the whole temperature region. The onsite contribution is suppressed by the pair contribution, which leads to uniaxial anisotropy of K_1(T). For MnAl, K_1(T) is mostly dominated by the onsite contribution. The pair contribution makes a small correction to the K_1(T). In this case, both contributions have a positive sign. For FeNi, the onsite and pair contributions have similar amplitudes, whereas the signs are opposite, as in the case of FePt. In addition, the peculiar behavior that K_1(T) exhibits a plateau in the low-temperature region is found to be due to the cancellation of the variations of the total onsite and total pair terms with the temperature change. We stress here that even though the crystal structures are the same for these alloys, the alloys differ with regard to the breakdown of these SOP results into onsite and pair contributions. In particular for FePt and MnAl, despite the similar behavior of the temperature dependence of the total K_1(T), the onsite and pair contributions differ significantly. Furthermore, comparing FePt and FeNi reveals that the temperature dependences of onsite and pair contributions are differ significantly between these two alloys. These characteristics of the K_1(T) of each alloy can be recognized via the present SOP theory at finite temperatures, which we believe is one of the advantage of this approach. Here, to investigate the characteristics of the temperature dependence of K_1(T) in the itinerant electron magnets, we compare those results with the results expected from the spin model. The Hamiltonian is given by the XXZ model<cit.> as follows: H=-∑_( i,j ) 2 J_ijS⃗_i·S⃗_j-∑_( i,j ) D_ijS^z_i S^z_j-∑_i D_i(S^z_i )^2, where S⃗_i, J_ij, D_i, and D_ij are a classical spin vector, an exchange coupling constant, a single-site anisotropy coefficient, and a two-site anisotropy coefficient, respectively. As presented in our previous work<cit.>, analysis with this model requires D_i>0 and ∑_jD_ij>0 for FePt and MnAl and D_i<0 and ∑_jD_ij>0 for FeNi to reproduce the temperature dependence of the total MAE. However, if one naively assumes that the D_i and D_ij terms correspond to onsite and pair contributions, respectively, at first glance, the signs of the terms in the XXZ model used in the previous work are not consistent with the results in Fig. <ref>, except for the case of MnAl. This mismatch originates from the fact that the temperature dependences of the MAE from the D_i and D_ij terms in the XXZ model behave as approximately ∝ M^3(T)<cit.> and ∝ M^2(T)<cit.>, respectively, regardless of the signs and amplitudes of the parameters D_i and D_ij, whereas those of the onsite and pair terms in the SOP method do not necessarily follow such simple rules but exhibit various behaviors depending on the system. For this reason, to reproduce the peculiar behavior of the total MAE of FeNi, the XXZ model has no other choice than to set D_i<0 and ∑_j D_ij>0; however, the onsite and pair contributions can produce such behavior with opposite signs from the XXZ model. Thus, the results in Fig. <ref> imply that the spin model is too simple and is insufficient to express the temperature dependence of the MAE of itinerant magnets. To examine each contribution in detail, we decompose the SOP results into each onsite and pair contribution. The breakdown of these contributions is shown in Fig. <ref> for all the alloys. For FePt, the results are shown in Fig. <ref> (a). The total onsite contribution in FePt is mostly from of Pt, and the Fe contribution is far smaller. In addition, it is found that the negative contribution of the total pair term in Fig. <ref> (a) mainly comes from the Fe-Pt pair, and it is suppressed by the positive contribution from Pt-Pt pairs. This leads to a negative total pair contribution. In this alloy, the onsite and pair contributions related to Pt significantly affect the temperature dependence of K_1(T). The results for MnAl in Fig. <ref> (b) indicate that all the contributions are positive and that the situation regarding the total onsite term is similar to that for FePt. It is mostly dominated by the Mn onsite contribution. The second-largest contribution is the Mn-Mn positive pair contribution, and the other contributions are negligible. Thus, the onsite and pair contributions of Mn determine the temperature dependence of K_1(T) in MnAl. For FeNi, as shown in fig. <ref> (c), the onsite contributions from Fe and Ni are positive and have similar amplitudes, leading to a total positive onsite contribution. We also find that the negative contribution of the total pair term mostly comes from a pair of different atoms, i.e., Fe-Ni. While the Fe-Ni contribution is suppressed by other positive pair contributions, it finally leads to negative finite total pair contributions. From these results, although most of the pair contributions are suppressed by other pairs and the net contribution becomes small, the pair contributions play important roles over the whole temperature region, particularly for FePt and FeNi. The importance of the pair contributions was also investigated by Ke<cit.> with the SOP method at 0 K. In this work, we confirmed that the pair contributions to K_1(T) significantly affect the temperature dependence of K_1(T) at not only 0 K but also finite temperatures. Finally, we virtually expand the lattice of FeNi to investigate the influence of electron itineracy on the onsite contribution K^on_i(T). Here, we fit K^on_i(T) by assuming the relation K^on_i(T)∝ M_i(T)^n and investigate the temperature dependence of the exponent n at each site. As mentioned previously, the onsite contributions K^on_i(T) should follow the relation K^on_i(T)∝ M_i(T)^3 if the localized spin model is suitable to explain the temperature dependence of K^on_i(T). We briefly examine the validity of the single-site anisotropy term, which is the simplest term, for the temperature dependence of the MAE in the itinerant magnets. In Fig. <ref>, the temperature dependences of n of the onsite contributions for FeNi with various volumes are shown. If we expand the lattice, the n value of K^on_Fe(T) ∝ M_Fe(T)^n increases, and it reaches 3 in the low-temperature region. However, the n value of K^on_Ni(T) ∝ M_Ni(T)^n is always far from 3 and is not changed drastically. If we use a spin model with the single-site anisotropy term to explain the temperature dependence of the onsite contributions, the n value must be fixed to 3 in the low-temperature region regardless of the sign and amplitude of D_i. In addition, if the lattice is expanded, the D_i and D_ij values are expected to change, and these temperature dependences are not changed in the localized spin model. However, our results imply that not only changing the value of D_i but also changing the temperature dependence of the onsite term itself with expanding the lattice. Therefore, we can again conclude that if we assume Eq. (<ref>) to explain the temperature dependence of the MAE, even the single-site anisotropy term in Eq. (<ref>) may not always be sufficient to describe the temperature dependence of K^on_i(T) in the itinerant ferromagnets. § SUMMARY In summary, we developed a finite-temperature SOP method to describe the temperature dependence of K_1(T) and applied it to L1_0-FePt, MnAl, and FeNi. We confirmed that the developed method can reproduce the results of a previous work<cit.>. We also investigated the onsite and pair contributions to K_1(T) with the developed method. We showed that not only the onsite contributions but also the pair-contributions significantly affect the temperature dependence of K_1(T). In particular, the unique behavior of K_1(T) for FeNi is attributed to the competition of onsite and pair-site contributions. In addition, for some results, it is found that the signs of onsite and pair contributions do not agree with the conditions used in the previous work<cit.>. Finally, we investigated the effect of electron itineracy for the temperature dependence of the K^on_i(T) of FeNi while expanding the lattice parameters. We found that the exponent n of the onsite contributions of both atoms depends on the volume and is not fixed to 3, which is expected from a spin model. From the above, our results imply that the XXZ model, even the single-site anisotropy term in this model, is insufficient for the itinerant ferromagnets. S.Y. acknowledges Dr. Ryoya Hiramatsu, Dr. Yusuke Masaki, and Prof. Dr. Hiroaki Matsueda of Tohoku University for fruitful discussions and support from GP-Spin at Tohoku University, Japan. S.Y. also appreciates Dr. Juba Bouaziz and Prof. Dr. Stefan Blügel of Forschungszentrum Jülich for constructive comments on this work. This work was supported by JSPS KAKENHI Grant Number JP19H05612 in Japan. 9 SRFeNi S. Goto, H. Kura, E. Watanabe, Y. Hayashi, H. Yanagihara, Y. Shimada, M. Mizuguchi, K. Takanashi, and E. Kita, Sci. Rep. 7, 13216 (2017). Herbst J. F. Herbst, Rev. Mod. Phys. 63, 819 (1991). Yamada M. Yamada, H. Kato, H. Yamamoto, and Y. Nakagawa, Phys. Rev. B 38, 620 (1988). Sasaki2015 R. Sasaki, D. Miura, and A. Sakuma, Appl. Phys. Express 8, 043004 (2015). Yoshioka2018 T. Yoshioka and H. Tsuchiura, Appl. Phys. Lett. 112, 162405 (2018). Yoshioka2020 T. Yoshioka, H. Tsuchiura, and P. Novák, Phys. Rev. B 102, 184410 (2020). Yamashita2020 S. Yamashita, D. Suzuki, T. Yoshioka, H. Tsuchiura, and P. Novák, Phys. Rev. B 102, 214439 (2020). Yoshioka2022 T. Yoshioka, H. Tsuchiura, and P. Novák, Phys. Rev. B 105, 014402 (2022). Akulov N. Akulov, Z. Phys. 100, 197 (1936). Zener C. Zener, Phys. Rev. 96, 1335 (1954). Callen1 E. R. Callen and H. B. Callen, Phys. Rev. 129, 578 (1963). Callen2 E. R. Callen and H. B. Callen, J. Phys. Chem. Solids 27, 1271 (1966). Bruno P. Bruno, Phys. Rev. B 39, 865 (1989). Laan G. van der Laan, J. Phys.: Condens. Matter 10, 3239 (1998). Kota1 Y. Kota and A. Sakuma, J. Phys. Soc. Jpn. 83, 034715 (2014). Solovyev I. V. Solovyev, P. H. Dederichs, and I. Mertig, Phys. Rev. B 52, 13419 (1995). Ke1 L. Ke and M. van Schilfgaarde, Phys. Rev. B 92, 014423 (2015). Ke2 L. Ke, Phys, Rev. B. 99, 054418 (2019). Cyrot M. Cyrot, Phys. Rev. Lett. 25, 871 (1970). Hubbard1 J. Hubbard, Phys. Rev. B 19, 2626 (1979). Hubbard2 J. Hubbard, Phys. Rev. B 20, 4584 (1979). Hasegawa1 H. Hasegawa, J. Phys. Soc. Jpn. 46, 1504 (1979). Hasegawa2 H. Hasegawa, J. Phys. Soc. Jpn. 49, 178 (1980). Hasegawa3 H. Hasegawa, J. Phys. Soc. Jpn. 49, 963 (1980). Oguchi T. Oguchi, K. Terakura, and N. Hamada, J. Phys. F 13, 145 (1983). Pindor A. J. Pindor, J. Staunton, G. M. Stocks, and H. Winter, J. Phys. F 13, 979 (1983). Staunton3 J. Staunton, B. L. Gyorffy, A. J. Pindor, G. M. Stocks, and H. Winter, J. Magn. Magn. Mater. 45, 15 (1984). Gyorrfy B. L. Gyorffy, A. J. Pindor, J. Staunton, G. M. Stocks, and H. Winter, J. Phys. F 15, 1337 (1985). StauntonPRL1992 J. B. Staunton and B. L. Gyorffy, Phys. Rev. Lett. 69, 371 (1992). StauntonPRL J. B. Staunton, S. Ostanin, S. S. A. Razee, B. L. Gyorffy, L. Szunyogh, B. Ginatempo, and E. Bruno, Phys. Rev. Lett. 93, 257204 (2004). StauntonPRB J. B. Staunton, L. Szunyogh, A. Buruzs, B. L. Gyorffy, S. Ostanin, and L. Udvardi, Phys. Rev. B 74, 144411 (2006). Deak1 A. Deák, E. Simon, L. Balogh, L. Szunyogh, M. Dos Santos Dias, and J. B. Staunton, Phys. Rev. B 89, 224401 (2014). Matsumoto M. Matsumoto, R. Banerjee, and J. B. Staunton, Phys. Rev. B 90, 054421 (2014). Juba1 J. Bouaziz, C. E. Patrick, and J. B. Staunton, Phys. Rev. B 107, L020401 (2023). Yamashita2022 S. Yamashita and A. Sakuma, J. Phys. Soc. Jpn. 91, 0934703 (2022). Hiramatsu1 R. Hiramatsu, D. Miura, and A. Sakuma, Appl. Phys. Express 15, 013003 (2021). Sakuma2022 A. Sakuma and D. Miura, J. Phys. Soc. Jpn. 91, 084701 (2022). Hiramatsu2023 R. Hiramatsu, D. Miura, and A. Sakuma, J. Phys. Soc. Jpn. 92, 044704 (2023). Sakuma2018 A. Sakuma, J. Phys. Soc. Jpn. 87, 034705 (2018). Miura2021 D. Miura and A. Sakuma, J. Phys. Soc. Jpn. 90, 113601 (2021). Miura2022 D. Miura and A. Sakuma, J. Phys. Soc. Jpn. 91, 023706 (2022). Sagawa M. Sagawa, S. Fujimura, H. Yamamoto, Y. Matsuura, and S. Hirosawa, J. Appl. Phys. 57, 4094 (1985). Grossinger R. Grossinger, X. Sun, R. Eibler, K. Buschow, and H. Kirchmayr, J. Magn. Magn. Mater. 58, 55 (1986). Andersen O. K. Andersen, Z. Pawlowska, and O. Jepsen, Phys. Rev. B, 34, 5253 (1986). Skriver H. L. Skriver, The LMTO Method (Springer, Berlin, 1984). Turek I. Turek, V. Drchal, J. Kudrnovský, M. Sǒb, and P. Weinberger, Electronic Structure of Disordered Alloys, Surface and Interfaces (Kluwer Academic, Dordrecht, 1997). Kudrnovsky J. Kudrnovský and V. Drchal, Phys. Rev. B, 41, 7515 (1990). Sakuma2000 A. Sakuma, J. Phys. Soc. Jpn. 69, 3072 (2000). Kobayashi2 N. Kobayashi, Y. Kota, and A. Sakuma, J. Phys. Soc. Jpn. 87, 094714 (2018). Ebert H. Ebert, Phys. Rev. B 38, 9390 (1988). Sakuma1 A. Sakuma, J. Phys. Soc. Jpn. 63, 1422 (1994). Butler W. H. Butler, Phys. Rev. B. 31, 3260, (1985). Carva K. Carve, I. Turek, J. Kudrnovský, and O. Bengone, Phys. Rev. B 73, 144421 (2006). Mryasov O. N. Mryasov, U. Nowak, K. Y. Guslienko, and R. W. Chantrell, Europhys. Lett. 69, 805 (2005). Evans R. F. L. Evans, L. Rózsa, S. Jenkins, and U. Atxitia, Phys. Rev. B 102, 020412(R) (2020). Cuadrado R. Cuadrado, R. F. L. Evans, T. Shoji, M. Yano, M. Ito, G. Hrkac, T. Schrefl, and R. W. Chantrell, J. Appl. Phys. 130, 023901 (2021).
http://arxiv.org/abs/2306.02769v1
20230605105327
On simple expectations and observations of intelligent agents: A complexity study
[ "Sourav Chakraborty", "Avijeet Ghosh", "Sujata Ghosh", "François Schwarzentruber" ]
cs.LO
[ "cs.LO", "cs.AI", "cs.CC" ]
KR2023 Instructions for Authors =8.5in =11in positioning same exampleExample theoremTheorem /TemplateVersion (KR.2022.0, KR.2023.0) proposition[theorem]Proposition corollary[theorem]Corollary remark[theorem]Remark lemma[theorem]Lemma claim[theorem]Claim defi[theorem]Definition question[theorem]Question conjecture[theorem]Conjecture observation[theorem]Observation note[theorem]Note notation[theorem]Notation breakablealgorithm algorithm height.8pt depth0pt 2pt 2pt for each S[FOR]ForEach[1] #1 On simple expectations and observations of intelligent agents: A complexity study Sourav Chakraborty Indian Statistical Institute, Kolkata, India [email protected] Avijeet Ghosh Indian Statistical Institute, Kolkata, India [email protected] Sujata Ghosh Indian Statistical Institute, Chennai, India [email protected] François Schwarzentruber Unive Rennes, IRISA, France [email protected] Received; accepted ======================================================================================================================================================================================================================================================================================================================================================================== Public observation logic (POL) reasons about agent expectations and agent observations in various real world situations. The expectations of agents take shape based on certain protocols about the world around and they remove those possible scenarios where their expectations and observations do not match. This in turn influences the epistemic reasoning of these agents. In this work, we study the computational complexity of the satisfaction problems of various fragments of POL. In the process, we also highlight the inevitable link that these fragments have with the well-studied Public announcement logic. § INTRODUCTION Reasoning about knowledge among multiple agents plays an important role in studying real-world problems in a distributed setting, e.g., in communicating processes, protocols, strategies and games. Multi-agent epistemic logic () <cit.> and its dynamic extensions, popularly known as dynamic epistemic logics () <cit.> are well-known logical systems to specify and reason about such dynamic interactions of knowledge. Traditionally, agents' knowledge is about facts and / mostly deals with this phenomenon of `knowing that'. More recently, the notions of `knowing whether', `knowing why' and `knowing how' have also been investigated from a formal viewpoint <cit.>. These agents also have expectations about the world around them, and they reason based on what they observe around them, and such observations may or may not match the expectations they have about their surroundings. Following <cit.>, such perspectives on agent reasoning were taken up by <cit.> and studied formally in the form of Public observation logic (). We present below a situation that is adept at modelling. The example is in the lines of the one considered in <cit.>: Let us consider a robotic vacuum cleaner () that is moving on a floor represented as a 7× 7 grid (see Figure <ref>). On the top right of the floor, there is a debris-disposal area, and on the bottom left, there is a power source to recharge. Two children Alice and Bob are awed by this new robotic cleaner. They are watching it move and trying to guess which direction it is moving. The system is adaptive, thus the global behaviour is not hard-coded but learned. We suppose that moves on a grid and the children may observe one of the four directions: right (), left (), up() or down(), and of course, combinations of them. Note that, for example, observing means that the bot moves one step left. Let Alice be aware of a glitch in the bot. Then her expectations regarding the 's movements include the following possibilities: * The bot may go up or right for debris-disposal, but may make an erroneous move, that is, a down or a left move. * The bot may go towards power source without error. The only difference between Bob's expectation and that of Alice is that Bob does not consider the bot to make an error while moving towards debris-disposal since he is unaware of the glitch. Suppose the is indeed moving towards power from the center of the grid. Hence if the bot makes one left move, , Bob would know that the bot is moving towards power whereas Alice would still consider moving towards debris-disposal a possibility. The example concerns certain rules that we follow in our daily life, they deal with situations where agents expect certain observations at certain states based on some pre-defined protocols, viz. the bot mechanism in the example given above. They get to know about the actual situation by observing certain actions which agree with their expectations corresponding to that situation. does not deal with the protocols themselves, but the effect those protocols have in our understanding of the world around us in terms of our expectations and observations. In <cit.> we have investigated the computational complexity of the model-checking problem of different fragments of , and in this paper, we will deal with the computational complexity of the satisfaction problem of various proper fragments of (cf. Figure <ref>). We will show how certain simple fragments of give rise to high complexity with respect to their computational behaviour. To prove the complexity results of some fragment(s) of we use a translation to Public announcement logic () <cit.>, whereas, for other fragment(s), a tableau method is utilized where the tableau rules provide a mix of modal logic reasoning and computations of language theory residuals. Outline. In Section <ref>, we recall the relevant definitions of . In Section <ref>, we describe an application of the satisfiability problem of . In Section <ref> we present a algorithm for using the tableau method. In Section <ref>, we prove that is in -Hard. In section <ref>, we present the complexity results for various fragments of . Section <ref> discusses related work, and Section <ref> concludes the paper. § BACKGROUND In this section, we provide a brief overview of a fragment of public observation logic () <cit.>, which we term as ^-. §.§ A fragment of POLmIn Let be a finite set of agents, be a countable set of propositions describing the facts about the state and be a finite set of actions. An observation is a finite string of actions. In the vacuum bot example, an observation may be and similar others. An agent may expect different potential observations to happen at a given state, but to model human/agent expectations, such expectations are described in a finitary way by introducing the observation expressions (as star-free regular expressions over ): [Observation expressions] Given a finite set of action symbols , the language ℒ_ obs of observation expressions is defined by the following BNF: [ π ∅|| a |π·π|π + π; ] where ∅ denotes the empty set of observations, the constant  represents the empty string, and a∈. In the bot example, the observation expression (· + ·) models the expectation of the bot's movement in either way, towards the power source or the debris-disposal area, whereas ()^3·()^3 models the expectation of moving towards the power source. The size of an observation expression π is denoted by |π|. The semantics for the observation expressions are given by sets of observations (strings over ), similar to those for regular expressions. Given an observation expression π, its set of observations is denoted by (π). For example, ()={}, and (· + ·) = {, }. The (star-free) regular language π w is the set of words given by {v ∈^* | wv∈(π)}. The language (π) is the set of prefixes of words in (π), that is, w∈(π) iff ∃ v∈^* such that wv∈(π) (namely, (π w)≠∅). (·) = (· + ·) =, and (· + ·) = {, , , , }. We now present a modified version of epistemic expectation models from <cit.> that capture the expected observations of agents. They can be seen as epistemic models together with, for each state, a set of potential or expected observations. Recall that an epistemic model is a tuple ⟨ S, ∼ ,V⟩ where S is a non-empty set of states, ∼ assigns to each agent in an equivalence relation ∼_i ⊆ S × S, and V : S → 2^ is a valuation function. [Epistemic expectation model with finite observations] An epistemic expectation model with finite observations is a quadruple ⟨ S, ∼ ,V, ⟩, where ⟨ S, ∼ ,V⟩ is an epistemic model (the epistemic skeleton of ) and : S _ obs is an expected observation function assigning to each state an observation expression π such that (π)≠∅ (finite non-empty set of finite sequences of observations). A pointed epistemic expectation model with finite observations is a pair (, s) where = ⟨ S, ∼ ,V, ⟩ is an epistemic expectation model with finite observations and s ∈ S. In what follows we will use the `epistemic expectation model' to denote the `epistemic expectation model with finite observations'. Intuitively, assigns to each state a set of potential or expected observations. We now provide the model definition of the example mentioned in the introduction (cf. Figure <ref>) For the sake of brevity, we do not draw the reflexive arrows. If the moves one step left, , then while Alice still considers moving to the debris-disposal area a possibility, Bob does not consider that possibility at all, as described by Example <ref>, and depicted by the edge in Figure <ref> between the states u and t, annotated by Alice and not Bob. world = [draw] The logic was introduced to reason about agent knowledge via the matching of observations and expectations, and as we mentioned earlier, the difference between and ^- is just a technical one. The main idea expressed in these logics is the following: While observing an action, people would tend to delete some impossible scenarios where they would not expect that observation to happen. For this purpose, the update of epistemic expectation models with respect to some observation w∈^* is provided below. [Update by observation] Let w be an observation over and let =⟨ S,∼,V,⟩ be an epistemic expectation model. The updated model |_w= ⟨ S',∼',V','⟩ is defined by: S' = {s|((s) w)≠∅}, ∼'_i=∼_i|_S'× S', V'=V|_S', and '(s)=(s) w. The main idea of the updated model is to delete the states where the observation w could not have happened. To reason about agent expectations and observations, the language for ^- is provided below. [^- syntax] Given a countable set of propositional variables , a finite sets of actions , and a finite set of agents , the formulas ϕ of ^- are given by: [ ϕ ⊤| p |ϕ|ϕϕ| K_iϕ| [π] ϕ ] where p∈, i∈, and π∈ℒ_ obs. Intuitively, K_iϕ says that `agent i knows ϕ and [π]ϕ says that `after any observation in π, ϕ holds'. The other propositional connectives are defined in the usual manner. We also define πϕ as ¬ [π]¬ϕ and K̂_iϕ as ¬ K_i ¬ϕ. Typically, πϕ says that `there exists an observation in π such that ϕ holds'. Formula K̂_iϕ says that `agent i imagines a state in which ϕ holds'. The logic ^- is the fragment of , that is, it is the set of formulas in which the π's do not contain any Kleene star *. A more restricted version is the fragment of , where π's are words, that is, observation expressions without + operators. We consider both the single-agent word fragment of ^-, and multi-agent word fragment of ^-. Furthermore, we consider single-agent ^-, and multi-agent ^- (full ^-). [Truth definition for ^-] Given an epistemic expectation model = (S, ∼ ,V, ), a state s∈ S, and a -formula ϕ, the truth of ϕ at s, denoted by ,sϕ, is defined by induction on ϕ as follows: [ ,s p ⇔ p∈ V(s); ,sϕ ⇔ ,s⊭ϕ; ,sϕψ ⇔ ,sϕ and , sψ; ,s K_iϕ ⇔ for all t: (s∼_i t implies ,tϕ); ,s [π]ϕ ⇔ for all observations w over Σ,; w∈(π) ∩((s)); implies |_w,sϕ ] where (π) is the set of prefixes of words in (π), that is, w∈(π) iff ∃ v∈^* such that wv∈(π) (namely (π w)≠∅). The truth of K_iϕ at s follows the standard possible world semantics of epistemic logic. The formula [π]ϕ holds at s if for every observation w in the set (π) that matches with the beginning of (i.e., is a prefix of) some expected observation in s, ϕ holds at s in the updated model |_w. Note that s is a state in |_w because w ∈((s)). Similarly, the truth definition of πϕ can be given as follows: ,sπϕ iff there exists w∈(π) ∩((s)) such that |_w,sϕ. Intuitively, the formula πϕ holds at s if there is an observation w in (π) that matches with the beginning of some expected observation in s, and ϕ holds at s in the updated model |_w. For the example described earlier, we have: - , t [](K_Bob debris K̂_Alice debris), if the moves one step left, , then while Alice still considers moving to the debris-disposal area a possibility, Bob does not consider that possibility at all. Satisfiability problem for ^-: Given a formula ϕ, does there exist a pointed epistemic expectation model ,s such that ,s ϕ? We investigate the complexity of this problem. The fragments of ^- that we consider are (i) single-agent word fragment, (ii) multi-agent word fragment, (iii) single-agent ^-, and, (iv) full ^-. § AN APPLICATION Let us now consider a scenario which can be aptly described using the satisfiability problem of . We go back to the cleaning bot example introduced earlier. Let Alice be agent a and Bob be agent b. Suppose the is moving towards the power source without making any error. Evidently, the possibilities considered by the agents, based on the information available to them are given as follows: - Possibilities considered by Alice who has the information about the glitch in the bot: K̂_̂âdebris∧K̂_̂â + debris∧K̂_̂âpower - Possibilities considered by Bob who is not aware of the glitch in the bot: K̂_̂b̂debris∧K̂_̂b̂power Now, we model the expectations as follows: Consider the expression, π^p_n = ( + )^n that represents a sequence of moves of length n the bot can make to get to to the power source without any error. We use a formula P_n to express the following: As long as the bot is observed to make n many moves towards the power source, reaching it is still a possibility. P_n = (⊤∧⊤) ∧[π^p_1](⊤∧⊤) ∧[π^p_2](⊤∧⊤)… ∧[π^p_n](⊤∧⊤) The first conjunct of P_n translates to move towards the power source, a move towards down or left can be observed. The second conjunct translates to the following: after the observation of a single left or down movement, another left or down movement can be observed. The other conjuncts can be described similarly. For the scenario described in the introduction, we can consider P_n to create a formula where n is at most 3, without an error. Let us denote such a formula by ψ_p. Similarly, a formula can express the movement towards debris-disposal with at most one error and with no error as ψ_de and ψ_d, respectively. A situation where the bot is moving towards the power source without any error, but a considers the possibility of moving towards debris-disposal with an error can be expressed as K̂_̂âψ_de∧ψ_p. Similarly, a formula can be considered for modelling the expected observation when both the agents consider the possibility of the bot moving towards debris-disposal area without an error: K̂_̂âψ_d∧K̂_̂b̂ψ_d. We call the (finite) set of all such formulas, Γ_p. Similarly, we can construct a set Γ_de of formulas, when the bot can make an error while going towards debris-disposal area or Γ_d when it is moving towards the debris-disposal without any error. Suppose we want to conclude the following in the current scenario: After one wrong move, b knows that the bot is not moving towards debris-disposal, but a still considers the possibility. The formula, 𝐼𝑁𝐹𝑂_ab, say, turns out to be + (K_b power∧K̂_̂âdebris) The actual scenario is that the bot is indeed moving towards power. Hence, to check whether 𝐼𝑁𝐹𝑂_ab can be concluded in this scenario, a satisfiability solver for ^- can check the (un)satisfiability of the formula ((⋀_ψ∈Γ_pψ)→𝐼𝑁𝐹𝑂_ab) § ALGORITHM FOR THE SATISFIABILITY PROBLEM OF POLSATALGO In this section, we design a proof system using the tableau method to prove satisfiability of . A term in a tableau proof is of the form (σ w ψ)| (σ w )| (σ,σ')_i, where i∈ Agt. The σ is called a state label that represents a state in the model, w∈Σ^* is a word over a finite alphabet and ψ is a formula in . The term (σ w ψ) represents the fact that the state labelled by σ survives after the model is projected on the word w, and after projecting on w, ψ holds true in the state corresponding to σ. The term (σ w ) represents the fact that the state labelled by σ survives after the model is projected on word w. The term (σ_1,σ_2)_i represents in the model, the states represented by σ_1 and σ_2 should be indistinguishable for the agent i∈ Agt, where Agt is a finite set of agents. For space reasons, the term (σ_1, σ_2)_i∈ Agt stands for the set of terms {(σ_1,σ_2)_i| i∈ Agt}. Without loss of generality, the formula φ is assumed to be in Negative Normal form, the syntax of which is as follows: φ := ⊤ | p | p | ψ∨χ | ψ∧χ | K̂_̂îψ | K_iψ | πψ | [π]ψ Given a formula we denote by φ, FL(φ) the Fischer-Ladner Closure of φ, (see <cit.>). §.§ The Tableau Rules The tableau rules for this fragment have been shown in Figure <ref>. Here an inference rule looks like this: C_1 | C_2 | …| C_n A . Here each C_i and A is a set of tableau terms. The C_is are called consequences, A is the antecedent. Intuitively the rule is interpreted as "If all the terms in A are true, then all the terms in at least one of C_i's are true". In Figure <ref>, the left column is the rule name and the right column is the rule. For example, the Box Project Rule states that "The state labelled by σ survives after projection on word w and it satisfies [π]ψ ((σ w [π]ψ)) and σ still survives a further projection on letter a((σ wa )) then after further projection on a, [π a]ψ should hold true in the state labelled by σ ((σ wa [π a]ψ)).". Recall π a denotes the residual of π by a (see Section <ref>). Similarly, the Diamond Project rule says that if a certain state σ, under some word projection w has to satisfy aψ, then that state σ has to survive projection on wa and also satisfy ψ under the same projection. A tableau proof can be assumed a tree. Each node of the tree is a set of tableau terms Γ. An inference rule can be applied in the following way: If A⊆Γ and C_i's are not in Γ, the children of Γ are Γ C_i for each i∈[n]. When no rules can be applied on a Γ, we say Γ is saturated (leaf node in the proof tree). If ∈Γ, we say that branch is closed. If all branch of the proof tree is closed, we say the tableau is closed, else is open. Given a formula φ, we start with Γ = {(σ ϵ φ), (σ ϵ )}(σ,σ)_i, i∈ Agt. Suppose we aim at deciding whether ϕ := K̂_̂îap∧aK_i ¬ p is satisfiable or not. For simplicity we suppose there is a single agent i. Here are the terms added to the set of terms: * (σ ϵ φ), (σ ϵ ), (σ,σ)_i(initialization) * (σ ϵ K̂_̂îap), (σ ϵ aK_i ¬ p) by AND rule * (σ' ϵ ap), (σ' ϵ ), (σ,σ')_i, (σ', σ')_iby Possibility rule * (σ', σ)_iby Symmetry rule * (σ' a p), (σ' a )by Diamond Project on 2 * (σ a ), (σ a K_i¬ p)by Diamond Project on 2 * (σ' a ¬ p)by Knowledge rule on 3, 5, 6 * by Clash rule on 5,7 As we obtain , the formula ϕ is not satisfiable (by the upcoming Theorem <ref>). §.§ Soundness and Completeness of the Tableau Rules In this section, we provide the soundness and completeness proof of the Tableau method for the satisfiability of Given a formula φ, if φ is satisfiable, then the tableau for Γ = {(σ ϵ φ), (σ ϵ ), (σ,σ)_i∈ Agt} is open. Given a formula φ, if the tableau for Γ = {(σ ϵ φ), (σ ϵ ), (σ,σ)_i∈ Agt} is open, then φ is satisfiable. The proof of Theorem <ref> is done by induction. We shift the proof of Theorem <ref> to the appendix. We now present the proof of Theorem <ref>. Since by assumption, the tableau for Γ = {(σ ϵ φ), (σ ϵ ), (σ,σ)_i∈ Agt} is open, there exists a branch in the tableau tree where in the leaf node there is a set of terms Γ_l such that it is saturated and ∉Γ_l. For the purpose of this proof, let us define a relation over the words w̅ that appears in Γ_l. For any two word w̅_1 and w̅_2 that appears in Γ_l, w̅_1≤_prew̅_2 if and only if w̅_1∈(w̅_2)). Now, this relation is reflexive (w̅_1∈(w̅_1)), asymmetric (if w̅_1∈(w̅_2) and w̅_2∈(w̅_1) then w̅_1 = w̅_2) and transitive (if w̅_1∈(w̅_2) and w̅_2∈(w̅_3) then w̅_1∈(w̅_3)). Hence this relation creates a partial order among all the words occurring in Γ_l. We also denote w̅_̅1̅<_prew̅_̅2̅ to interpret the fact that w̅_̅1̅≤_prew̅_̅2̅ and w̅_̅1̅≠w̅_̅2̅. Now we create a model = W, {R_i}_i∈ Agt, V, Exp out of Γ_l and prove that φ is satisfied by some state in the model. * W = {s_σ|σΓ_l} * R_i = {{s_σ_1, s_σ_2}|(σ_1,σ_2)_i∈Γ_l} * V(s_σ) = {p| (σ ϵ p)∈Γ_l} * Exp(s_σ) = ∑_w∈Λ_σw, where Λ_σ = {w| (σ w )∈Γ_l∄ w':((σ w' )∈Γ_lw<_pre w')} Note that, the new state label σ_n is only created in the possibility rule, with a reflexive relation on itself. Now consider the set R' = {(σ, σ')|{(σ w ), (σ' w' )}⊆Γ_l}. Hence this can be considered a binary relation over the set of all distinct σ that occurs in Γ_l. When a σ' is created by the possibility rule, it is reflexive. Also by the relation rules, they are made symmetrically and transitively related to every other label that has been previously there. Hence R' is an equivalence relation, hence making R_i in the model an equivalence relation. Now, Theorem <ref> follows from the following two claims, the proofs of which we present later. If (σ w )∈Γ_l then s_σ survives in |_w. For any word w that occurs in Γ_l, any label σ and any formula ψ, If (σ w ψ)∈Γ_l and (σ w )∈Γ_l then s_σ survives in |_w and |_w,s_σψ. We induct on the size of |w|. Base Case. Let |w| = 1. Hence w ∈{ϵ}Σ . Since Γ⊆Γ_l and (σ ϵ ), and s_σ is in |_ϵ =. For the case w = a for any a∈Σ. Hence there exists a word w' that occurs in a term in Γ_l labelled by σ such that w∈(w')) and there is no other word bigger than w' such that w' is in its prefix, since the proof is on finite words and formula, the proof terminates. Hence by definition of w'∈(Exp(s_σ)) which guarantees survival of s_σ in |_a. Induction Hypothesis. Assume the statement to be true for |w| = n. Inductive Step. Consider the case where |w| = n + 1. By assumption, (σ w )∈Γ_l. Hence by the fact that Γ_l is saturation and by the rule "Survival Chain", there is (σ w' )∈Γ_l, where w = w'a for some a∈Σ. Hence by IH, the result follows that s_σ survives in |_w'. Now, by termination, there are finite many unique words occurring in Γ_l. Clearly, w'≤_pre w. Since there are finite many words, there is a w_*, which is of maximum size such that w≤_pre w_* and (σ w_* )∈Γ_l. Hence w_*∈Λ_σ in the definition of Exp of the model. Therefore w_*∈(Exp(s_σ)) and since w'≤_pre w≤_pre w_*, s_σ survives in |_w', hence s_σ shall survive in |_w. Naturally, we shall induct upon the size of ψ. Base Case. Let ψ is of the form p or p. By the definition of the function V for the model and the previous proof, the statement stands true. Induction Hypothesis. Let us consider the statement is true for any ψ such that |ψ|<n' for some n'. Inductive Step. We prove for |ψ| = n'. Again, we go case by case on the syntax of ψ. * ψ = K̂_̂îχ. Since Γ_l is saturated, by the rule of possibility, {(σ' w χ), (σ, σ')_i, (σ' w )}⊆Γ_l. By IH on the subformula χ, the definition of the model, the proof of the previous statement, and the rule "survival chain", s_σ' survives in |_w and |_w,s_σ'χ. Also by definition, {s_σ, s_σ'}∈ R_i, hence proving |_w, s_σK̂_̂îχ. * ψ = K_iχ. Since Γ_l is saturated, and by previous statement s_σ' is surviving for every (σ' w ), by the rule of knowledge (σ' w χ)∈Γ_l for every (σ, σ')_i. Hence by IH on subformula, |_w, s_σ'χ for every σ' such that {σ,σ'}∈ R_i. * ψ = π + π'χ. Since Γ_l is saturated, hence by the ND Decomposition, either the term (σ w πχ)∈Γ_l or (σ w π'χ)∈Γ_l. By IH, |_w,s_σπχ or |_w,s_σπ'χ and hence |_w,s_σπ + π'χ. * ψ = ππ'χ. Since Γ_l is saturated, hence (σ w ππ'χ)∈Γ_l. By IH, since ππ'χ∈ FL(ψ), hence |_w, s_σψ. * ψ = aχ. Note that we don't consider a general word w' in the diamond as given w' = aw”, a formula w'χ is satisfiable if and only if aw”χ is satisfiable. * ψ = [π]χ. Let us consider (σ wa )∈Γ_l for some a∈Σ. Hence by the proof of the first statement, s_σ∈|_wa. Also |(π)| < |(π a)|. Hence by induction on the size of formula |_wa,sσ[π a]χ which implies |_w,s_σ[π]χ. This completes the proof of Theorem <ref> §.§ A nexptimeub Upper Bound Now we design an algorithm based on tableau and prove existence of an algorithm that takes non-deterministically exponential steps with respect to the size of φ. Now given a φ, we now create a tree of nodes, where each node T_σ contains terms of the tableau of the form (σ w ψ) and (σ w ), where w∈Σ^* is a word that is occuring in tableau, and ψ is a formula in FL(φ). Each node T_σ refers to a state label σ in tableau, a term of the (σ w ψ)∈ T_σ intuitively translates to in the state corresponding to σ, after projecting model on w, the state survives and there ψ is satisfied. Similarly, (σ w )∈ T_σ means state corresponding to σ survives after projection on w. The tableau tree created, we call it _ We saturate the rules carefully such that each node in the tree corresponds to a single state in the model. This technique is well studied in <cit.>. The satisfiability of is in . Given the tree _ we create in the procedure, a node T_σ is marked satisfiable iff it does not have bot, {(σ w K_iψ), (σ w ψ)}⊈ T_σ and all its successors are marked satisfiable. We prove three statements: * Statement 1: Each node is of at most exponential size, that is, has at most exponential many terms. * Statement 2: Maximum children a node can have is polynomial. * Statement 3: The height of the tree is polynomial. Proof of Statement 1. Since a term in a node T_σ is of the form (σ w ψ), where w is a word over some finite alphabet Σ and ψ is a formula of . According to the shape of the rules, a formula that can be derived is always in FL(φ). Since |FL(φ)|≤ O(|φ|)<cit.>, hence there can be at most O(|φ|) many formulas. Also, since a regular expression π occuring in a modality is star-free (that is does not contain the Kleene star), hence a word w∈(π) is of length at most |π| which is again of length at most φ. Also there are at most |FL(φ)| many regular expressions. Hence there are at most |Σ|^O(p(|φ|)), where p(X) is some polynomial on X, many unique words possible. Hence therefore, there can be at most exponential many terms in a single node. Proof of Statement 2. From a node T_σ, a child is created for every unique triplet of (σ w K̂ψ) in T_σ. Number of such triplets possible is, as proved is at most polynomial with respect to |φ|. Proof of Statement 3. For proving this, we use md(Γ), given a set of formulas Γ, is the maximum modal depth over all formulas in Γ. Finally we define F(T_σ) as the set of formulas occuring in the node T_σ. Consider T_σ, the node T^i_σ' is i- successor of T_σ and T^j_σ” be the j successor of T^i_σ' (i≠ j). Note that all the formulas in F(T^j_σ”) are from FL closure of all the K_j and K̂_̂ĵ formulas from F(T^i_σ'). Also all the formulas in F(T^i_σ') are in the FL closure of the K_i and K̂_̂î formulas occusring in T_σ. Hence md(T^j_σ”)≤ md(F(T^i_σ')). Therefore, there can be at most O(|φ|^c) such agent alterations in one path of _P (not linear because there can be polynomial many words paired with each formula). Now let us consider how many consecutive i succesors can happen in a path. Suppose a T_σ has a new i-successor node T_σ' for the term (σ w K̂_̂îψ). Due to the fact that the indistinguishability relation is equivalence for each agent due to the Transitivity, Symmetry rule and the reflexivity that infers in the possibility rule, hence all the possibility and the knowledge formula terms of the form (σ w' K̂_̂îξ) or (σ w' K_iξ) of agent i are in the successor node T_σ' in the form (σ' w' K̂_̂îξ) or (σ' w' K̂_̂îξ) respectively, along with the term (σ' w ψ). Hence the number of such unique combination of terms will be at most polynomial to the size of |FL(φ)|. Therefore, the height of _ is polynomial with respect to the |φ|. § HARDNESS OF SATISFIABILITY IN POLSATHARD In this section, we give a lower bound to the Satisfiability problem of . We reduce the well-known -Complete Tiling problem to come up with a formula in the fragment that only has 2 agents. satisfiability problem is -Hard. We reduce the -Complete tiling problem of a square whose size is 2^n where n is encoded in unary <cit.> (see Figure <ref>). The instance of the tiling problem is (, , n) where is a set of tile types (e.g 0.5), is a specific tile that should be at position (0, 0), and n is an integer given in unary. Note that the size of the square is exponential in n. We require the colours of the tiles to match horizontally and vertically. The idea of the reduction works as follows. We consider two tilings A and B. We will construct a formula tr(T, t_0, n) expressing that the two tilings are equal, contains t_0 at (0, 0), and respect the horizontal and vertical constraints. With the help of two epistemic modalities K_i and K_j we can simulate a standard K modal logic . For the rest of the proof, we consider such a modality and its dual . We encode a binary tree whose leaves are pairs of positions (one position in tiling A and one in tiling B). Such a tree is of depth 4n: n bits to encode the x-coordinate in tiling A, n bits to encode the x-coordinate in tiling B, n bits to encode the y-coordinate in tiling A, n bits to encode the y-coordinate in tiling B. A pair of positions is encoded with the 4n propositional variables: p_0, …, p_4n-1. The first p_0, …, p_2n-1 encodes the position in tiling A while the later p_2n, …, p_4n-1 encodes the position in tiling B. At each leaf, we also use propositional variables t A (resp. t B) to say there is tile t at the corresponding position in tiling A (resp. tiling B). The following formula enforces the existence of that binary tree by branching over the truth value of proposition p_ℓ at depth ℓ: ⋀_ℓ<4n^ℓ( p_ℓ¬ p_ℓ⋀_i<ℓ (p_i p_i) (¬ p_i ¬ p_i)) Now, by using of specific Boolean formulas over p_0, …, p_4n-1, it is easy to express equality, presence of t_0 at (0, 0) and horizontal and vertical constraints: ^4n(⋁_t t A     ⋀_t ≠ t' (¬ t A ¬t' A) ) ^4n(⋁_t t B     ⋀_t ≠ t' (¬ t B ¬t' B) ) ^4n (position in tiling A = 0) t_0 A ^4n (x-coordinate of position in A = 1 + x-coordinate of position in B)                  ⋁_t , t' |t matches t' horizontally ( t A t' B) ^4n (y-coordinate of position in A = 1 + y-coordinate of position in B)                  ⋁_t , t' |t matches t' vertically ( t A t' B) The main difficulty is to be sure that all pairs of positions with the same position for - let's say - tiling A indicates the same tile for the tiling A (i.e. the same variable t A is true). To this aim, we will write a formula of the following form [π_ any position in A ] ⋁_t ^4n t A    [π_any position in B] ⋁_t ^4n t B. To be able to perform observations to select any position in tiling A (resp. B) whatever the position in tiling B (resp. A) is, we introduce the alphabet Σ = A, A̅, B, B̅. We write these two formulas that make a correspondence between valuations on the leaves and observations: ^4n⋀_i=0..2n-1 [A + A̅]^i ([ (p_i A⊤ [A̅]); (¬ p_i A̅⊤ [A]) ]) ^4n⋀_i=2n..4n-1 [B + B̅]^i-2n([ (p_i B⊤ [B̅]); (¬ p_i B̅⊤ [B]) ]) The idea is that a 2n-length word on alphabet A, A̅ corresponds to a valuation over p_1, …, p_2n-1, and thus a position in tiling A and only that 2n-length word on alphabet A, A̅ is observable. In the same way, a word on alphabet B, B̅ corresponds to a valuation over p_2n, …, p_4n-1, thus a position in tiling B. We also say that the inner node (non-leaf) of the binary tree is never pruned by observations (all 2n-length words over A, A̅, B, B̅ are observable): ^< 4n⋀_i=0..2n-1 [Σ]^i ( A ⊤A̅⊤ B ⊤B̅⊤) The formula for ensuring the uniqueness of q_t^A whatever the position in tiling B, and the other way around are then: [(A+A̅)^2n] ⋁_t ^4n t A [(B+B̅)^2n] ⋁_t ^4n t B The intuition works as follows. When evaluating [(A+A̅)^2n] ^4n t A, we consider all words w in ℒ((A+A̅)^2n) and we consider any pruning |_w of the model which contains the binary tree . In |_w, only the leaves where the valuation on p_0, …, p_2n-1 that corresponds to w stays. With ⋁_t, we choose a tile type t in T. The modality ^4n then reaches all the leaves and imposes that t A holds. The reduction consists of computing from an instance (, , n) of the tiling problem the formula tr(, , n) which is the conjunction of (1-12), which is computable in poly-time in the size of (, , n) (recall n is in unary). Furthermore, one can check that (, , n) is a positive instance of the tiling problem iff tr(, , n) is satisfiable. § COMPLEXITY RESULTS OF FRAGMENTS OF POLSATFRAG In this section, we consider a few fragments of ^- and we give complexity results for them. First, we consider the single agent fragment of ^-, and then we prove complexity results for the word fragment of ^- (both single and multi-agent) using reductions to . §.§ Single agent fragment of POLsatfragsingleagent While we have shown (in Theorem <ref>) that the satisfiability problem of the is -Hard, the hardness proof holds only for the case when the number of agents is at least 2. However, we prove that satisfiability problem in the single Agent fragment of is -Hard, although single-agent epistemic logic S5 is -Complete. We prove it by reducing TQBF into our problem. The TQBF problem is: given a formula φ of the form Q_1x_1Q_2x_2… Q_nx_n (x_1,x_2,…,x_n) where Q_i∈{∀, ∃} and (x_1,x_2,…,x_n) is a Boolean formula in CNF over variables x_1,…, x_n, decide whether the formula φ is true. The satisfiability problem for single agent fragment of is -Hard. The proof follows in the same lines as the proof of -Hardness of the model-checking problem of the (<cit.>). We present the complete proof of Theorem <ref> in the appendix. §.§ Word fragment of POLsatfragword To investigate the complexity of the satisfaction problem of the word fragment of ^-, we use a translation of ^- to . Before going forward, let us give a very brief overview of the syntax and semantics of . §.§.§ Public announcement logic PALintro To reason about announcements of agents and their effects on agent knowledge, <cit.> was proposed. The underlying model that is dealt with in is epistemic, ⟨ S, ∼ ,V⟩ where S is a non-empty set of states, ∼ assigns to each agent in an equivalence relation ∼_i ⊆ S × S, and V : S → 2^ is a valuation function. The language is given as follows: [ syntax] Given a countable set of propositional variables , and a finite set of agents , a formula φ in Public Announcement Logic () can be defined recursively as: φ := ⊤ | p | ϕ | ϕ∧ϕ | K_iϕ | [ϕ!]ϕ where p∈, and i∈. Typically, [ϕ!]ψ says that `if ϕ is true, then ψ holds after having publicly announced ϕ'. Similarly, as in ^- syntax, the respective dual formulas are defined as, K̂_̂îψ = K_iψ ϕ!ψ = [ϕ!]ψ Formula ϕ!ψ says that ϕ is true, and ψ holds after announcing ϕ. Before going into the truth definitions of the formulas in , let us first define the notion of model update. [Model Update by Announcement] Given an epistemic model, = ⟨ S, ∼ ,V⟩, s ∈ S, and a formula ϕ, the model |_ϕ = ⟨ S', ∼' ,V'⟩ is defined as: * S' = {s∈ S|,sϕ} * ∼'_i=∼_i|_S'× S', * V'(s) = V(s) for any s∈ S'. Now we are all set to give the truth definitions of the formulas in with respect to pointed epistemic models: [Truth of a formula] Given an epistemic model = ⟨ S, ∼, V⟩ and an s∈ S, a formula φ is said to hold at s if the following holds: * ,s p iff p∈ V(s), where p∈. * ,sϕ iff ,s⊭ϕ. * ,sϕ∧ψ iff ,sϕ and ,sψ. * ,s K_iϕ iff for all t∈ S with s∼_i t, ,tϕ. * ,s [ψ!]ϕ iff ,sψ implies|_ψ,sϕ. §.§.§ On complexity To study the satisfiability problem for the word fragment of , we transfer the following result from to : <cit.> The satisfiability problem of is -Complete for the single-agent case and -Complete for the multi-agent case. is the extension of epistemic logic with dynamic modal constructions of the form [ϕ!]ψ that expresses `if ϕ holds, then ψ holds after having announced ϕ publicly'. The dynamic operator π in the word fragment of consists in announcing publicly a sequence of observations. W.l.o.g. as π is a word a_1… a_k, π can be rewritten as a_1…a_k. In other words, we suppose that the dynamic operators only contain a single letter. The mechanism of is close to Public announcement logic (). Observing a consists in announcing publicly that wa occurred where w is the observations already seen so far. We introduce fresh atomic propositions wa to say that letter a is compatible with the current state given that the sequence w was already observed. For all words w ∈Σ^*, we then define tr_w that translates a formula into a formula given that w is the already seen observations seen so far: tr_w(p) = p tr_w(¬ϕ) = ¬ tr_w(ϕ) tr_w(ϕψ) = tr_w(ϕ) tr_w(ψ) tr_w(K_i ϕ) = K_i tr_w(ϕ) tr_w( a ϕ) = wa tr_wa(ϕ) We finally transform any formula ϕ into tr(ϕ) := tr_(ϕ). Consider the formula ϕ := [a]aa⊤. tr(ϕ) is [p_a!]aaa⊤. Note that if a is false, the truth value of aa is irrelevant. ϕ is satisfiable in the word fragment of iff tr(ϕ) is satisfiable in . (sketch) ⇒ Suppose there is a pointed model , s_0 such that , s_0 ϕ. We define ' to be like except that for all states s in , for all w ∈Σ^*, we say that w is true at ', s iff (s) w ≠∅. It remains to prove that ', s_0 tr(ϕ). We prove by induction on ϕ that for all w ∈ words(ϕ), if (s) w ≠∅ then |_w, s ϕ iff ', s tr_w(ϕ). We only show the interesting case of φ = aψ. Here the _w(aψ) = wa_wa(ψ). By assumption, |_w,saψ. Hence |_wa,sψ. Therefore Exp(s) wa≠∅. By definition of ', p_wa is true in s. Therefore by IH ',s_wa(ψ). And since p_wa is true, hence ',sp_wa!_waψ. Conversely, assuming ',swatr_wa(ψ). Hence wa is true in s. By definition, wa is true iff (s) wa≠∅. Also by IH, |_wa,sψ. Hence |_w,saψ. ⇐ Suppose there is a pointed epistemic model ', s_0 such that ', s_0 tr(ϕ). We define a model like ' except that for all states s, (s) = w ∈Σ^* , s w. It remains to prove that , s_0 ϕ. For the rest of the proof, we prove by induction on ϕ that for all w ∈Σ^*, if (s) w ≠∅ then |_w, s ϕ iff ', s tr_w(ϕ). The proof goes similarly as earlier. Note that the single-agent and multi-agent word fragment of is a syntactic extension of propositional logic and the multi-agent epistemic logic respectively, which are -Hard and -Hard respectively. From the fact that the satisfiability problem of single agent and the multi-agent fragments of is in and respectively, we have the following corollaries of Proposition <ref>. The satisfiability problem of the single-agent word fragment of is -Complete. The satisfiability problem of the multi-agent Word fragment of is -Complete. § RELATED WORK The complexity of Dynamic Epistemic Logic with action models and non-deterministic choice of actions is -Complete too <cit.> and their proof is similar to the one of Theorem <ref>. The tableau method described for ^- uses a general technique where terms contain the observations/announcements/actions played so far. This technique was already used for PAL <cit.>, DEL <cit.>, and for a non-normal variant of PAL <cit.>. Decidability of (single-agent) epistemic propositional dynamic logic () with Perfect Recall () and No Miracles () is addressed in <cit.>. Although and are validities in ^-, there are differences to consider even in single agent. Firstly, in an model, a possible state can execute a program a and can non-deterministically transition to a state among multiple states, whereas in ^-, if a state survives after observation a, it gives rise to the same state except the function gets residued. Also, in , after execution of a program, the state changes hence the propositional valuation in the state changes, whereas in ^-, the state survives after a certain observation and hence the propositional valuation remains the same. Whereas in , observations update the model, there are other lines of work in which specifying what agents observe define the epistemic relations in the underlying Kripke model <cit.> (typically, two states are equivalent for some agent i if agent i observes the same facts in the two states). § PERSPECTIVES This work paves the way to an interesting technical open question in modal logic: the connection between and product modal logics. Single-agent is close to the product modal logic S5 × K, the logic where models are Cartesian products of an S5-model and a K-model. Indeed, the first component corresponds to the epistemic modality K̂_i while the second component corresponds to observation modalities π. There are however two important differences. First, in , valuations do not change when observations are made. Second, the modality π is of branching at most exponential in π while modalities in K-models do not have branching limitations. We conjecture that the two limitations can be circumvented but it requires some care when applying the finite model property of product modal logic S5 × K. If this connection works, it would be a way to prove -Completeness of star-free single-agent . Recall that is close to with propositional announcements only (see Proposition <ref>). We conjecture some connections between and arbitrary <cit.>, and more precisely with Boolean arbitrary public announcement logic <cit.>. Indeed, the non-deterministic choice + enables to check the existence of some observation to make (for instance, (a+b)^10ϕ checks for the existence of a 10-length word to observe), which is similar to checking the existence of some Boolean announcement. The next perspective is also to tackle with Kleene-star in the language. This study may rely on techniques used in epistemic temporal logics. PAL with Kleene-star is undecidable <cit.>. Again, the undecidability proof relies on modal announcements. Since is close to Boolean announcements, this is a hope for to be decidable. The idea would be to exploit the link between dynamic epistemic logics and temporal logics <cit.>, and rely on techniques developed for tackling the satisfiability problem in epistemic temporal logics <cit.>. unsrt § PROOF OF SOUNDNESS OF THE TABLEAU METHOD OF POLSTABLEAUSOUND In this section we give the proof the Theorem <ref>. We prove that if φ is satisfiable then there exists a subtree rooted at some child Γ_c of the root Γ in 𝒯 which is open using induction on the depth of the tableau tree 𝒯. Let the pointed epistemic model that satisfy φ be ,s. Base Case. Let the base case be |Γ| = 2 + |Agt|. Since we start from {(σ ϵ φ), (σ, σ)_i∈ Agt, (σ ϵ )}. Hence this implies φ = l or φ = [a]ψ, where l is a literal (a positive propositional letter or a negation of it). Hence the tableau remains open since no ∈Γ. Induction Hypothesis. Let the statement be true for any tableau tree 𝒯 of depth at most n. Inductive Step. Consider the tableau tree 𝒯 rooted at Γ = {(σ ϵ φ), (σ, σ), (σ ϵ )} of depth at most n + 1. Now we go case by case with φ: * φ = ψ∧χ: We apply AND rule, and hence Γ' = Γ{(σ ϵ ψ), (σ ϵ χ)}, which is a child rooted at Γ. Since ,sφ, by definition ,sψ and ,sχ. By IH, tableau tree rooted at Γ' is open, which suggestes, 𝒯 is open. * φ = ψ∨χ: We apply OR rule and hence we get two children, Γ_1 = Γ{(σ ϵ ψ)} and Γ_2 = Γ{(σ ϵ χ)}. Since ,sψ∨χ, hence ,sψ or ,sχ. By IH, one of the sub tableau tree rooted at Γ_1 or Γ_2 will be open, hence implying 𝒯 to be open. * φ = K_iψ: By applying the Knowledge rule, Γ' = Γ{(σ' ϵ ψ)|{(σ, σ')_i, (σ' ϵ )}⊆Γ}. Since ,s K_iψ, hence for every s'∈ such that s∼_i s', ,s'ψ. By IH, the tableau subtree rooted at Γ' is open. * φ = K̂_̂îχ: By applying the possibility rule, Γ' = Γ{(σ_n, σ_n)_i∈ Agt, (σ_n ϵ ), (σ_n ϵ χ)}{(σ, σ_n)_i, (σ_n, σ)_i}{(σ', σ_n)_i, (σ_n, σ')_i|{(σ' ϵ ), (σ,σ')_i}⊆Γ_l}. Since, ,sK̂_̂îχ, hence there is an s_n∈ such that ,s_nχ, also s_n∼ s' for every s'∈ since ∼ is an equivalence relation. Hence by IH, the sub tableau tree rooted at Γ' is open. * φ = ππ'ψ: Hence by the diamond decomposition rule Γ' = Γ{(σ ϵ ππ'ψ)}. Since ,sππ'ψ, hence |_w_*,sψ, for some w_*∈(ππ') which also suggests, there is a w∈(π) and a w'∈(π') such that w_* = ww'. Hence |_w,sπ'ψ, which implies ,sππ'ψ. By IH, the tableau for Γ' is open. * φ = aψ. Hence now by Projection rule, Γ' = Γ{(σ a ), (σ a ψ)}. Since ,saψ, therefore s∈|_a and |_a,sψ. And hence by IH, tableau tree rooted at Γ' is open. * φ = [π]ψ. Say (σ a ) is added for some diamond formula term, say of the form (σ ϵ aψ'), hence the proof has added (σ a [π]ψ). By assumption , saψ', and hence s∈|_a. Therefore |_a,s[π a]ψ, hence the proof starting from terms {(σ, σ), (σ a ), (σ a [π a]ψ)} will remain open. For the other case (σ a ) is not added, hence s∉|_a, therefore the tableau remains open. The box modality case goes similarly as in case of single agent. § THE POLSTABLEAUALGO ALGORITHM FOR STAR-FREE MULTI-AGENTS Now we design an algorithm based on tableau and prove existence of an algorithm that takes non-deterministically exponential steps with respect to the size of φ. Now given a φ, we now create a tree of nodes that contains terms of the form (w, ψ) and (w, ), where w∈Σ^* is a word that is occuring in tableau, and ψ is a formula in FL(φ). Each node T_σ refers to a state label σ in tableau, a term of the (w,ψ)∈ T_σ intuitively translates to in the state corresponding to σ, after projecting model on w, the state survives and there ψ is satisfied, and hence refers to the term (σ w ψ) in tableau. Similarly, (w, )∈ T_σ means state corresponding to σ survives after projection on w, and hence refers to the term (σ w ) in the tableau. The tableau tree created, we call it _ For this algorithm, we change the definition of saturation and unsaturation a bit from the earlier definition. We say T_σ is unsaturated against a rule R iff there is a term in (w, ψ)∈ T_σ or (w, )∈ T_σ, such that (σ w ψ) or (σ w ) lies in the numerator of R but there is no denominator (σ w ψ') of R such that (w, ψ') is in T_σ, similar for terms like (w, ). We call the term (w, ψ) or (w, ) here to be the reason for unsaturation. We saturate the rules carefully such that each node in the tree corresponds to a single state in the model. This technique is well studied in <cit.>. StarFree-SAT § SATISFIABILITY PROBLEM OF SINGLE AGENT FRAGMENT OF POLSONEAGENTHARDNESS-HARD In this section we prove Theorem <ref> We will prove the hardness by reduction from TQBF. Given a QBF formula φ = ∃ x_1 ∀ x_2 … Q_nx_n(x_1,…, x_n), where Q_i is ∃ if i odd, and is ∀ is even, and (x_1,…, x_n) is a propositional formula in CNF over variables x_1,…,x_n. Without loss of generality, we suppose we have m clauses with at most 3 literals in each. The objective is to define a -formula (), computable in poly-time in ||, such that is QBF-true iff () is -satisfiable. To save space, we denote x_i as x_i, where x_i is a variable in the TQBF. We also write x_i for x_i. Definition of (). We encode valuations over x_1, …, x_n by words on the alphabet i, i | i=1..n. We say that a literal _h is consistent with a word w if a__h appears in w. For instance the word 1 2 3 encodes the valuation in which x_1 is true and both x_2 and x_3 are false. Set of valuations are represented by languages. For 1 ≤ u ≤ v ≤ n, B^u_v := (u+1 + u+1)(u+2 + u+2)…(v + v), (by convention B^u_v = ϵ when u=v). Intuitively, B^u_v is the language encoding the set of all possible valuations over propositions x_u+1, …, x_v. * We first define several formulas to express constraints on expectations: * The formula T_i := (i⊤∧i⊤) imposes that the current state survives after observing both i as well as i. * For each literal _h being x_h or x_h, we build a formula L_h that enforces the expectation at the current state contains all words encoding valuations over propositions x_1, …, x_n in which _h is true: L_h := ⋀_i = 1^h-1([B^0_i-1]T_i ∧ [B^0_h-1](a__h⊤∧ [a__h]) ∧ ⋀_i=h+1^n[B^0_h-1a__hB^h_i-1]T_i * Finally for each clause C_j = (_h∨_r∨_k), we define the formula (C_j) := K(p_j→(L_h∨ L_r∨ L_k))∧K̂p_j. The subformula K̂p_j enforces the existence of a p_j-state. And the subformula K(p_j→(L_h∨ L_r∨ L_k)) enforces that any p_j-state survives on all the words from w∈ B^0_n which are consistent with either l_h, or l_r or l_k. * S := (Q_1x_1)(Q_2x_2)…(Q_nx_n)⋀_j=1^mK̂p_j where (∀ x_i) = [i + i], (∃ x_i) = i + i for any i∈[n]. Intuitively, the choice of a valuation over x_1, …, x_n by the two players ∃ and ∀ in the QBF-prefix Q_1x_1… Q_n x_n is simulated by the choice of a word in B^0_n by the two players . and [.] so that all clauses are true (i.e. all p_j-state survives the observation of w: ⋀_j=1^mK̂p_j). The -formula is defined by = ⋀_C_j∈φ(C_j)∧ S∧ (⋁_j=1^m p_j) Note that can be computed in poly-time in |φ|. Let us prove that φ is a QBF-true iff is -satisfiable. ⇒ First assume φ is QBF-true. Hence there is a Quantifier tree that is certifying the truth. We create a model: * W = {1,2,…, m} * R = W× W * V(j) = {p_j} * Exp(j) = ∑_l_i∈ C_j(Π_k=1^i-1(a_k + a̅_̅k̅)_c(l_i)Π_k=i+1^n(a_k + a̅_̅k̅)) First we prove _φ,1(C_j) for every j∈[m]. Consider for any state j, since V(j) = {p_j}, hence K̂p_j stands true from any state. Now we prove, since the formula K(p_j→ (_m(l_h)∨_m(l_i)∨_m(l_k))) insists, that _φ,j(_m(l_h)∨_m(l_i)∨_m(l_k)) Given C_j = (l_h∨ l_k∨ l_r), since φ is true, there is a path (among many) is the quantifier tree where at the end of the path (leaf node), it is true that in every clause at least one literal is assigned true. Let us consider in C_j, in this path, l_h was assigned true. By induction on 1≤ i< h, it can be proved that _φ|_w,j T_i for every w∈(B^1_i-1). By the definition of the model and by the fact that for every w∈(B^1_h-2), _φ|_w,j T_h-1, it can be seen that _φ|_w,j([a_l_h]∧a_l_h⊤) for every w∈(B^1_h-1) Again, by induction on h < i≤ n, it can be proved that, _φ|_w,j T_i for every w∈(B^0_h-1a_l_hB^h_i-1). Hence _φ,1(C_j). Now we prove If φ is true then _φ,1_c(Q_1x_1)_c(Q_2x_2)…_c(Q_nx_n)⋀_j=1^m K̂p_j. We prove for any i∈{0,…, n-1}, _φ|_(l_1)…(l_n-i),1(Q_n-i+1x_n-i+1)…(Q_nx_n)⋀_j=1^mK̂p_j, where (l_1,…,l_n-i) represent any assignment respect to the true paths in the quantified boolean tree upto level n-i. Base Case Consider the case for i=1. Since by assumption n is even, (Q_nx_n) = [n + n]⋀_j=1^mK̂p_j. Since by assumption, (l_1,…,x_n) and (l_1,…,x̅_̅n̅) are a satisfying assignment for ξ, hence _φ|_(l_1)…n, 1⋀_j=1^mK̂p_j as well as _φ|_(l_1)…n,1⋀_j=1^mK̂p_j. Inductive Step Consider i = k+1. Consider the case where i is even. Hence (Q_n-i+1x_n-i+1) = n-i+1 + n-i+1. Therefor by assumption (l_1,…, x_n-i+1) or (l_1,…, x̅_̅n̅-̅i̅+̅1̅) is an assignment that is making ξ true. Hence by IH , _φ|_(l_1)…(l_n-i)n-i+1, 1(Q_n-i+2x_n-i+2)…(Q_nx_n)⋀_j=1^mK̂p_j or _φ|_(l_1)…(l_n-i)n-i+1,1(Q_n-i+2x_n-i+2)…(Q_nx_n)⋀_j=1^mK̂p_j, which implies _φ|_(l_1)…(l_n-i), 1n-i+1 + n-i+1(Q_n-i+2x_n-i+2)…(Q_nx_n)⋀_j=1^mK̂p_j. ⇐ Now assume has a model such that ,s. Now we derive the quantifier tree certifying φ to be true. We prove the following: If t∈|_(l_1)…(l_n) and M,t p_j then (l_1,…,l_n) is a satisfying assignment for clause C_j. Without loss of generality, let us consider C_j = (l'_h∨ l'_r∨ l'_k). Hence to σ = (l_1,…,l_n) to be a satisfying assignment for C_j, at least one of l'_h, l'_r or l'_k should be in σ. Suppose (l_1,…,l_n) is not a satisfying assignment. Hence l_h = l̅'̅_̅h̅, l_r = l̅'̅_̅r̅ and l_k = l̅'̅_̅k̅. Also by assumption, p_j is true in t. Therefore either L'_h or L'_r or L'_k is true here. Consider the term L'_h. By definition the term [B^0_h-1](a_l'_h⊤∧[a_l'_h]) is ANDed and hence is true, but this cannot be true since after projecting on (l_1)…(l_h-1), B^0_h-1(l_1)…(l_h-1) is non-empty and hence |_(l_1)…(l_h-1),t(a_l'_h⊤∧[a_l'_h]). But this is a contradiction since (l_h) = (l̅'̅_̅h̅) = a_l'_h. For any 1≤ i≤ n, If s∈|_(l_1)…(l_n-i) and |_(l_1)…(l_n-i), s(Q_n-i+1x_n-i+1)…(Q_nx_n)⋀_j=1^mK̂p_j then Q_n-i+1x_n-i+1… Q_nx_nξ|_(l_1,…,l_n-i) is true. We state that the statement as the Induction Hypothesis. Now we prove the Base Case for it, that is i = 1. Base Case. By assumption s survives in the projection and |_(l_1)…(l_n-1),s[n + n]⋀_j=1^mK̂p_j. By proposition 1, since at least one state with p_j for each j is surviving, hence all the clause is still surviving in ξ|_(l_1,…,l_n-1). Since s has at least one of p_j true here and because of the K formula, s survives on both projection on n as well as n, and hence after that at least one state where p_j is true surviving for each j. Hence ∀ x_nξ_l_1,…,l_n-1 is true. Inductive Step. The Inductive Step is similarly proven as in base case.
http://arxiv.org/abs/2306.05982v1
20230609155135
Quantum Internet Addressing
[ "Angela Sara Cacciapuoti", "Jessica Illiano", "Michele Viscardi", "Marcello Caleffi" ]
quant-ph
[ "quant-ph" ]
⋮ Quantum Internet Addressing Angela Sara Cacciapuoti, Senior Member, IEEE, Jessica Illiano, Michele Viscardi, Marcello Caleffi, Senior Member, IEEE A.S. Cacciapuoti, J. Illiano, M. Viscardi, M. Caleffi, are with the www.quantuminternet.itwww.QuantumInternet.it research group, FLY: Future Communications Laboratory, University of Naples Federico II, Naples, 80125 Italy. E-mail:mailto:[email protected]@unina.it, mailto:[email protected]@unina.it, mailto:[email protected]@unina.it, mailto:[email protected]@unina.it. Web: http://www.quantuminternet.itwww.quantuminternet.it. Michele Viscardi acknowledges PNRR MUR project CN00000013, Marcello Caleffi acknowledges PNRR MUR project RESTART-PE00000001, Angela Sara Cacciapuoti acknowledges PNRR MUR NQSTI-PE00000023. ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The design of the Quantum Internet protocol stack is at its infancy and early-stage conceptualization. And different heterogeneous proposals are currently available in the literature. The underlying assumption of the existing proposals is that they implicitly mimic classical Internet Protocol design principles: “A name indicates what we seek. An address indicates where it is. A route indicates how to get there”. Hence the network nodes are labeled with classical addresses, constituted by classical bits, and these labels aim at reflecting the node location within the network topology. In this paper, we argue that this twofold assumption of classical and location-aware addressing constitutes a restricting design option, which prevents to scale the quantumness to the network functionalities, beyond simple information encoding/decoding. On the contrary, by embracing quantumness within the node addresses, quantum principles and phenomena could be exploited for enabling a quantum native functioning of the entire communication network. This will unleash the ultimate vision and capabilities of the Quantum Internet. Quantum Addressing, Quantum Routing, Entanglement, Quantum Path, Overlay Quantum Network, Forwarding. § INTRODUCTION The Quantum Internet is envisioned as the final stage of the quantum revolution, opening fundamentally new communications and computing capabilities beyond quantum cryptography <cit.>. These unparalleled functionalities have the potential of radically changing the world in which we live in ways we cannot imagine yet. As a matter of fact, a preliminary set of specifications for the Quantum Internet has already being drafted, with several experimental and standardization efforts, ranging from IETF with the seminal “architectural principles” RFC <cit.>, to ITU, IEEE, GSMA, and ETSI. In this vibrant context, the state-of-art related to the design of the Quantum Internet protocol stack is at its infancy and early-stage conceptualization <cit.>. Hence, we are still very far from having a complete and univocal protocol model, as we have for classical Internet. And different heterogeneous proposals are currently available in the literature <cit.>. Indeed, the underlying hypothesis of the existing proposals is to implicitly mimic classical Internet Protocol (IP) design principles: “A name indicates what we seek. An address indicates where it is. A route indicates how to get there” <cit.>. Hence, network nodes are implicitly labeled with classical addresses, constituted by classical bits, and these labels aim at reflecting the node location within the network topology. In this paper, we argue that this twofold assumption of classical and location-aware addressing constitutes a restricting design option, which prevents to scale the quantumness to the network functionalities, beyond simple information encoding/decoding. Conversely, by embracing quantumness within the network functionalities, quantum principles and phenomena could be exploited for enabling a native quantum functioning of the entire communication network. This can be regarded as an additional level of Internet quantization, where the original level was to quantize the messages delivered by the network, while the second level is to quantize the network functionalities. To this aim, a quantum addressing is a mandatory pre-requisite for any network functionality design, and in the following we will focus on it, as archetypal case study capable of providing the reader with an overview of the potentialities offered by a native quantum network functioning. § BACKGROUND: CLASSICAL INTERNET ADDRESSING Since Kleinrock's seminal work <cit.> dated over forty years ago, classical Internet routing has pursued scalability mainly through clustering. More into details, it is well known that maintaining complete topological knowledge about the network topology at each node, through one routing table entry for each destination, becomes quickly prohibitive both in terms of storage and update cost, as the number of network nodes grows. Hence, the key design principle behind classical Internet routing has been the wisely selection of the partial topological information to be stored at each node, by reducing substantially the size of the routing tables. For this, topological details about remote portions of the network are discarded. Hence, at each node, (almost) complete topological information should be available for destinations that are close[Close according to some meaningful metric from a topological perspective, with the representative example constituted by hop-count.] to the node, whereas less information should be maintained for destinations far away. And the further the destination is, the less the information is. This design principle is achieved through hierarchical clustering of the network: nearby nodes are grouped into clusters, clusters into super-clusters, and so on in a bottom-up fashion with multiple levels of hierarchy among clusters. Routing tables are thus organized so that they keep only one entry for all the nodes in each cluster level and, if the cardinality of the clusters grows exponentially as the level increases (i.e., 𝒪(2^k) nodes in a k-level cluster), the number of routing entries scales logarithmically with the network size. Almost all the proposals for classical Internet trying to address routing table scalability – included the ones used nowadays in the form of CIDR for inter-domain routing or OSPF/ISIS areas for the intra-domain routing – are based, explicitly or implicitly, on the hierarchical routing principle <cit.>. It must be noted though that classical Internet topology is not a static hierarchical topology per-se, such as the one exhibited by regular static graphs like trees or grids. Rather, it is a dynamic topology exhibiting scale-free characteristics. This implies that the topological information reduction, rather than based on some peculiar characteristics of the underlying physical graph, is fundamentally obtained by embedding some topological information within node labels. Thus, node labels cannot be arbitrary identities – i.e., flat addresses such as IEEE 802 MAC ones – but they must somehow reflect the node location within the network topology, as it happens with IP addresses by design <cit.>. Unfortunately, location-aware addressing such as IP one doesn't come for free. As instance, it requires extensive assignment planning and management, as well as additional network functionalities, with DNS as pivotal example, for mapping univocal node identities (i.e., names in IP terminology) to node addresses. Furthermore, the reduction of the topological information stored at each node implies a sub-optimality of the path discovery process, regardless of the particulars of the adopted routing protocol. Indeed, packets can be forwarded through longer routes. Overall, from a network perspective, Internet routing scalability is achieved through topological depletion. Path discovery is not performed through the entire physical network, rather it is performed through the overlay routing network implemented according to the incomplete topological information stored within the routing tables. Hence, the overlay network is built on top of the underlying physical network by incorporating only a subset of the links available within the physical graph, as represented in Figure <ref>. This topological depletion allows to reduce Internet physical graph structure to some regular graph, such as a tree, which in turn is strongly influenced by the underlying network characteristics. § QUANTUM ADDRESSING As discussed in Section <ref>, quantum addressing – as the quantum equivalent of the univocal network addressing provided by IP and its consequences on routing within the Quantum Internet – is yet an unexplored research domain. A notable exception is <cit.>, where quantumness is exploited for enabling quantum networks to perform different tasks and to address other devices in a coherent fashion through control quantum registers. In this paper, for the first time to the best of our knowledge, we discuss key drawbacks arising by adopting classical, location-aware addressing within the Quantum Internet, namely, i) failing in modeling the peculiarities of entanglement-enabled connectivity, ii) failing in embracing the unique propagation characteristics of quantum information carriers, and iii) failing in modeling the very fundamental goal of Quantum Internet routing. §.§ Entanglement-Enabled Connectivity From a network perspective, entanglement enables a new and richer form of connectivity, with respect to classical networks <cit.>. Specifically, once an entangled state – say an EPR pair for the sake of exemplification – has been shared between two nodes, a qubit can be “transmitted” via quantum teleportation, regardless of the instantaneous conditions of the physical quantum link connecting the two nodes. Remarkably, qubit transmission is still possible even when there is no longer a quantum link connecting the nodes together. In this sense, entanglement enables a new form of connectivity, referred to as entanglement-enabled connectivity, which differs from classical Internet connectivity in that: i) it exhibits a weaker dependency on the underlying physical communication link, and ii) it exhibits unconventional temporal dynamics, since entanglement is depleted once used. Furthermore, entanglement can be swapped and, hence, it is possible to dynamically, namely, at run-time, change the identities of the entangled nodes. Hence, entanglement redefines the very same concept of topological neighborhood[It is worthwhile to underline that neighborhood is a crucial concept in classical Internet routing, where the store-and-forward paradigm exploits neighbor nodes for delivering packets to remote nodes.], with no counterpart in the classical world <cit.>. Accordingly, entanglement enables half-duplex unicast links between any pairs of nodes, regardless of their relative positions within the underlying physical network topology. In other words, any pair of nodes can be neighbor as long as they share entanglement. Additionally, entanglement is not limited to EPR pairs. Indeed, when it comes to multipartite entanglement, the dynamic nature of the entanglement-based connectivity becomes even more evident. As instance, by distributing an n-qubit GHZ state among n network nodes, an EPR pair can be distributively extracted by any pair of nodes, with the identities of the entangled nodes chosen at run-time. Hence, even if all the n nodes are possible neighbor nodes, only two out of n can actually exploit the entanglement to create a half-duplex unicast link. From the above, it becomes evident that any addressing scheme for the Quantum Internet, aiming at achieving routing scalability, cannot resort to classical node identities reflecting node location within the physical network topology, as it happens with classical IP addresses. Rather, it should aim at properly capturing and tracking the rich, dynamic nature of entanglement-enabled connectivity. As a matter of fact, entanglement-enabled connectivity should not only be captured by the quantum addressing. But, it should be properly engineered for improving the routing process, as further discussed in Section <ref>. Specifically, as described in Section <ref>, hierarchical routing achieve scalability through incomplete topological information. This is equivalent to build an overlay routing network with special graph properties through topological depletion, by storing within the routing tables only a subset of the forwarding possibilities offered by the physical neighbors. Conversely, entanglement-enabled connectivity allows to augment the neighbor set, by creating “additional” links toward remote nodes through entanglement swapping. Hence, it enables the possibility to build and to engineer an overlay entangled network where the network graph properties needed by the routing process are obtained through topological augmentation, rather then topological depletion, as depicted in Figure <ref>. This possibility will be further discussed in Section <ref>. It is important to highlight that, although overlay networks enabled by entanglement share some similarities with classical virtual overlay networks – such as those arising, as instance, with P2P systems – entanglement-enabled connectivity unlocks characteristics with no classical counterpart, as discussed in the following. Classical overlay networks aim at form virtual neighboring relationships, used to build a specific overlay graph. The overlay graph properties are thus exploited by the overlay routing protocol. Yet neighborhood in classical overlay networks is a virtual concept. Usually, there is no physical link between two nodes that are neighbor in the overlay network. Rather the two nodes are remote within the underlying physical topology, and the physical multi-hop path between the two nodes does not exhibit any particular graph property. This – unless assuming the overlay network provided with a complete knowledge about the topology of the portion of the network where the two neighbor nodes belongs, which is unreasonable from a scalability perspective – implies that the actual packet forwarding through the underlying physical network introduces a performance degradation that grows with the network size. Conversely, entanglement-enabled overlay networks provide the nodes with entanglement-enabled links (e-link) that can be used on-the-fly, without introducing any delay nor any performance degradation due to the mismatch between overlay and underlay network as in the classical case. Indeed, any quantum[As regards to classical signaling, it can be delayed further in time thanks to the deferred measurement principle.] overhead induced by the establishment of the e-links toward remote nodes occurred beforehand. Thus, the set-up process can be properly engineered for establishment e-links proactively as discussed in Section <ref>, so that the actual entanglement utilization does not incur in any additional overhead. §.§ Quantum Path Existing models for the Quantum Internet protocol stack overlook an additional level of quantization, that comes into play when the unique propagation characteristics of quantum information carriers are taken into account. Specifically, counter-intuitively quantum mechanics allows a particle to propagate simultaneously among multiple space-time trajectories <cit.>. This peculiar property enables scenarios where quantum information carrier propagates through a quantum path, i.e., through a path[It is worthwhile to note that, despite their counter-intuitive nature, quantum paths have been already experimentally implemented, and they have been shown to provide significant advantages for a number of problems arising in both quantum computation and quantum communications, ranging from noise suppression to entanglement generation and distribution. We refer the reader to <cit.> for an in-depth overview of quantum paths.] in a quantum superposition of different configurations. This yields different powerful setups <cit.>, such as superposition of different (in space) links or superposition of different alternative orders among the links. Accordingly, the communication path is quantized <cit.>. It is a matter of fact that the exploitation of a quantum path cannot rely on classical node addressing, which fails to capture the quantum features of the quantum paths. Specifically, once a quantum packet is sent on a quantum path, the “packet location” is not univocally determined since it is in superposition of different time/space configurations. Rather, the packet location is indefinite and, hence, a quantum address is mandatory for describing such a superposition. The quantum path framework is a very powerful tool, key for any routing protocol genuinely quantum <cit.>, since it allows to significantly enhance the performance of the quantum network, by exploiting end-to-end paths with no-classical counterpart. Indeed, through a quantum path, the quantum carrier is delivered via different sets of intermediate nodes and different set of point-to-point links that exhibit different qualities of service. Hence, the genuine quantumness exhibited by the quantum path can exploit all the degrees of diversity (ranging from spatial through causal to temporal diversity), without any violation of the no-cloning theorem as it would happen by trying to adopt classical multi-path routing strategies <cit.>. This is depicted in Figure <ref>. Hence, for a successful quantum protocol stack design, it is key to recognize that providing the network nodes with a quantum address is mandatory for taking full advantage of the unique propagation characteristics of quantum carriers. As instance, with respect to the mentioned figure, the quantum packet propagates through a quantum route where the first hop is an even superposition of two quantum links, i.e., e_|n_s⟩,|n_1⟩ and e_|n_s⟩,|n_2⟩. §.§ Quantum Routing Up to now, existing literature on the Quantum Internet has considered quantum routing as the problem of distributing end-to-end entanglement between remote network nodes, according to some routing metric. As instance, several proposals have accounted for the temporal constraints induced by decoherence effects within the routing process, either by defining coherence-times-aware routing metrics or by incorporating these temporal constraints within the routing protocols. Similarly, several proposals have focused on optimizing entanglement distribution, with proposals ranging from fidelity maximization through purification/distillation to end-to-end path discovery. And a widely investigated area is constituted by the adoption of quantum repeaters, which combine entanglement swapping and entanglement purification to extend the entanglement over end-to-end path. Yet, when it comes to quantum routing, there exists a fundamental difference with respect to classical routing that has been mainly overlooked so far. Classical information is generated at the source for a given (usually, unicast) destination. Accordingly, classical routing goal is to find the “best” route toward the destination, and indeed any classical routing metric measures the utility of a neighbor node in terms of its “proximity” toward the destination. Conversely, the goal of the Quantum Internet routing is no longer to discover the route toward the destination. Rather, the goal is to entangle the source with the “closest” node that is already entangled with the destination. At a first sight, classical and quantum routing goals might seem identical. In fact, someone might object that after all, for entangling these intermediate nodes with the destination, the very same problem underlying classical routing – i.e., discovering a path (from these intermediate nodes rather than from the source) toward the destination – must have been solved beforehand. But this objection overlooks a fundamental difference[The interested reader is referred to <cit.> for an in-depth treatise of the differences arising with quantum information and entanglement with respect to classical information.] between information and entanglement. Information, both classical and quantum, is valuable for the destination only. Any other intermediate node – while forwarding it to the destination – cannot exploit it for its communication needs. Hence, the beneficiary of classical and quantum information is fixed and pre-determined. Conversely, entanglement represents a communication resource valuable for any cluster of nodes sharing it, regardless of where it has been originally generated and regardless of the identities of the nodes originally supposed to use it. Indeed, the only requirement for exploiting an entangled qubit locally available is to coordinate with the other nodes sharing the entanglement resource. In a nutshell, while information exhibits a local, predetermined value, entanglement is characterized by a global, dynamic usefulness. From the above, it follows that, whenever a proactive[By borrowing ad-hoc networks terminology, we can classify the strategies for the entanglement distribution from a network engineering prospective as either proactive or reactive. Proactive strategies aim at early distribution of entanglement resources – ideally, with a new generation process starting as soon as the entanglement resource is depleted – whereas reactive strategies aim at on-the-fly distribution of entanglement, with a new generation process starting on demand, when needed.] entanglement distribution strategy is adopted, the Quantum Internet can exploit the additional degree of freedom represented by the global and dynamic usefulness exhibited by entanglement for providing the communication services, as illustrated in Section <ref>. From the above, it becomes evident that classical location-aware addressing – where routing tables store partial information toward clusters of destinations as it happens with classical IP – fails in providing useful topological information activated by entanglement-based networks. Indeed, the overall objective of routing tables should switch from tracking next hops toward destinations to track entanglement resources. With respect to this aspect, it is important to underline that entanglement is not limited to bipartite entangled states such as EPR pairs. Rather multipartite entanglement greatly enriches the features of entanglement-based connectivity <cit.>, which in turn is deeply affected also by the specific properties characterizing the selected multipartite entanglement class[As instance, GHZ states constitute the natural substrate for applications aiming at distributively achieving some consensus or some form of synchronization, whereas W states represent a valuable tool for breaking any symmetry among the different parties, hence enabling applications based on leader election or distributed resource access <cit.>.]. It must be noted, though, that multipartite entanglement requires further coordination and signaling among the entangled nodes, when compared to EPR pairs. For this, network nodes aim at exploiting multipartite entangled state must be provided with the identities of all the nodes sharing such a state, along with the class of entanglement to whom the state belongs to. Clearly, proactive entanglement distribution requires coherence times longer than those associated with the execution of the network functionalities. And, whenever this requirement cannot be satisfied, reactive entanglement distribution represents the only possible strategy. § FROM SOFTWARE DEFINED NETWORKS TO ENTANGLEMENT DEFINED NETWORKS §.§ Overlay Network Design The main idea for exploiting quantum addressing is to challenge the paradigm underlying classical hierarchical routing. Rather than designing a (classical) addressing scheme that enables scalable routing tables at the price of link depletion in the overlay routing network, we aim at designing a quantum addressing scheme that builds an overlay entangled network through link augmentation. Specifically, in Figure <ref> we sketch a toy-model in which hierarchical principles are hybridized with entanglement marvels. Accordingly, within the figure, the nodes are organized in a two-level hierarchy. Level-one clusters are organized with a single super-node serving some end-nodes, whereas level-two clusters are organized in a peer-to-peer topology among the super-nodes with augmented fully connectivity. When it comes to the generation of entangled states, it is very reasonable, given the current maturity of quantum technologies, to assume a specialized super-node responsible for entanglement generation. The rationale for this assumption is twofold. On one hand, it accounts for the complex mechanisms and the dedicated equipment underlying the entanglement generation. On the other hand, it accounts for the mandatory requirement of some sort of local interaction among the qubits to be entangled. Accordingly, we consider the hierarchical overlay network in Figure <ref>. Clearly, the choice of the overlay entangled network in the figure is not restrictive, since there exists two degrees of freedom in designing the overlay network that can (and should) be jointly optimized: i) clustering, and ii) connectivity within each cluster level. With reference to clustering, although it is a complete new research area, we could envision to borrow some well-practices developed in the classical networking, by exploiting some physical topological information. With reference, instead, to the augmented connectivity of level-two overlay graph, we highlight that the architecture of the entanglement-enabled connectivity plays a crucial role, since it determines the features of the level-two overlay network, and its capability to activate specific network functionalities. As a consequence, it should be recognized that the specific entanglement class(es) selected to realize the level-two overlay network is a design choice, which has to be carefully individuated. In this context, for instance, the amount of communication qubits available at each super-node to be devoted to maintain proactively the level-two overlay graph plays a crucial role, and it deeply influences the overall routing performance of the scheme built upon it. Preliminary research seems suggesting that memory and communications costs for augmented connectivity scale efficiently with the network size <cit.>, but this research area constitutes still an open issue. In the vision developed through Figure <ref>, we envision that the level-two hierarchy is also responsible for maintaining topological information needed to navigate the overlay graph and to fulfill the communication needs of the end-nodes. Specifically, as discussed in Section <ref>, quantum routing requires a paradigm shift with respect to classical routing. This becomes particularly evident by inspecting the information stored within the routing tables in Figure <ref>. Quantum routing communication opportunities are not represented as a classical link interface toward a (physical) next hop, but they are rather represented as an entanglement interface – namely, one or more communication qubits stored within the node – toward a neighbor node within the overlay entangled network. As a matter of fact, multipartite entanglement requires additional information – such as the identities of all the nodes (e-node column of table in Figure <ref>) sharing the resource and the particular class to whom the entangled resource belongs (e-type column of table in Figure <ref>) – to be stored within the table, as shown in Figure <ref>. §.§ Overlay Network Navigation We conclude the section with another remark. Specifically, by maintaining proactively the level-two overlay graph, the very concept of quantum routing is changed. Indeed, the quantum routing problem can be efficiently solved via quantum algorithms, which exploit the entanglement-based overlay graph and the routing tables available at the super-nodes. Preliminary research about distributed Grover algorithms goes in this direction <cit.>. However, further research is needed. Furthermore, level-two overlay graph could be further exploited to coherently control, through the quantum addresses, the involved super-nodes so that entanglement is generated without the need of physically navigating the graph, as suggested preliminary in <cit.>, by exploiting the quantum path framework. From the above, we are building a new Quantum Internet ecosystem, which moves from the software-defined paradigm to the entanglement-defined one. § CONCLUSION We conclude the paper with some considerations about the general design of the overall Quantum Internet protocol stack. Short-term efforts toward Quantum Internet are reasonably trying to reconcile quantum information and quantum entanglement to classical information – with an approach that can be defined as design by analogy <cit.>. This research activity is unquestionably important for the deployment of pilot small-scale networks, as well as from telcom operators' viewpoint aiming at maximizing current network assets revenue. Yet, from a long-term perspective, quantumness and its unconventional features should not be overlooked. Rather, they should be spotlighted and emphasized to have a deep impact on the network design, radically influencing the quantum network functionalities through a major paradigm shift, somehow similar to the shift from circuit-switching to packet-switching design for classical networks <cit.>. Although we have more doubts than answers, we do look forward to contribute to such an exciting research area, which could pave the way for the Internet of future such as Arpanet paved the way for today’s internet. IEEEtran
http://arxiv.org/abs/2306.05480v2
20230608180413
Artificial General Intelligence for Medical Imaging
[ "Xiang Li", "Lu Zhang", "Zihao Wu", "Zhengliang Liu", "Lin Zhao", "Yixuan Yuan", "Jun Liu", "Gang Li", "Dajiang Zhu", "Pingkun Yan", "Quanzheng Li", "Wei Liu", "Tianming Liu", "Dinggang Shen" ]
cs.AI
[ "cs.AI" ]
Artificial General Intelligence for Medical Imaging Analysis Xiang Li, Lu Zhang, Zihao Wu, Zhengliang Liu, Lin Zhao, Yixuan Yuan, Jun Liu, Gang Li, Dajiang Zhu, Pingkun Yan, Quanzheng Li, Wei Liu, Tianming Liu Senior Member, IEEE, and Dinggang Shen Fellow, IEEE (Corresponding authors: Xiang Li, Tianming Liu, Dinggang Shen) Xiang Li and Quanzheng Li are with the Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston 02115, USA. (e-mail: {xli60,li.quanzheng}@mgh.harvard.edu). Lu Zhang and Dajiang Zhu are with the Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington 76019, USA. (e-mail: [email protected] and [email protected]). Zihao Wu, Zhengliang Liu, Lin Zhao and Tianming Liu are with the School of Computing, The University of Georgia, Athens 30602, USA. (e-mail: {zihao.wu1,zl18864,lin.zhao,tliu}@uga.edu). Yixuan Yuan is with the Department of Electronic Engineering, Chinese University of Hong Kong, Hong Kong. (e-mail: [email protected]). Jun Liu is with the Department of Radiology, Second Xiangya Hospital, Changsha 410011, China. (e-mail: [email protected]). Gang Li is with the Department of Radiology at the University of North Carolina at Chapel Hill, Chapel Hill 27599, USA. (e-mail: [email protected]). Pingkun Yan is with the Department of Biomedical Engineering at Rensselaer Polytechnic Institute, Troy, New York 12180, USA. (e-mail: [email protected]). Wei Liu is with the Department of Radiation Oncology, Mayo Clinic, Scottsdale 85259, USA. (e-mail: [email protected]). Dinggang Shen is with the School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China; Shanghai Clinical Research and Trial Center, Shanghai, 201210, China. (e-mail: [email protected]). July 31, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== In this review, we explore the potential applications of Artificial General Intelligence (AGI) models in healthcare, focusing on foundational Large Language Models (LLMs), Large Vision Models, and Large Multimodal Models. We emphasize the importance of integrating clinical expertise, domain knowledge, and multimodal capabilities into AGI models. In addition, we lay out key roadmaps that guide the development and deployment of healthcare AGI models. Throughout the review, we provide critical perspectives on the potential challenges and pitfalls associated with deploying large-scale AGI models in the medical field. This comprehensive review aims to offer insights into the future implications of AGI in medical imaging, healthcare and beyond. ChatGPT, GPT-4, LLM, AGI, Medical Imaging § INTRODUCTION AGI models such as LLMs have demonstrated success <cit.> in general domains. However, applying them directly to healthcare can pose significant challenges, potentially leading to decreased performance or even making it impossible <cit.>. These challenges primarily stem from the medical domain's unique characteristics, such as the specialized nature of clinical text and medical imaging, and the expertise needed for accurate interpretation. For example, medical imaging data, which encompasses images obtained from various modalities <cit.> such as X-rays, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, microscope, and more, is primarily acquired for diagnostic or therapeutic purposes, aiming to provide vital insights into a patient's health condition <cit.>. Interpreting these images demands specialized knowledge and expertise in anatomy, pathology, and radiology. In contrast, natural images typically consist of everyday objects that can be accurately recognized by most individuals based on common sense. This distinction presents a notable challenge when it comes to annotating a substantial volume of high-quality medical imaging data, as it necessitates the involvement of numerous experts, which is often impractical or unfeasible. Consequently, the medical domain faces a scarcity of extensive training data required for directly training large models from scratch, in contrast to the abundant availability of such data in the general domain. Moreover, due to the heterogeneity between medical data and general data, it is not optimal to directly apply LLMs (e.g., GPT-4 <cit.>) and large multimodal models (e.g., Kosmos-1 <cit.>) trained on general data to the medical domain. In addition, clinical data usually contains sensitive patient information. Strict regulations, such as HIPAA (Health Insurance Portability and Accountability Act) <cit.> in the United States, govern the storage, transmission, and handling of patient data. As a result, applying publicly available foundational models, such as GPT-4, to medical data becomes exceedingly difficult. In general, adapting AGI models to the medical domain requires careful consideration of the unique challenges posed by medical data. To unlock the potential benefits of AGI in improving healthcare outcomes, researchers in the medical field are dedicated to developing effective approaches to tailor these models to the specific requirements of the medical domain. These efforts primarily concentrate on three key areas: data, knowledge, and models. In the realm of data, the primary aim is to address two key challenges: data scarcity and the presence of sensitive information. For example, one publication proposed the ChatAug method <cit.>, which is built on ChatGPT and can rephrase each sentence in the training samples into multiple conceptually similar but semantically different samples. This augmentation technique is particularly valuable in clinical report studies, where different clinicians may provide personalized reports for the same clinical symptom. By training LLMs with the augmented data samples generated by ChatAug, the models can generate diagnostic reports in diverse styles and accurately capture the typical symptoms of diseases without being influenced by specific expression styles. In the medical imaging domain, a recent study utilized diffusion models <cit.> to produce high quality synthetic data for skin disease classification that incurs no classification degradation on test data. In another interesting work <cit.>, a novel framework was proposed to leverage GPT-4 for the de-identification of clinical notes. This framework can automatically identify and remove Protected Health Information (PHI) from unstructured medical texts while preserving the original structure and meaning of the texts. Compared to the traditional de-identification methods, this approach demonstrates higher accuracy and remarkable reliability in masking private information, showcasing the immense potential of LLMs in handling medical text data. To better guide AGI models, researchers try to develop effective strategies to incorporate expert knowledge and domain knowledge in the learning process. Several notable studies in this topic deserve attention <cit.>. In medical imaging, a recent study demonstrated that simple MLP-based SAM-adapters could be used to inject task and domain-specific knowledge into the Segment Anything Model (SAM) <cit.>. Another work in NLP <cit.> introduces a dynamic prompt generation approach that iteratively improves the quality of prompts during the training process by incorporating domain knowledge and expert guidance. This iterative approach allows the prompts to evolve and adapt, resulting in improved performance of LLMs in medical tasks. By leveraging domain-specific knowledge and expert input, the generated prompts can effectively guide the models towards more accurate and contextually relevant outputs. Another noteworthy work <cit.> proposes the use of multimodal prompts, which effectively integrate multi-modal information <cit.> for computer-aided diagnosis (CAD). As we have previously discussed, the medical domain is characterized by its multimodality <cit.>, where the diagnosis of diseases often requires the integration of information from various sources, including different medical imaging modalities, clinical context, patient history, and other relevant information. By incorporating multimodal prompts, models can better leverage the diverse and complementary information available, leading to more comprehensive and accurate analyses and predictions. Alongside the research on data and knowledge, there have been efforts to explore and examine the design principles of models in both highly specialized domain (e.g., radiology and radiation oncology) and the general domain. One recent study <cit.> discussed the question of whether model design should be generic or domain-specific. This work specifically focused on radiology language inference, a highly specialized task, and provided a detailed comparison between generic models and local domain-specific models in terms of performance and cost (as large models can be expensive). The study yielded insightful results that shed light on the advantages and disadvantages of each approach. Another review paper <cit.> took a more fundamental discussion and explored the relationship between AGI and the human intelligence. The authors discussed the essential characteristics that an AGI system should possess based on the fundamental features of human intelligence. In addition to the technical discussions, another important area of research focuses on the practical applications of AGI models <cit.>. Maximizing the potential of these models in various fields has become a hot topic of investigation. In the medical domain, aside from conventional tasks such as medical image segmentation, disease diagnosis, and medical report generation, researchers are exploring the possibility of utilizing these large models for more meaningful applications. For example, LLMs can be used to help educate the next generation of medical professionals in radiation oncology <cit.>. The goal is to leverage these models to alleviate the scarcity of medical resources and provide more personalized and human-centric healthcare. By harnessing the capabilities of large language models, there is great potential to revolutionize the healthcare field. § LLMS AND AGI: CHARACTERISTICS AND TECHNOLOGICAL FOUNDATIONS To help readers understand the technical contents of this paper, we first provide a summary of the enabling technologies of LLMs. In a previous study <cit.>, a comprehensive summary was provided on the development and categorization of language models using an evolutionary tree. In this review paper, we will focus on analyzing the key characteristics and essential technological foundations of LLMs and AGI system compared to traditional language models. §.§ Key characteristics of LLMs/AGI Current LLMs and AGI possess several key characteristics that distinguish them from traditional deep models. These characteristics contribute to their capabilities and potential impact. * Emergent Ability: Emergence, as described by Nobel Physics prize laureate, Dr. Philip Anderson <cit.>, occurs when quantitative changes in a system lead to qualitative changes in its behavior. Recently, it has been observed that LLMs exhibit a multitude of “emergent" abilities that larger models can accomplish but smaller models cannot <cit.>. Interestingly, many of these abilities extend beyond text analysis, encompassing tasks such as performing mathematical operations, generating executable code, and even decoding movies using emojis. Analyses indicate that certain tasks and models go beyond a complexity threshold where the model's functionality starts to increase exponentially. Presently, researchers are engaged in a race to not only identify additional emergent abilities but also to understand the underlying reasons and mechanisms behind their occurrence. Their aim is to predict and comprehend this unpredictability. Gaining a comprehensive understanding of emergence holds the potential to address fundamental questions in AGI development and evolution. It can shed light on whether complex models are genuinely engaging in novel processes or simply excelling in statistical analyses. Furthermore, this understanding can assist researchers in harnessing the potential benefits of emergent abilities while mitigating associated risks. * Multimodal Learning Ability: The multimodal learning ability of AGI models refers to their ability to comprehend and generate text in conjunction with other modalities, such as images, audio, or video. This attribute is crucial towards developing advanced AGI systems that surpass human intelligence, as it enables a more holistic and comprehensive understanding of information across different modalities. Notably, several models have made significant contributions to multimodal learning. CLIP <cit.>, DALL-E <cit.>, GLIDE <cit.>, VisualGPT <cit.> are among the prominent models recognized for their contributions to image-to-text and text-to-image generation tasks. These models have demonstrated remarkable capabilities in bridging the gap between images and textual descriptions. Additionally, METER <cit.> stands out as a representative model for Visual Question Answering (VQA), a task involving answering text-based questions based on visual content. More recently, models such as PaLM-E <cit.> and Kosmos-1 <cit.> have pushed multimodal capabilities to new highs.For instance, PaLM-E is capable of guiding robots and demonstrating generalist abilities, and BiomedGPT <cit.> has demonstrated good generalizability and robustness for a series of down-stream tasks as a pre-trained model for multi-modal medical data. These models represent significant advancements in multimodal learning, showcasing the potential of foundational models to comprehend and generate text in conjunction with other modalities. The integration of multimodal capabilities contributes to the development of more comprehensive and versatile AI systems that can process and understand information across different modalities, bringing us closer to the realization of advanced AGI. §.§ Technological foundations of LLMs/AGI Current LLMs and AGI rely on several technological foundations that provide the necessary tools and techniques to enhance their capabilities and drive their advancements. We will introduce these key technological foundations in this section. * Transformer Architecture: The Transformer model, introduced in 2017 by Vaswani et al. <cit.>, has revolutionized the field of Natural Language Processing (NLP) and has become the standard neural network architecture of LLMs <cit.>. The significance of the Transformer architecture lies in its ability to address the limitations of prior models, such as Recurrent Neural Networks (RNNs) <cit.>, in handling variable-length sequences and context awareness. Unlike RNNs, the Transformer model does not rely on sequential processing, making it more efficient and suitable for parallel computation. This parallelism enables the Transformer to process and understand large amounts of text simultaneously, significantly improving its training and inference speed. At the core of the Transformer is the multi-head attention mechanism, which allows the model to assign different weights to tokens based on their relevance. This attention mechanism enables the Transformer to capture long-term dependencies in a more effective manner, enhancing its performance across a wide range of NLP tasks. By attending to different parts of the input sequence, the Transformer can effectively understand the relationships between tokens and capture contextual information. Additionally, the parallelizable nature of the Transformer architecture makes it highly adaptable and scalable. This flexibility enables large-scale pre-training, where the model is trained on vast amounts of data to learn general language patterns and representations. These pre-trained models can then be fine-tuned for specific downstream tasks, such as text generation or language translation. This approach allows the Transformer to leverage its learned knowledge and generalize well to various domains and tasks. Although previous models can already be very deep, the Transformer's design allows models to take precedence over inductive biases by fully benefiting from the large size of the network to scale up the training. In addition, unlike some traditional models that heavily rely on hand-crafted rules or prior linguistic knowledge, the Transformer learns directly from the data, making it more flexible and capable of capturing complex patterns and nuances <cit.>. * In-context Learning: In-context learning <cit.> is a powerful technique employed by language models to learn and make predictions based on a limited context provided during inference or interaction (see Figure <ref> part (a) for an example). Unlike traditional approaches that heavily rely on extensive pre-training or fine-tuning using large labeled datasets, in-context learning allows models to leverage specific examples or instructions within the input prompt to guide their behavior and generate contextually relevant outputs. The key idea of in-context learning is to learn from analogy. To facilitate this process, in-context learning starts by constructing a demonstration context using a few examples. These examples showcase the desired input-output behavior that the model should learn. Subsequently, the demonstration context is combined with a query question, resulting in a prompt that contains both the query and the demonstration context. This prompt is then fed into the language model for prediction. In contrast with supervised learning, where model parameters are updated through backpropagation during a training stage, in-context learning does not involve explicit parameter updates. Instead, it relies on pretrained language models that directly perform predictions based on the provided examples and prompts. The model is expected to discern the underlying patterns within the demonstration context and generate accurate predictions accordingly. With the remarkable scaling of model size and the availability of large-scale corpora, LLMs have demonstrated their proficiency in in-context learning. These models possess the capability to extract meaningful information from a few examples within the given context. By leveraging the extensive knowledge encoded in their pretrained weights, LLMs excel at learning from limited context and making informed predictions. * Prompt Engineering: Traditionally, the process of collecting and labeling responses for training or fine-tuning NLP models has been both time-consuming and costly. However, prompt engineering <cit.> offers a more efficient alternative. A prompt, in this context, refers to a set of instructions that customize the behavior and guide the subsequent interactions and outputs of an LLM (see Figure <ref> (b) for a taxonomy of prompt design). Recent studies have demonstrated the effectiveness of prompt engineering in adapting large-scale pre-trained language models to specific downstream tasks, eliminating the need for extensive fine-tuning. Prompt engineering allows for customization and tailoring of LLM capabilities. By designing appropriate prompts, developers can shape the LLM’s behavior to align with specific objectives, tasks, or user requirements. This level of customization enhances the practical usability and applicability of LLMs across various domains and use cases. In general, prompt engineering plays a vital role in current LLMs and provides a promising way for developing advanced AGI system. By designing appropriate prompts, developers can leverage the power of pre-trained models and optimize their performance, making LLMs and AGI systems more effective, efficient, and adaptable in the specific domain. * RLHF: The alignment of an AGI system is a critical process that ensures its behavior is in line with desired principles, values, and objectives. This alignment is essential to ensure that AGI systems operate ethically, responsibly, and in accordance with human intentions and societal norms. In current LLMs, a novel strategy known as Reinforcement Learning from Human Feedback (RLHF) has been adopted to achieve alignment <cit.>. RLHF involves training LLMs by incorporating human feedback to guide their learning process. RLHF has been employed in LLMs such as InstructGPT <cit.> and ChatGPT, to train these models by incorporating feedback from human supervisors. By leveraging human knowledge and preferences, RLHF ensures that the learned behaviors of LLMs align with human values and requirements. In traditional reinforcement learning, an agent learns through trial and error, receiving rewards or penalties based on its actions <cit.>. RLHF takes this process further by actively involving humans in the training loop. Human supervisors play a pivotal role by providing feedback to the learning agent, shaping its behavior, and guiding it towards desired outcomes. This approach is particularly valuable in domains where human expertise and intuition are essential, such as complex decision-making tasks or domains with limited or imperfect simulations. RLHF is not only pertinent to LLMs but also an important strategy in the development of AGI. It contributes to achieving alignment between AGI systems and human values. By incorporating human feedback, RLHF helps ensure that AGI systems learn and operate in a manner that aligns with human intentions and desires. § LARGE LANGUAGE MODELS FOR MEDICAL IMAGING Large language models are at the forefront of AGI development. The advancement of LLMs is bringing a revolution to medical imaging. In this section, we highlight key points of consideration that facilitate the integration and adaptation of LLMs in healthcare. Medical practitioners and healthcare organizations are now envisioning a future where AI-powered systems assist in disease diagnosis, patient outcome prediction, medical education, and streamlining administrative work. Simultaneously, these models could provide a unified knowledge space integrating various data modalities, promoting a comprehensive approach to patient care. While these possibilities paint an exciting picture for the future of healthcare, it's also imperative to address the associated challenges, such as data privacy, regulations, and imbalances, along with the technical obstacles in prompt crafting and healthcare data preparation. This section will delve into the detailed potential applications and the confronting challenges of LLMs in healthcare. §.§ Roadmaps * Expert-in-the-loop: The incorporation of expert knowledge is vital towards harnessing the full potential of artificial intelligence (AI) within specialized medical fields. While LLMs like ChatGPT and its successors have demonstrated proficiency in general medical knowledge <cit.>, they might encounter challenges when performing domain-specific tasks due to a deficiency in specialized knowledge <cit.> and generate erroneous content. A promising approach to overcome these obstacles is the adoption of Reinforcement Learning with Expert Feedback (RLEF). LLMs like ChatGPT are trained with RLHF <cit.>, where generated responses during training are ranked to enhance and refine the model's outputs. Yet, the application of this approach to specific medical specialties presents unique complexities due to the nuanced nature of these disciplines. For example, in cardiology, an AI model might be tasked with interpreting electrocardiogram (ECG) readings <cit.>. While the model could identify basic patterns, accurately diagnosing complex cardiac conditions like myocardial infarctions or arrhythmias necessitates the expertise of a cardiologist. In psychiatry, a model might struggle to differentiate between overlapping mental health conditions based on patient-reported symptoms, demanding the nuanced understanding of a psychiatrist <cit.>. In radiology, distinguishing between a benign lesion and a malignant tumor on a CT scan requires the specialized knowledge that a radiologist provides <cit.>. Likewise, in radiation oncology, determining the precise dosage and target area for radiation therapy calls for insights from radiation oncologists <cit.>. By integrating RLEF within the reinforcement learning loop, AGI models can mirror the nuances of diverse medical specialties more accurately, leading to clinically relevant and precise outputs. With sufficient expert feedback, ChatGPT could potentially interpret ECG readings, help in distinguishing between complex psychiatric conditions, accurately identify malignancies in radiological scans, and suggest more nuanced radiation therapy plans. Further refinement of these LLMs requires a strategy that incorporates iterative, learning-based approaches deeply rooted in various medical specialties. This strategy entails incorporating more "human-in-the-loop" <cit.> interventions during training, acknowledging their significant contribution in enhancing the model's precision in clinical scenarios. Indeed, employing semi-supervised <cit.> and active learning methodologies <cit.> within a “human in the loop learning” paradigm <cit.> holds promise for enhancing the model's proficiency. Despite data scarcity in specialized fields, strategically training LLMs within these domains is crucial for progress. This approach not only allows the models to benefit from a diverse set of clinical scenarios but also helps them generalize this learning across various medical specialties. Recognizing the potential technical limitations among medical practitioners in effectively utilizing ChatGPT, it becomes evident that interdisciplinary collaboration is of utmost importance. To unlock the full potential of AI in specialized medical fields, a concerted, multidisciplinary effort is needed. This collaborative approach not only fosters innovation but also ensures the development of AI tools that are reliable, accurate, and beneficial for real-world clinical applications across a broad range of medical specialties. * Tailoring to the Medical Domain: It is important to tailor language models to specific domains <cit.>. There's a strong need to consider domain-specific data <cit.> and adopt more refined ways of prompting, including learning how to ask questions more effectively in a clinical context. In addition, proper domain adaptation involves learning from multiple data modalities <cit.>, a crucial aspect of applying LLMs to medical imaging or medicine in general. The current reward modeling and simple ranking system used by ChatGPT is considered somewhat rudimentary, especially if the model is to be applied across multiple modalities or specialized fields. Hence, future work should progress towards a more complex and multi-dimensional ranking approach, enabling a more nuanced understanding and interpretation of various medical scenarios. A major challenge in this domain is the difficulty in generating a large medical data database. Hospitals are often reluctant to share their data due to privacy and legal concerns. Therefore, finding a legal and ethical way to access and utilize such data is of paramount importance for training LLMs like ChatGPT in the future. This would allow these models to better understand and adapt to the intricacies of specific medical fields, thereby improving their performance and applicability in these areas. * Prompt tuning: Prompt tuning is an effective and low-cost approach to adapt Large Language Models (LLMs) to specific domains <cit.>, data, and tasks <cit.> without updating the massive model parameters of large foundational LLMs. Prompts with task instructions and demonstrations were first utilized in GPT-3 <cit.> for adapting a pretrained model to various downstream tasks without modifying any pretrained parameters. Prompt tuning further refines the pretrained model by adding prompt tokens and solely training the prompt embedding, without updating the original parameters <cit.>. In language models, prompt tuning exhibits superior performance, even surpassing full fine-tuning <cit.>. Recently, multimodal prompts have been also introduced into visual models to further enhance their performance on downstream tasks <cit.>. * Multi-modal Modeling: The importance of multi-modal learning in the field of artificial intelligence, especially in the context of LLMs like ChatGPT, has been highlighted by the emergence of models such as Flamingo <cit.> and PaLM-E <cit.>. These innovative models leverage visual transformers (ViT) <cit.> and its variations to encode image information, seamlessly integrating it with linguistic data within the LLM. This multi-modal approach allows for a simultaneous interpretation of textual and visual information, thereby enhancing the model's comprehension and response accuracy. As the evolution of LLMs continues, it is imperative to focus on strategies that can improve the integration of diverse data types. Specifically, refining methods for transmitting image information to these models could potentially augment their capacity to handle tasks requiring a simultaneous understanding of visual and textual inputs, further extending the boundaries of their applicability in complex, multi-faceted domains such as medicine and healthcare. * Integrating Medical Informatics: The successful integration of LLMs, such as ChatGPT, into clinical workflows is fundamentally dependent on the robustness of medical informatics. The swift, efficient, and precise collection and integration of data is integral for effectively deploying or updating these models in healthcare environments. As a result, there is a burgeoning need to investigate approaches for sourcing textual or other pertinent data from complex <cit.> hospital information systems. This access to data is imperative for incorporating LLMs or multi-modal models into areas such as clinical diagnostics, treatment, radiation oncology, and radiology. In this context, informatics plays a vital role, serving as a crucial bridge connecting vast, intricate data sets with AGI applications in healthcare. The discipline's function in enabling data accessibility, deciphering it, and applying it subsequently is invaluable in the forward march of AI in clinical environments. This emphasizes the call for scientific exploration to pivot towards the further refinement and enhancement of medical informatics, thereby nurturing a more potent relationship between this field and AGI. This harmonious integration promises not only to boost the operational efficiency of healthcare systems but also to augment patient outcomes via more precise and prompt diagnoses and treatments. * Knowledge Graph: Knowledge graphs can bolster the utility of LLMs like ChatGPT in the healthcare sector. A medical knowledge graph, with its expansive reach beyond simple language generation, can effectively enhance the input and prompts to LLMs <cit.> and validate the outputs <cit.> of LLMs. The deployment of such a tool could bring significant advancements to the field by offering a structured and contextually meaningful representation of data. Building a comprehensive knowledge graph is a complex task <cit.> due to the sophistication required in identifying entities and extracting relationships, which are integral components of such a graph. The current research frontier in this context is focused on devising strategies that facilitate automatic extraction of knowledge from large data corpora. The objective is to create a foundational knowledge base that can be readily deployed in clinical practice. By automating the process of knowledge extraction, the speed of development and implementation of knowledge graphs in healthcare can be significantly accelerated. This would consequently enhance the performance, precision, and efficiency of AI-driven tools in healthcare, thereby promoting improved patient outcomes and operational excellence in clinical settings. * Collaboration: Collaborative endeavors are pivotal in the deployment and optimization of LLMs in healthcare, particularly in addressing complex issues such as data diversity and data privacy <cit.>. Effective modeling of medical data requires careful consideration of patient demographics, socio-economic diversity, and cultural factors <cit.>. It is necessary to develop models that are trained and adapted to data representative of diverse hospitals and institutions across a nation or region. This cannot be fulfilled without multidisciplinary collaboration and multi-institutional cooperation. Patient privacy is also important during collaboration. One potential approach to uphold patient confidentiality is the deployment of localized LLMs within individual hospitals, thus confining sensitive patient information within the premises. An alternative method to balance the requirements of data sharing and privacy protection involves sharing encoded data rather than the original data. Given the prolific use and development of encoding embeddings, such a strategy could be feasible. For instance, techniques like privacy-aware word embeddings <cit.> can transform raw data into a less sensitive format. This transformed data, devoid of personally identifiable information, can then be shared more freely for model training purposes. The potential also exists for shared encoders that can process both text and image data, ensuring consistent encoding across diverse data types. The exploration of such strategies underlines the ongoing efforts to make effective use of data for model improvement while upholding the vital principle of patient privacy. Further examination could illuminate whether sharing encoded data could be more effective and secure than sharing the original, potentially sensitive data. Some other methodologies such as federated learning <cit.> and multi-institution blockchains <cit.> are also potential solutions. §.§ Current and potential applications * Disease Diagnosis and Patient Outcome Prediction: LLMs such as ChatGPT offer promising capabilities in determining the cause of diseases and predicting patient outcomes. By harnessing the power of LLMs, we can transform text data into a suitable format for predicting outcomes and diagnosing patients exhibiting similar symptoms. This concept can be exemplified by feeding case studies of patients with identical symptoms to ChatGPT to predict potential disease outcomes based on patterns from the provided data. * Application in Medical Education and Patient Consultation: LLMs could be an instrumental tool in educating the next generation of healthcare practitioners and even in patient consultations <cit.>. For instance, medical students could interact with ChatGPT for scenario-based learning, and patients may use it as an accessible source of information to better understand their medical conditions. * Streamlining Clinical and Administrative Work: LLMs such as ChatGPT and GPT-4 could help physicians with writing clinical notes <cit.> or producing radiology reports <cit.>. In addition, LLMs can significantly reduce the burden of clinical administrative tasks such as drafting administrative documents. freeing up physicians' time for more important stuffs in patient care. * Unified Knowledge Space: Expressing information in a unified space that encompasses text, images, and genetic data can revolutionize fields such as bioinformatics and genomics. By creating a unified knowledge graph, relevant information from diverse sources can be mapped into a graphical space, which could potentially change the field of bioinformatics, clinical informatics, medical imaging, and genomics. A consolidated sphere of knowledge can serve as a springboard for breakthroughs. For example, future models can leverage this vast data repository. They could comprehensively analyze a patient's textual clinical history, diagnostic imaging findings, laboratory test outcomes, and individual genetic information. In doing so, they would be equipped to render more insightful, well-rounded decisions, thereby improving patient outcomes. * Integrating text data and other modalities: The significance of integrating different types of data into a unified space for efficient decoding and analysis is pivotal across various medical domains. In cardiology, for example, text-based patient reports, imaging data from ultrasound or MRIs, audio data like heart sounds, and echocardiograms (ECG) signals could be organized as multimodal inputs to LLMs. An advanced language model could then analyze these multimodal inputs to provide more comprehensive and accurate diagnosis of heart conditions, thus supporting informed clinical decision-making. Similarly, in telemedicine and mental health, the amalgamation of patient-reported textual data, audio-visual inputs, wearable device metrics, biometric data, and even environmental sensor data is beneficial. LLMs such as Kosmos-1 <cit.>, with their capacity to process and make sense of such multimodal inputs, can potentially revolutionize these fields by providing more accurate, personalized, and nuanced health assessments and treatment suggestions. §.§ Challenges and pitfalls * Prompting Challenges: The construction of standardized and appropriate prompt words stands as a significant hurdle for the optimal performance of LLMs like ChatGPT <cit.>. The effectiveness of these prompts directly influences the quality of responses, thus crafting the right prompt is as pivotal as refining the model itself. * Data Privacy and Regulations: Medical data utilization in AGI models brings about unique challenges in terms of data privacy and compliance with regulatory bodies <cit.>. Protecting personal health information and meeting the stringent requirements of institution review boards (IRBs) and HIPAA guidelines <cit.> is paramount in any healthcare-related AI deployment. A potential solution could be the application of local LLMs within hospitals, which may mitigate privacy concerns while allowing models to learn from a vast array of clinical data. * Data Accessibility: Accessing medical data for AGI models raises not only technical but also legal and ethical complexities <cit.>. Balancing the necessity of access for model training and operation against the privacy rights of patients and healthcare providers is a complex undertaking. * Preparation and Curation of Healthcare Data: The need for image and corresponding text data in healthcare poses a significant challenge. The availability of comprehensive datasets akin to the MIMIC series <cit.> is scarce but critical for the effective training and utilization of LLMs in healthcare applications. * Deployment in Local Hospitals: The execution of LLMs locally in hospitals necessitates substantial computational resources. This could prove to be a limiting factor, particularly for smaller hospitals with limited access to high-performance computational infrastructure. * Data Imbalance: Real-world clinical data often exhibits imbalance, with disproportionate representation of certain conditions, demographics, or other variables <cit.>. This poses a challenge for certain models that rely on balanced, representative data for robust and unbiased outputs. § LARGE VISION MODELS FOR MEDICAL IMAGING In recent years, deep learning has made significant strides in the field of medical imaging, revolutionizing the way medical images are analyzed and interpreted. Various deep learning models, such as Convolutional neural networks (CNNs) <cit.> and Vision Transformer (ViT) <cit.>, have shown remarkable success in a wide range of tasks like medical image reconstruction, segmentation, and classification. Many of these models have been deployed to assist radiologists and clinicians in tasks such as identifying abnormalities, localizing tumors, and quantifying disease progression <cit.>. Recently, the emergence of large vision models, such as the notable Segment Anything (SAM) model <cit.>, may further propel the advancement in medical imaging field, ultimately leading to improved patient outcomes and more efficient healthcare practices. In this section, we will delve into the roadmaps for deploying and adapting large vision models in the medical domain, explore the current and potential applications, and discuss the challenges and pitfalls associated with their integration and utilization of large vision models. §.§ Roadmaps * Large-Scale Datasets: Large-scale, high-quality and diverse medical imaging datasets play a crucial role in deploying and adapting large vision models in the medical domain. Gathering and curating such datasets require collaborations between researchers, medical professionals, and institutions. These datasets need to cover a wide range of pathologies, modalities, and patient populations to capture the full spectrum of medical conditions and ensure the generalizability of the models. Additionally, data privacy and security measures must be in place to protect patient information and comply with ethical standards. Federated learning <cit.> is a promising approach for addressing privacy and data security concerns in the deployment of large vision models in the medical imaging field. By leveraging federated learning, large vision models can be trained using a vast amount of distributed medical imaging data while preserving data privacy. This approach is particularly beneficial in scenarios where data cannot be easily shared or centralized due to legal, ethical, or logistical constraints. Federated learning also provides an avenue for harmonizing diverse datasets from different institutions, capturing the variability and heterogeneity in medical imaging. This can improve the generalization capability of large vision models and their performance across multiple healthcare settings, enhancing the model's robustness and adaptability. Approaches for ensuring fair and secure use of data would also be vital for a sustainable and reliable ecosystem in medical image data usage <cit.>. * Model Adaption: The adaptation of large vision models to the medical domain involves fine-tuning or transfer learning techniques <cit.>. Pretrained models on large-scale general image datasets, such as ImageNet <cit.>, may serve as a valuable starting point. However, to leverage their learned representations effectively, these models need to be further trained on medical imaging data. Fine-tuning requires careful optimization and regularization strategies to adapt the models to specific medical imaging tasks while preventing overfitting and preserving their generalization capabilities. Recently, adding adapters <cit.> to large vision models has indeed gained popularity as a flexible and efficient approach to model customization and transfer learning. Adapters allow for the integration of task-specific information without modifying the entire model architecture, enabling faster and more cost-effective model development. * Multimodal imaging: Medical imaging often involves multiple modalities, such as ultrasound, MRI, CT, and PET, etc., each providing unique information. The adaption of large vision models should consider the combination and fusion of information from these multiple modalities to extract complementary features and enhance diagnostic accuracy. Exploring fusion techniques, such as late fusion <cit.>, early fusion <cit.>, or cross-modal attention mechanisms <cit.>, may be crucial to leverage multi-modality data effectively and improve the performance of the adapted models. * Interpretability: Large models often operate as complex black boxes, making it challenging to understand the reasoning behind their predictions <cit.>. To establish trust and facilitate clinical decision-making, efforts should be underway to develop techniques for explaining the decision-making process of these models. Interpretability methods such as attention maps <cit.>, saliency maps <cit.>, and Grad-CAM <cit.> have been explored to provide insights into which regions of an image contribute most to the model's decision, aiding in understanding and verifying the model's outputs. However, the interpretability of large vision models has not been fully explored, which requires further efforts of the community. * Few-shot/Zero-shot Learning: Few-shot and zero-shot learning hold significant potential in medical imaging field, where obtaining extensive annotated datasets can be time-consuming, expensive and even infeasible <cit.>. LLMs, such as GPT-3 <cit.> and its successors <cit.>, have demonstrated impressive zero-shot learning capabilities, where they can generate meaningful responses or perform tasks without explicit training on specific examples. Translating this concept to the medical imaging domain, large vision models can potentially <cit.> exhibit zero-shot learning abilities, allowing them to recognize and analyze new medical conditions or imaging modalities for which no labeled training data is available. This capability could enable them to adapt to previously unseen diseases, imaging techniques, or even cross-modal tasks. For example, a large vision model trained on a diverse range of medical imaging data could potentially infer and interpret new types of images or identify rare conditions with limited labeled data, based on the analogies and knowledge learned from similar cases. * Scalability: Training and deploying large vision models demand significant computational resources, including high-performance computing infrastructure and efficient parallel processing capabilities. However, real-time applications may require optimizations to meet the time constraints of clinical settings. For example, real-time image analysis is required for rapid decision-making in emergency situations or during surgical procedures. The scalability of large vision models is crucial to ensure their practical applicability and efficiency in these time-sensitive scenarios. To address the scalability challenge, advancements in hardware acceleration, such as specialized graphical processing units (GPUs) <cit.> or tensor processing units (TPUs) <cit.>, can significantly boost the computational efficiency of large vision models. Parallel processing techniques <cit.>, distributed computing <cit.>, model compression <cit.>, and model distillation methods <cit.> can also enhance scalability by optimizing memory utilization and reducing computational overhead. §.§ Current and Potential Applications * Segment Anything: Large vision models have demonstrated significant potential in various medical imaging applications, enabling improved diagnostic accuracy, treatment planning, and disease monitoring. One notable large vision model is the Segment Anything (SAM) model <cit.>, which was trained on SA-1B dataset with over 1 billion masks on 11 millions images. SAM supports promptable segmentation over various segmentation tasks and demonstrates impressive and powerful zero-shot generalization ability. Recently, SAM has attracted a lot of attention in the medical imaging field. A group of studies explored SAM's ability in medical image segmentation tasks <cit.>. For example, He et at. evaluated the SAM's accuracy on 12 medical image segmentation datasets, showing that SAM underperforms the state-of-the-art methods in these datasets and suggesting the necessity to adapt SAM for medical imaging application <cit.>. To improve the performance of SAM on medical image segmentation, another groups of studies explored adapting SAM for specific tasks <cit.>. Ma and Wang et al. utilized a straightforward way by fine-tuning SAM to general medical image segmentation<cit.>. And the results indicate a great improvement compared with default SAM model. Wu et al. proposed Med SAM Adapter by adding adapter to both image encoder and mask decoder with comparable and even superior performances over state-of-the-art methods <cit.>. Zhang and Liu et al. customized SAM for medical image segmentation by applying the low-rank-based (LoRA) finetuning strategy to SAM <cit.>. SAM has also been employed to medical image annotation. Liu et al. integrated SAM with 3D Slicer, an open-source software for visualization, segmentation, and analysis of medical images, to support interactive annotation <cit.>. Despite the aforementioned progresses, SAM has great potential in enabling precise quantitative measurements, volumetric analyses, and 3D reconstructions and supporting clinicians in making informed decisions and providing personalized patient care. * AI-Generated Content (AIGC): Stable Diffusion (SD) and Generative Adversarial Networks (GAN) AIGC aims to generate digital content based on human input (e.g., instructions and exemplars) <cit.>. <cit.> has gained recognition for its high-performance capabilities in generating detailed images by leveraging a latent diffusion model conditioned on text. This model has exhibited remarkable efficacy in various tasks, including image inpainting and super-resolution. Additionally, the DALL·E 2 model introduces a groundbreaking ability to generate synthetic images from textual descriptions <cit.>. In the context of the medical domain, the SD and DALL·E 2 models hold immense potential for addressing prevalent challenges. For example, medical images often suffer from issues such as noise, artifacts, and low contrast, which can significantly impact diagnostic accuracy. The SD model can be harnessed to mitigate these problems by effectively removing noise and artifacts while enhancing the overall image quality in medical scans. This enhancement facilitates clearer visualization of anatomical structures and pathologies, aiding healthcare professionals in accurate diagnosis and treatment planning. DALL·E 2 can be utilized to generate synthetic medical images for training purposes, allowing medical professionals to simulate rare or challenging clinical scenarios and improve their diagnostic skills. Moreover, these synthetic images can be employed to augment limited datasets, providing a valuable resource for training large vision models in medical imaging. * LLM-enhanced Imaging: The GPT-4 model also holds great promise for medical imaging applications. GPT-4 builds upon its predecessors' language understanding capabilities and can be adapted to interpret clinical notes, radiology reports, and other textual data associated with medical images. This integration of text understanding and image analysis enables context-aware diagnosis, personalized treatment recommendations, and efficient retrieval of relevant medical literature. By leveraging the vast amount of textual information in healthcare, GPT-4 has the potential to enhance clinical decision-making and improve patient outcomes. §.§ Challenges and Pitfalls While the adaptation and utilization of large vision models in the medical domain demonstrate great promise, there are several challenges and pitfalls that may be encountered and need to be effectively addressed. * The Availability of High-quality Annotations: Training large vision models requires large-scale, accurately annotated datasets. However, medical imaging datasets are often limited in size and quality due to factors like data scarcity, privacy concerns, and variations in imaging modalities. The scarcity of labeled data can hinder the performance and generalizability of large vision models. To overcome this challenge, approaches such as active learning <cit.>, data augmentation, and transfer learning from related tasks can be employed. Collaboration among researchers, healthcare institutions, and regulatory bodies is also essential to foster the sharing and creation of standardized, annotated datasets that cover a wide range of medical conditions and patient demographics. * High Accuracy Standards: One of the critical challenges in deploying large vision models in the medical imaging field is the requirement for higher accuracy standards compared to general image applications. While a certain level of accuracy may be acceptable in general computer vision tasks, there is a higher demand for accuracy, reliability, and precision in medical imaging applications where decisions regarding diagnosis and treatment are made based on the results provided by these models. A lower classification accuracy or an inaccurate segmentation in medical images could lead to misdiagnosis, incorrect treatment plans, delayed interventions, or missed critical findings, potentially compromising patient safety and well-being. * Long-tail Problem: Another significant challenges that large vision models face is the long-tail distribution of diseases and conditions. The long-tail phenomenon refers to the uneven distribution of labeled data, where a few common diseases or conditions have abundant training examples, while a vast majority of rare or less prevalent conditions have limited labeled data available. This long-tail challenge poses difficulties for large vision models as they may struggle to accurately classify and diagnose rare or uncommon medical conditions due to limited exposure during training. The lack of sufficient labeled examples for these conditions can result in a performance bias towards the more prevalent diseases, leading to suboptimal accuracy and sensitivity in real-world clinical settings. * Ethical Issues: The ethical implications of using large vision models in healthcare must be carefully considered. Issues such as data privacy, patient consent, and potential biases in the models require attention. Large vision models heavily rely on extensive datasets for training, which raises concerns about the privacy and security of patient information, as well as vulnerabilities of the healthcare systems to cybersecurity hazards such as backdoor attacks <cit.>. Strict data governance policies, anonymization techniques <cit.>, and adherence to regulatory frameworks such as HIPAA are crucial to protect patient privacy. Furthermore, biases present in the training data <cit.> can inadvertently propagate into the predictions of large vision models, leading to disparities in healthcare outcomes. Addressing and mitigating these biases is vital to ensure fair and equitable deployment of these models across diverse patient populations. In addition to the challenges mentioned above, there are also other notable challenges in deploying large vision models in the medical imaging field, such as the interpretability of the large vision models, the deployment in clinical scenario, etc. In conclusion, while leveraging large vision models for medical imaging holds tremendous potential, addressing the aforementioned challenges is essential for realizing the full benefits of large vision models and ensuring their safe, effective, and ethical integration into clinical practice, and leading to improved healthcare outcomes and enhanced patient care. § LARGE MULTIMODAL MODELS FOR MEDICAL IMAGING Data in the real world is captured across various modalities such as text, images, videos, and audio, mirroring the perceptual capabilities of the human brain. In the medical field, data encompassing medical reports, clinical notes, radiology images, physician dictations, audio recordings of physician-patient dialogues, and surgical videos <cit.> paint a comprehensive picture of the hospital's operational processes. Typically, these data can be effortlessly consolidated at the patient level as they all pertain to specific patients and/or diseases, paving the way for multimodal models to leverage these rich data for a broad range of applications in the medical domain. Recently, rapid development has been observed in large multimodal models, fueled by the emergence of LLMs and expansive vision models. Newly released models, such as Meta's Segment Anything model <cit.>, OpenAI's DALL·E 2 <cit.>, and Stability AI's Stable Diffusion <cit.> model, all exhibit text-to-image capabilities to prompt segmentation or generation tasks. As a result, we will mainly focus on Vision Language models in the medical domain in this section. Specifically, we will describe the roadmap for deploying and adapting large multimodal models in the medical domain, introduce current and potential applications, and discuss challenges that could further catalyze the use of large multimodal models in the medical domain. §.§ Roadmaps * Scaling and Quality of Paired Data: In the medical field, paired multimodal data are essential for adapting large multimodal models; however, they are never sufficient for data-hungry large-scale models, regardless of whether they are in the general or medical domain. In the general domain, it is relatively easy to collect text-image pairs from web images that are accompanied by contextual content or human-annotated captions, which don't necessitate specialist knowledge for captioning <cit.>. New released multimodal dataset in general domain such as LAION-5B <cit.> contains billions of text-image pairs. Conversely, in the medical domain, multimodal datasets are significantly smaller compared to the general domain, reducing from billions to thousands, exemplified by MIMIC-CXR <cit.> and Open-I <cit.>, with 227,835 and 3,955 radiology image-report pairs respectively. Despite their considerably smaller scale in comparison to the general domain, the quality of multimodal data in the medical domain is inherently superior. For instance, in MIMIC-CXR, radiology reports are accurately and concisely written by radiologists to describe the findings and impression from image. As a tradeoff, the amount of multimodal data in the medical domain is very limited, as it requires experts with medical knowledge to annotate and must adhere to certain privacy rules before being released for public use. * Pretraining Multimodal Foundational Models: Given the fact that multimodal data in the general domain consists of a relatively large amount of data but with sparse information and varying quality, while data in the medical domain is limited but of high quality, an efficient and effective approach to deploying and adapting a large multimodal model in the medical domain is to first pretrain a foundation model on large-scale general domain dataset and then fine-tune the pretrained foundation model on high-quality medical dataset. Leveraging the zero-shot in-context learning abilities of pretrained foundational models, such as OpenClip <cit.> for image and text retrieval and Segment Anything Model for image segmentation, we can feasibly adapt these models to the medical field. They can be readily used for tasks such as medical report generation, radiology image comprehension, and medical image segmentation, even when only limited fine-grained medical multimodal data is available. * Parameter-Efficient Fine-Tuning: Recently released foundation models possess two primary characteristics: 1) a large-scale of parameters and 2) large pretraining datasets. For instance, GPT-3 <cit.> has 175 billion parameters trained on 45 TB of data. LLaMA <cit.> models range from a scale of 7 B to 65 B trained on 1.4 trillion tokens. This expansive scale leads to high computational costs for full fine-tuning. Hence, Parameter-Efficient Fine Tuning (PEFT) strategies <cit.> can serve as a potent method to rapidly adapt these pretrained foundation models to the medical domain. Below are some prevalent PEFT strategies for tuning large pretrained models: Adapter tuning <cit.> introduces lightweight modules, referred to as "adapters", within the layers of the pretrained large models. When these models are fine-tuned on downstream tasks, only the parameters of the adapter are updated, while the pretrained parameters remain fixed. This method allows the fine-tuned model to retain knowledge from the pretraining phase while efficiently adapting to the specific downstream task. LoRA <cit.> introduces a low-rank bottleneck by decomposing the weight matrix of the pretrained model into low-rank matrices. During fine-tuning, only these low-rank matrices are updated, significantly reducing computational costs. QLoRA <cit.> further enhances the LoRA approach to ensure that fine-tuning a 4-bit quantized pretrained LLM with LoRA does not compromise performance compared to 16-bit full fine-tuning. Utilizing these parameter-efficient fine-tuning strategies allows pretrained foundation models to be quickly adapted to the medical domain. They deliver expert-level performance and impressive generalizability across multimodal medical imaging, including CT scans, MRIs, and X-rays, among others. §.§ Current and potential applications * Image-to-Text Models: At present, the principal deployment of multimodal models in the medical domain is primarily within Visual Question Answering (VQA) models <cit.>. These models enhance diagnostic efficiency by facilitating the effective analysis of medical images such as X-rays, CT scans, and MRI scans. By posing queries to the VQA model regarding features, abnormalities, or regions of interest within an image, medical practitioners can receive pertinent and detailed answers, aiding in the interpretation of complex medical images. This technology has the potential to reduce diagnostic errors, enhance accuracy, and speed up the diagnostic process, ultimately leading to improved patient outcomes. CLIP (Contrastive Language-Image Pretraining) <cit.> is a vision-language model in the general domain developed by OpenAI. It is trained by learning to associate images and their corresponding textual descriptions using contrastive learning. The pretrained image encoder can be utilized to obtain image embeddings that facilitate tasks such as image classification, and image-text retrieval. Due to its versatility and impressive cross-modal understanding capabilities, CLIP is a robust tool for combining visual and textual information. MedCLIP <cit.> enhances and extends the general domain CLIP model to the medical domain by decoupling images and texts for multimodal contrastive learning to employ unpaired medical image and text. In conjunction with the newly released open-source Language Learning Models (LLMs) such as LLaMA <cit.>, Alpaca <cit.>, and Vicuna <cit.> which serve as text decoders, MedCLIP <cit.> can function as an image encoder in VQA models. XrayGPT <cit.>, a medical domain VQA model similar to miniGPT-4 <cit.> in the general domain, automates the analysis of chest radiographs based on provided X-ray images. Specifically, the pretrained LLM Vicuna <cit.> is first fine-tuned using medical conversation data comprising approximately 100k real dialogues between patients and doctors and 30k radiology conversations to acquire domain-specific knowledge in the medical domain. Subsequently, a high-quality dataset of 217k chest X-ray findings report from two datasets (MIMIC-CXR and OpenI) are used for fine-tuning the XrayGPT <cit.>, which aligns the frozen MedCLIP as medical image encoder and Vicuna as text decoder to generate chest radiographs summarization from given X-ray images. * Text-to-Image Models: While significant progress has been made in the field of text-prompt image analysis in general domains with models such as Stable Diffusion and DALL·E 2, it remains a formidable challenge in the medical domain <cit.>. Medical images, characterized by complex features, varying intensities, and region-specific content, demand a heightened level of precision and strict adherence to specific rules and principles in comparison to general images. The advent of multimodal models has opened prospective applications focusing on text-prompt image analysis. Below are three potential applications of Text-to-Image models. * Computer-Aided Diagnosis: Text-prompt image analysis could enhance diagnostic capabilities. By prompting text targeting specific medical image analysis tasks in clinical scenarios, such as emergency rooms, it can bolster the efficiency of diagnosis by doctors and radiologists <cit.>. For instance, the 'Segment Anything' model could be deployed in the medical domain by utilizing text prompts that describe the targeted abnormality feature to segment the aberrant lesion. With SAM's zero-shot capability, it is highly adaptable for application in medical imaging across different modalities without the need for specific fine-tuning for each. Furthermore, computer-aided image analysis could be employed for second opinions, comparisons, and decision support, thus leading to more accurate and confident diagnoses. * Medical Education and Training: Text-prompt image generation could be employed to generate realistic training scenarios for medical students and healthcare professionals. By simulating a diverse array of cases, rare conditions, and complex anatomical variations, synthetic medical image generation could enrich educational programs and offer invaluable hands-on experience in a controlled environment. * Surgical Planning: Precise and detailed preoperative planning is critical for complex surgical procedures. Text-prompt medical image generation can facilitate the creation of realistic 3D anatomical models that focus on abnormalities. This aids surgeons in understanding complex structures, optimizing surgical approaches, and simulating the expected outcomes of surgical interventions. Such a system could potentially reduce risks, improve surgical precision, and enhance patient outcomes. * Use of Eye-Tracking Technology: The use of eye-tracking technology can provide valuable data for training LLMs <cit.>. By recording where a physician's attention is focused during case study reviews, models like ChatGPT could learn to emulate this prioritization of information, potentially improving the model's diagnostic reasoning capabilities. §.§ Challenges and pitfalls * Lack of Paired Data: One of the most significant challenges in this context is the scarcity of large-scale, paired medical image-text data. This is predominantly due to the heterogeneous and complex nature of medical data. To advance the development and refinement of multimodal models in the medical domain, a large-scale, paired dataset combining visual and textual medical information is essential. A potential solution could be the application of pseudo-labels, such as generated radiology image reports or synthesized medical images. These can serve as synthetic datasets or automatic annotations that may enhance model training and performance. Nevertheless, this approach should be applied cautiously as the quality of synthetic data might fluctuate, and any inaccuracies or inconsistencies could potentially affect the efficacy and robustness of the resulting model. * Backdoor Attacks: The issue of cybersecurity represents another critical concern when deploying large foundational models in healthcare. These models may be vulnerable to backdoor attacks, where malicious actors manipulate the model during training or inference, posing a substantial security threat to downstream users <cit.>. This flaw could result in a compromised model within a hospital setting, thereby exposing the institution to data leakage and potential security violations. Hence, cybersecurity must be accorded top priority during the deployment and adaptation of multimodal models in the medical domain. § FUTURE DIRECTIONS Although we have discussed some of the future directions under each section, it is worth providing summarized discussions of the relevant points to have a holistic view of the potential future works. §.§ Federated Learning in the Era of LLMs Federated learning <cit.> is an innovative machine learning approach that addresses privacy, data ownership, and scalability challenges by enabling the training of models across decentralized devices or data sources. In the medical domain, federated learning holds great promise as it allows the development of powerful and accurate machine learning models while maintaining the privacy and security of sensitive healthcare data <cit.>. Recently, the advancement of transformer-based LLMs has opened up new possibilities for federated learning in the medical domain. LLMs can provide a common foundation for different healthcare centers participating in federated learning, enabling them to have a standardized architecture. This standardization makes it easier to share and collaborate on model development. Furthermore, the tokenization process used in LLMs contributes to the effectiveness of federated learning. By adopting a standardized token space, models from different centers can maintain a shared understanding of the input data. This shared understanding allows for effective data sharing and collaboration, as information exchange can be conducted at the token level. It enables the models to exchange relevant information while effectively shielding the original and private data, ensuring data privacy is preserved. Moreover, when a large multimodal model is used, the tokenization process also facilitates the fusion of multimodal data at the token level. Different modalities of data can be mapped to a unified token space, enabling the integration of diverse data types and enhancing the completeness and richness of information within the federated learning framework. In general, effective combination of LLMs and federated learning has significant potential in developing more powerful AGI systems. §.§ Adaption of Large Models The adaptation of large models to new tasks and domains may involve several key techniques: prompt fine-tuning, reinforcement learning, adapter modules (e.g., LoRA), and model distillation. These techniques can be applied interchangeably across language, vision, and multimodal tasks, allowing models to adapt effectively to diverse domains. Prompt fine-tuning is a crucial step in model adaptation. By gradually adjusting the model's parameters using specific examples and expected outputs, the model becomes better aligned with the target task. This iterative process enables the model to capture task-specific nuances and improve its performance by fine-tuning its responses and predictions. Reinforcement learning techniques play a significant role in optimizing the adapted model's performance. By incorporating feedback signals, such as task-specific rewards, the model can learn from trial and error, adjusting its behavior to maximize the desired outcome. This reinforcement-based optimization enhances the model's decision-making capabilities and overall performance. Adapter modules, like LoRA, offer a flexible and efficient approach to adapt large models. These modules allow task-specific adjustments to be introduced without extensively modifying the entire model architecture. By selectively modifying certain parts of the model, the adapters enable the model to specialize for specific tasks while minimizing any potential negative impact on the overall model's performance and capabilities. Model distillation is an effective technique for transferring knowledge from larger models to smaller, more resource-efficient ones. By distilling the knowledge from a larger model into a compact representation, the adapted model can retain the essential insights and generalization capabilities of the original model while being more suitable for deployment in resource-constrained environments. This knowledge transfer enhances the performance and efficiency of the adapted model. Despite the effectiveness of the aforementioned methods in adapting large models to new tasks and domains, it is important to acknowledge that there may be other approaches and techniques. The field of model adaptation is constantly evolving, and researchers continue to explore novel methods to improve the adaptability of large models. These alternative methods may include techniques such as meta-learning, transfer learning, domain adaptation, or even domain-specific architecture design. Each of these methods may have its own advantages and applicability depending on the specific task and domain at hand. Therefore, while prompt fine-tuning, reinforcement learning, adapter modules, and model distillation serve as valuable tools for adaptation, researchers and practitioners should remain open to exploring and experimenting with alternative methods to further advance the field of large model adaptation. §.§ Learning from experts * Direct Expert Involvement in Reinforcement Learning with Expert Feedback (RLEF): This strategy involves inviting clinicians to participate in the process of rating the outputs of the AGI models during their training phase. Clinicians can rank the generated responses based on clinical validity and usefulness, which can enhance the proficiency of the models in delivering clinically accurate information. For instance, in the training phase of an AGI model specialized in cardiology, a cardiologist could provide feedback and rank the model's interpretation of an electrocardiogram. This feedback guides the model in refining its capability to analyze electrocardiograms accurately and mimic the nuanced decision-making process of a real cardiologist. * Multidimensional Ranking Approach: In contrast to a simple ranking system, a more complex and multidimensional approach could better capture the nuances of medical scenarios. For example, consider an AGI model trained to suggest differential diagnoses based on a collection of patient symptoms. A panel of physicians could then evaluate the model's output based on several factors - accuracy of diagnosis, appropriateness to the presented symptoms, consideration of patient medical history, and the potential severity or urgency of each suggested condition. This multi-faceted feedback could provide a more comprehensive training signal for the AGI, thereby enhancing its ability to provide valuable input in complex, real-world clinical situations. * Development of Interdisciplinary Teams: Assemble teams comprising AI specialists, healthcare professionals from diverse specialties, ethicists, and legal experts. These teams can provide well-rounded feedback during the development and validation phases of the models, thus ensuring the AGI tools are accurate, ethical, and comply with regulations <cit.>. * Active Learning Sessions: Clinicians can engage in active learning <cit.> sessions with the AGI models. This strategy allows clinicians to guide the AGI models through a series of (real or simulated) patient scenarios, ensuring that they learn and adapt based on the expert’s interpretation and response to the same scenarios. For instance, a psychiatrist could engage with a Large Multimodal Model (LMM) trained on patient responses and gestures to assist in mental health consultations. Through these sessions, the psychiatrist can provide expert responses to simulated patient scenarios, allowing the model to learn and improve its understanding and handling of complex psychiatric cases * Refinement of Model Prompting and Tailoring to the Medical Domain: AGI models can benefit from the expertise of clinicians in refining the prompting mechanism for specific medical contexts. For example, an LLM could be trained to generate more clinically relevant prompts for a neurological exam under the guidance of neurologists. Prompts can be tailored to elicit detailed symptom descriptions, like refining the question "Can you describe your headache?" to "Can you describe the location, duration, intensity, and any associated symptoms of your headache?". §.§ Leveraging Multimodal Approaches Multimodal models entail the synergy of multiple data modalities, including but not limited to text, images, and audio, to enhance the understanding and reasoning capabilities of medical models. For instance, the language reasoning ability of multimodal models can be employed to facilitate image understanding by extracting and aligning meaningful information from both medical images and reports to improve diagnostic accuracy. Specifically, language reasoning can provide rich, context-specific annotations that aid in illustrating the unique features in medical images, in order to analyze radiological images more effectively, interpret clinical notes, and provide more comprehensive assessments. Additionally, the use of Artificial Intelligence-Generated Content (AIGC) for generating synthetic data with ground truth in text and image formats presents an innovative pathway towards improving model accuracy <cit.>. This direction involves the utilization of multimodal models to autonomously generate diverse and validated content, be it in the form of text-based reports or annotated images, that serves as training data in the medical domain. This could significantly improve accuracy and also generalizability as the diverse generated data in different modalities ensures the system's robustness and efficiency to adapt to diverse tasks and use-cases such as disease diagnosis, medical imaging analysis, and treatment recommendations. This capability is particularly valuable in scenarios where annotated medical data is scarce or inaccessible. Moreover, the multimodal model could close the loop from image generation to the generation of comprehensive analytical reports. It provides a framework for developing an auto reconstruction as feedback loop that not only produces fine-grained image generation with text-guided prompts but also facilitates the generation of precise, comprehensive medical analysis reports from these images, to better learn the image and text embedding in the medical domain. This bi-directional process is essential for effectively translating the insights gained from the texts and images into actionable clinical recommendations. §.§ Safeguarding healthcare: Mitigation strategies for AGI in clinical practice Given the array of potential risks associated with AGI models, there is a need for comprehensive mitigation strategies to ensure their safe and effective use in the medical domain. This section outlines some proactive measures to counteract the risks for LLMs, large vision models (LVM), and large multimodal models (LMM). * Adopting Robust Supervision and Validation Frameworks: Regularly validating the performance of AGI models in a controlled environment can help identify areas of weakness or misinformation. This process should involve interdisciplinary teams of AI specialists, medical professionals, ethicists, and legal experts to assess the models from different perspectives. * Training on Recent and Representative Data: Continuously updating the training data to include recent advancements in medical knowledge and guidelines can help keep the models' advice aligned with current best practices. The training data should also represent diverse demographics to minimize biases in the generated advice <cit.>. * Establishing a Two-way Communication: Allowing the model to ask clarifying questions when necessary can help generate more individualized advice. This can partially address the issue of AGI models not having direct access to a patient's medical records and minimize potential harm. * Implementing Safety Guardrails: Employing a system that alerts users when they are soliciting or being given medical advice is essential. A disclaimer stating that AGI models cannot replace professional medical advice would promote safer use. * Strengthening Privacy Measures: Using robust privacy-preserving techniques such as differential privacy <cit.>, federated learning <cit.>, and de-identification <cit.> in the training process can help safeguard sensitive data. * Bias Detection and Mitigation: Regular audits should be conducted to detect and correct biases <cit.> in AGI models. Machine learning techniques that can detect and mitigate bias in AI models can be incorporated into their development and validation processes <cit.>. * Educating Users: Increasing public awareness about the limitations of AGI models is crucial. Users should be informed about the potential risks associated with relying on AI for medical advice. * Regulatory Oversight: Regulatory bodies need to provide guidelines and policies on the use of AGI models in the healthcare domain. Compliance with regulations such as GDPR and HIPAA should be ensured. * Collaborative Models: Designing AI models that can work in collaboration with human doctors, rather than replacing them, can help maximize the benefits of AI while minimizing its risks. This also incentivizes wide acceptance of AGI in the health sector. These counteractive measures are integral to promoting the responsible use of AGI in the medical field, and to maintain the integrity of medical practice and the privacy of patients. It is essential that we continue to investigate these models, raising concerns and finding solutions to ensure that their potential is harnessed in a safe, ethical, and beneficial manner. §.§ Combining Large-Scale and Local Small-Scale Models The combination of large-scale models and local small-scale models could significantly enhance the scope of computer-aided diagnosis and analysis, enabling its application across diverse scenarios and locations. Large-scale models, such as ChatGPT, GPT-4 <cit.>, and Med-PaLM <cit.> which pretrained on vast amounts of data, possess extensive knowledge and robust representations. These large foundation models are typically housed in central institutions or hospitals that possess large in-house training data, high computational resources, and substantial funding to drive cutting-edge medical image analysis and NLP research and applications. However, deploying these models directly in resource-constrained hospital settings or scenarios requiring real-time inference may not be practical due to computational limitations. Knowledge distillation <cit.> could be a potential solution to this challenge. Large-scale foundation models can act as teachers to train local small-scale models that are more suitable for deployment in constrained settings. These small-scale models can leverage the knowledge distilled from their larger teachers while being optimized for real-time inference and limited computational resources. This approach strikes a balance between model capacity and efficiency, ensuring that medical practitioners can benefit from the power of large models while catering to the practical constraints of healthcare environments. By leveraging these local small-scale models, real-time medical artificial intelligence systems designed for scenarios with limited computational resources or urgent need for rapid or even instantaneous inference will transition from theoretical construction to practical application. For instance, these models can be deployed in emergency rooms, where quick decision-making is critical, or they can be integrated into surgical tools such as endoscopes for real-time inference during surgery. Furthermore, small-scale models could play a crucial role in facilitating real-time dialogues and guidance with home-hospital AI assistants, thereby extending the reach of high-quality medical care. § CONCLUSION The potential of Artificial General Intelligence (AGI) in healthcare, particularly through Large Language Models, Large Vision Models, and Large Multimodal Models, is promising. AGI models have the capability to completely revolutionize healthcare, especially when they include experts in the loop, incorporate medical domain knowledge, and cater to the multimodal nature of real clinical data. Strategies such as Reinforcement Learning with Expert Feedback (RLEF), federated learning and the use of semi-supervised and active learning methodologies offer valuable methods to enhance AGI models for medical applications. However, deploying AGI in healthcare is not without challenges. Ethical considerations, data accessibility challenges, the significant demand for large-scale datasets, risks of erroneous outputs, data privacy and security concerns, and regulatory hurdles underscore the need for careful handling of these transformative technologies. Interdisciplinary collaboration and close cooperation between scientists and clinicians will be key to unlocking the full potential of AGI in healthcare. As we venture deeper into the AGI era, the future of healthcare appears to be closely tied with the development of AGI technologies. Despite the challenges, the opportunities offered by AGI in improving patient care and outcomes are vast. This review hopes to serve as a reference point and catalyst for further exploration into this dynamic and rapidly evolving field. IEEEtran.bst
http://arxiv.org/abs/2306.07138v2
20230612142220
Discovering Ferroelectric Plastic (Ionic) Crystals in the Cambridge Structural Database: Database Mining and Computational Assessment
[ "Elin Dypvik Sødahl", "Seyedmojtaba Seyedraoufi", "Carl Henrik Görbitz", "Kristian Berland" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.chem-ph" ]
[email protected] Department of Mechanical Engineering and Technology Management, Norwegian University of Life Sciences, 1432 Ås, Norway. Department of Mechanical Engineering and Technology Management, Norwegian University of Life Sciences, 1432 Ås, Norway. Department of Chemistry, University of Oslo, 0371 Oslo, Norway. [email protected] Department of Mechanical Engineering and Technology Management, Norwegian University of Life Sciences, 1432 Ås, Norway. Hybrid or organic plastic crystals have the potential as lead-free alternatives to conventional inorganic ferroelectrics. These materials are gaining attention for their multiaxial ferroelectricity, above-room-temperature Curie temperatures, and low-temperature synthesis. Here, we report a screening study of the Cambridge Structural Database (CSD) resulting in 55 new candidate plastic and plastic ionic ferroelectric molecular crystals, along with 16 previously reported ferroelectrics. With over 1.2million entries in the CSD, the screening procedure involved many steps, including considerations of molecular geometry and size, space group, and hydrogen bonding pattern. The spontaneous polarization and electronic band gaps were predicted using density functional theory. 21 of the candidate ferroelectrics have a polarization greater than 10 μ C/cm^2, out of which nine are reported at room temperature. Discovering Ferroelectric Plastic (Ionic) Crystals in the Cambridge Structural Database: Database Mining and Computational Assessment Kristian Berland July 31, 2023 ===================================================================================================================================== § INTRODUCTION Plastic crystals are molecular crystals characterized by the existence of an orientationally disordered mesophase. In the mesophase, the material becomes ductile, i.e., “plastic”, due to both reduced intermolecular bonding and increased symmetry yielding facile slip planes.<cit.> As a result, plastic crystals can be molded and fused, in stark contrast to most molecular crystals and inorganic ceramics, which tend to be non-flexible and/or brittle.<cit.> These materials can be synthesized with low-cost and low-energy methods including co-precipitation, slow evaporation, spin coating, and 3D printing. <cit.> Plastic crystals can be bonded solely through both van der Waals or hydrogen bonding and such compounds are referred to as the plastic molecular crystals, but they can also have an additional ionic component consisting of charged molecular species, which are referred to as plastic ionic crystals. Typically, the molecular species of plastic crystals have a quasi-spherical or "globular" shape, which reduces rotational barriers. <cit.> In addition to the plastic mesophase, the molecular rotations can imbue the plastic crystals with functional properties such as ferroelectricity. Ferroelectric plastic crystals can exhibit a rich phase diagram,<cit.> with multiple competing crystalline phases and the existence of plastic ferroelectric mesophases, as seen in quinuclidinium perrhenate, having partial orientational disorder. <cit.> They can also exhibit multiaxial polarization with as many as 24 equivalent axes, <cit.> and Curie temperatures up to 466 K.<cit.> For applications such as FeRAM and piezoelectric sensing, a high Curie temperature is essential. Multiaxial polarization allows the spontaneous polarization to be aligned in a desired direction in polycrystalline systems, and can allow for multi-bit storage in single crystals. <cit.> A key advantage of ferroelectric plastic crystals is the fact that they can exhibit low coercive fields, e.g., for 1-azabicyclo[2.2.1]heptanium perrhenate and quinuclidinium perrhenate values in the 2 - 5kV/cm range have been reported. <cit.> Such values are comparable to that of BaTiO3, a well-known inorganic ferroelectric.<cit.> However, their spontaneous polarization tends to be on the lower end, with most reported values falling below 10 μ C/cm^2. <cit.> In 2020, Horiuchi et al. cataloged approximately 80 reported small-molecule ferroelectric crystals. <cit.> This number is in stark contrast to the collection of approximately 1.2 million organic structures in the Cambridge Structural Database (CSD), <cit.> which potentially harbors many undiscovered ferroelectric plastic crystals. Screening this database, we recently reported 6 new organic proton-transfer ferroelectric candidates.<cit.> In this paper, we detail our screening of ferroelectric candidates likely to exhibit plastic properties. For all the identified materials, we used density functional theory (DFT) computations for geometry optimization of the crystal structure and to predict the spontaneous polarization and electronic band gaps. The new compounds identified in this manner are not only of interest in themselves, but can serve as template structures for further crystal engineering, i.e., by substituting molecular species or halides to tune functional properties. § METHODS §.§ Mining the CSD for Ferroelectric Plastic Crystals To identify candidate ferroelectric plastic crystals in the CSD, we reduced the number of structures in five filtering steps based on the properties of the molecular crystal structure and the constituents molecules, as outlined in Fig. <ref>. For this procedure, the CSD Python API<cit.> and the Molcrys<cit.> package developed by us, based on the Atomic Simulation Environment<cit.> and the Networkx <cit.> package were used. Step 1 excluded non-polar, polymeric, or disordered structures, as well as less accurate structures, i.e., with an R-factor higher than 0.075. Step 2 excluded structures with unit cells containing more than 150 atoms, removing many complex and large structures. Thus, we avoided time-consuming DFT computations. Step 3 excluded all materials containing solely C and H. This choice was made as polar covalent bonds or charge transfer between species is needed for high polarization, which requires electronegativity differences. Step 4 excluded all structures with bulky and/or elongated molecules, as steric hindrance would typically be too large for molecular rotations in a solid phase. Only molecules with 10 or fewer non-hydrogen atoms were retained. Molecular graph theory was used to remove "chainy" molecules, such as all aliphatic chains longer than four carbon atoms. This is further detailed in Appendix <ref>. Further, we removed structures that lacked at least one molecule with a globular or semi-globular geometry. As a globularity measure, we used the ratio between the volume of the convex hull and the volume of the smallest bounding sphere of the molecule, excluding hydrogen atoms. With this measure, the C60 fullerene has a globularity of 0.87, while acetic acid has a globularity of 0.10. The smallest allowed globularity was set to 0.35, based on a review of known molecular ferroelectrics. As an example, quinuclidinium perrhenate and iodate are multiaxial ferroelectric plastic ionic crystals with low coercive fields, of respectively 340 and 255kV/cm, where the quinuclidinium molecule has a globularity of 0.52. In comparison, the non-plastic ferroelectric [Cu-(Hdabco)(H2O)Cl3]<cit.> has a globularity of 0.24. Step 5 excluded structures where the globular molecules have more than two hydrogen bonds to avoid 3D hydrogen-bonded networks which would hinder molecular rotations. After all filters were applied, the pool was reduced to 75 structures. For each, the spontaneous polarization and electronic band gap were computed using DFT. While we identified a wide range of candidate materials, the screening criteria have caused some candidates to be omitted. The cap of the number of atoms in the unit cell for instance excluded the ferroelectric metal-free plastic perovskite [NH3-dabco]NH_4I3, as it has 198 atoms in its unit cell. The molecular size limit furthermore excluded some known plastic crystals, such as adamantane derivates, the C60 fullerene, <cit.>and plastic colloidal crystals.<cit.> Nonetheless, our target was not to identify all plastic ferroelectrics, but rather identify several of technological interest. Notably, ferroelectric plastic crystals of small molecules would typically have a larger density of dipoles, both originating from individual molecules and inter-molecular charge transfer. Ferroelectric plastic crystals also tend to have ferroelectric phases of high symmetry, resulting in relatively small unit cells. Finally, this screening procedure does not evaluate if the spontaneous polarization is switchable, as is required for ferroelectrics. The presence of small, globular molecules facilitates a rotational switching mechanism. However, in some of the compounds, the switching path is not clear-cut, and more involved computations or experimental studies have to be performed for a full assessment. §.§ Density Functional Theory Calculations The DFT computations were carried out using the VASP software package <cit.> with the projector augmented plane wave method (PAW) pseudopotentials. <cit.> The plane wave cut-off was set to 530 eV for all computations. A Γ-centered Monkhorst-Pack k-point grid with a spacing of 1/15 Å^-1 was used to sample the Brillouin zone. All structures were relaxed until forces fell below 0.01 eV/ Å. The spontaneous polarization was computed using the Berry phase method. <cit.> Both relaxation and polarization calculations were performed using the vdW-DF-cx functional, <cit.> which we found to provide accurate lattice constants in our benchmarking study of exchange-correlation functionals for ferroelectric plastic crystals. <cit.> § RESULTS AND DISCUSSION Out of the 75 materials identified by screening the CSD, 16 are earlier reported to have ferroelectric properties. <cit.> Four have also been studied for their piezoelectric and/or dielectric properties, but were not reported to be ferroelectric (CSD refcodes: BOXCUO, QIMXER, BOBVIY12, and BOCKEK06).<cit.> Table <ref> lists the available experimental results, as well as the computed spontaneous polarization and electronic band gaps for the earlier reported ferroelectrics and non-ferroelectrics, while Table <ref> lists the computed values for the candidate materials. For three of them, the DFT computations did not yield a band gap. This was confirmed using the hybrid functional of Heyd, Scuseria, and Ernzerhof, HSE06,<cit.>, which does not underestimate band gaps as the vdW-DF-cx functional does due to the lack of non-local exchange. Fig. <ref> shows an overview of the molecular and ionic molecular crystals among the earlier reported and candidate ferroelectrics. Out of the 16 known ferroelectrics, 14 are ionic molecular crystals. For the candidate materials, 20 are molecular, and 35 are ionic molecular crystals. A selection of the chemical species found in the identified materials, including the globular molecules and the neutral molecules and ions combined with the globular is illustrated in Fig. <ref>. In total, the materials contain 30 different anionic molecules, 31 cations, and 25 neutral molecules. Fig. <ref> illustrates the seven groups of materials identified, based on the composition and geometry of the globular molecules. Three candidate materials do not fit into any of these categories, they are listed as "Other" in Table <ref>. Fig. <ref> shows that the experimentally measured spontaneous polarizations<cit.> agree well with previously reported values for plastic molecular and plastic ionic molecular crystals, but computed values are often slightly higher than the experimental. Extrinsic effects such as defects, grain orientation and boundaries, and electronic leakage can reduce the experimental spontaneous polarization.<cit.> Moreover, atomic vibration and molecular librations can also reduce the measured spontaneous polarization. In assessing the properties with DFT, we used the lowest temperature ferroelectric phase reported in CSD, as the dynamic molecular motion could play a significant role at elevated temperatures and in particular in the high-temperature phases. Note that for 44 of the materials, the structure reported at the lowest temperature coincides with the room temperature phase. For four materials, a paraelectric phase is reported at room temperature, and for one material, VAGVAA01, the room temperature structure is a different potentially ferroelectric phase. Finally, for 26 of the materials, no crystalline room temperature phase has been reported in the CSD. §.§ Candidate Ferroelectric Plastic Crystals §.§.§ Trimethyl-X-Y materials One group of plastic ferroelectric candidates is the trimethyl-X-Y materials, where X denotes an atom and Y a chemical group or atom; two structures illustrated in Fig. <ref>. Out of the 21 materials, only one (DIRKEU01) has previously been reported as ferroelectric. Of the remaining 20, seven are based on neutral molecules, while the rest are constructed from various combinations of 12 anions and seven different cations, the most common being tetramethylammonium, found in 8 out of the 21 materials. This group of materials shows significant promise, with eight having computed spontaneous polarization values exceeding 10 μ C/cm^2 (Table <ref>). The largest value is found for VUGNUG with 22 μ C/cm^2. For two of the materials, PEVXOE and PEVXUK, we find no bandgap and therefore no spontaneous polarization is listed. §.§.§ Dabco-based Materials Materials constructed from various derivatives of 1,4-diazabicyclo[2.2.2]octane (dabco) in combinations with anions such as ClO4- and ReO4- are attractive ferroelectrics with low coercive fields, and rapid ferroelectric switching reported with frequencies up to 263kHz.<cit.> Moreover, Curie temperatures as high as 540K have been reported by Li et al.<cit.> In our study, we found 11 dabco-based materials, out of which five have previously been reported as ferroelectrics.<cit.> Two of these, BILNES and BILNOC, are organic metal-free perovskites, <cit.> see Fig. <ref>. Interestingly, two of the candidate materials, LOLWEO and HUSRES, are co-crystals of charge-neutral molecules. The computed spontaneous polarization values for the known ferroelectric dabco-based materials range from 3.56 to 21.9 μ C/cm^2 for BILNOC, compared to 0.4 - 18.4 μ C/cm^2 for the identified candidate ferroelectric plastic crystals. §.§.§ Quinuclidinium-based Materials Several materials containing variations of the quinuclidinium molecule have been studied in recent years. <cit.> Notably, Tang et al. reported a Curie temperature of 466K<cit.> for [F-C7H13N]ReO4. Another compound of interest is [C7H14N]IO4, for which You et al. found 12 equivalent directions of polarization.<cit.> The reported coercive fields vary from 255 all the way up to 1000 kV/cm.<cit.> All the seven quinuclidinium-based materials found in our study have previously been reported as ferroelectrics, <cit.> (Table <ref>). Five of them are plastic ionic crystals, while the last two are plastic molecular crystals and stereoisomers of the same compound. The computed spontaneous polarizations of the quinuclidinium materials range from 5.2 to 12.7 μ C/cm^2. §.§.§ Hexamine-based Materials Four hexamine-based crystal structures were identified in the screening study, none of which (to our knowledge) have been reported as ferroelectrics in the past. Fig. <ref> shows the two variations of the hexamine molecule found, the regular hexamine molecule, and one where a nitrogen atom has been substituted by phosphorus. Three structures are ionic molecular crystals, with spontaneous polarizations ranging from 10.6 to 18.4 μ C/cm^2 (Table <ref>). One of these, HMTAAB, has a perovskite-like structure, see Fig. <ref>. The non-ionic molecular crystal with a spontaneous polarization of 0.9 μ C/cm^2 is reported with two refcodes in the CSD, TAZPAD and INEYUY. The large spontaneous polarization values of the ionic hexamine-based materials should encourage further experimental characterization and optimization of this group of materials. §.§.§ Boron Cluster Materials While boron clusters have been studied for their characteristic chemistry, biological application, as well as magnetic, optic, and electronic properties, <cit.> their ferroelectric properties have not received particular attention. Our screening study identified four materials containing boron clusters (Table <ref>). Two, SASSOU and LUWHOD, are reported at room temperature or higher and have spontaneous polarizations around 8 μ C/cm^2. The highest polarization is computed for OTOLAM, at 11.5 μ C/cm^2. The relatively large polarization combined with room temperature stability makes the boron cluster-based materials interesting for further studies. §.§.§ Materials Based on Cyclic Organic Molecules Some of the crystal structures identified contain cyclic organic molecules, both aromatic or non-aromatic, and we grouped these together. The 10 materials identified are all ionic molecular crystals including three known ferroelectrics <cit.> (Table <ref>), two of them being stereoisomers of the same compound. The third, RUJBAC, is an organic-inorganic hybrid perovskite. <cit.> For all three, the spontaneous polarization fall below 3 μ C/cm^2. The coercive fields are in the range 10 kV/cm,<cit.>, and they all exhibit Curie temperature exceeding 450K.<cit.> Among the seven candidates identified here, all but one have spontaneous polarizations exceeding the earlier reported ferroelectrics, HAJXEW having the largest value of 16.5 μ C/cm^2 (Table <ref>). Three of the candidates identified, WAQBOH, AMINIT, and FENYEC have been reported as stable at room temperature, with the latter melting above 400K. §.§.§ Materials Based on Organic Cage-like Molecules We found eleven crystals structures with various organic cage-like molecules, none of which have previously been reported as ferroelectric. Seven of these are molecular crystals, while four are ionic (Table <ref>). Even though there are no reported room temperature structures for MIRHUQ and XIBVIN, their reported sublimation and melting temperatures are high, 388 and 443K, respectively. Interestingly, these two structures are polymorphs of the same compound. JEBVOC is reported to have a melting temperature larger than 533K. Finally, three of the candidate ferroelectrics do not fit into any of the defined groups of materials. The computed spontaneous polarizations are below 3 μ C/cm^2 for all of them. HUPTUI and VAJKUM are isostructural and are built up of tris(dimethylamino)sulfonium molecule and an octahedral inorganic anion. §.§ Comparison of Candidate and Previously Reported Ferroelectrics The previously reported ferroelectrics frequently contain cage-like organic molecules, such as dabco and quinuclidinium-derivatives, combined with halogen, FeCl4- or XO4- anions. The molecules in the identified candidates are by comparison generally smaller: 25 materials consist of molecules with five or fewer carbon atoms (not counting structures of boron clusters). The smaller volume of the asymmetric unit allows for larger spontaneous polarization. For example, DIRKEU01 is a known ferroelectric with a computed polarization of 5.5 μ C/cm^2, VUGNUG is a candidate material with a value of 22.0μ C/cm^2. Both materials are of the trimethyl-X-Y group with 0.5 formula units per asymmetric unit, and the cell volume of DIRKEU01 is almost twice as large as VUGNUG. The smaller volume of VUGNUG results in a higher dipole density, and thus the larger spontaneous polarization. The lower molecular weights of the smaller molecules of the candidate materials could, however, lead to reduced melting points. Overall, the candidates display a more diverse set of anions that include entities like HF2-, CNO-, FeCl3NO-, and SF5O-. §.§ Polarization and Alignment of Molecules Molecules in plastic crystals often pack in complex arrengements, and the direction of the individual molecular dipoles does not necessarily align with the direction of the spontaneous polarization.<cit.> In Table <ref> and <ref>, the "dipolar" direction relative to the polarization direction is listed. Typically, the unit cell holds several equivalent molecules that align at the same angle with the polarization axis, as given by the space group symmetry. Mirror and/or rotational symmetries cause the polarization contributions perpendicular to the polarization axis to cancel out. For many of the molecules, the dipolar direction is evident from their symmetry, but not all molecules have a clear direction. We therefore conveniently obtain the "dipole" direction from the moments of the electronegativity relative to the center of electronegativity. Fig. <ref> illustrates the alignment of the molecular dipoles relative to the polarization axis for MIWBEC. In the cases where the molecules are symmetrical or have a negligible dipole, no alignment is listed. In these materials, interspecies charge transfer is the dominant contribution to the spontaneous polarization. Examples of such systems, are six of the dabco-based materials, SIWKEP, TEDAPC28, WOLYUR08, VAGVAA01, GASBIO, and NAKNOF03. Despite negligible molecular dipoles, their spontaneous polarizations are in the range 5.0-18.4 μ C/cm^2. For systems where the molecular dipoles run counter to the overall polarization, an interesting prospect opens up for realignment of the dipoles by applying an electric field. This can both increase the spontaneous polarization and allow for multi-bit storage, assuming that the electric field required to realign the dipoles is smaller than the coercive field of the ferroelectric material. §.§ Plastic Properties of the Candidate Ferroelectrics While the screening study can identify candidate ferroelectric plastic crystals, truly predicting whether the materials can transition to a plastic mesophase demands more involved simulations such as molecular dynamics. This is not feasible for the large pool of candidate materials identified in this study, but out of the 16 identified known ferroelectrics, 11 are reported to be plastic crystals. <cit.> Furthermore, the high-temperature phases of BUJQIJ and BUJQOP have not been solved due to poor XRD data, <cit.> which can indicate the high degree of disorder typical for plastic crystals. This shows that the screening procedure is suited to identify materials with plastic properties and orientationally disordered mesophases. To identify plastic crystals amongst the candidate materials, we look to the CSD. We both investigated all structures within each refcode family and performed a structure search using Conquest<cit.> to find plastic phases reported with a different refcode. Three candidate materials were identified as plastic crystals where all molecules exhibit rotational disorder, namely TMAMBF11,<cit.> ZZZVPE02,<cit.> and ZISCUC.<cit.>. LOLWEO has a reported high-temperature phase where one of the two molecular constituents shows rotational disorder. It can be noted that no reported plastic phase does not necessarily indicate that the material is not plastic – only that no plastic phase has been studied. §.§ Cohesive Energy and Thermal Stability For device applications, thermal stability is an important property, and operational temperatures should be significantly below melting and sublimation temperatures. The melting temperature of plastic crystals can be higher than for similar ionic and molecular crystals. As these materials transition into a plastic mesophase, the orientational disorder and increased freedom of movement increase the entropy, making it less favorable to melt due to the reduced entropy gain, as first reported by Timmermans <cit.>. The volume change at the phase transition is also small.<cit.> In some cases, the reduced entropy gain can make sublimation more favorable than melting. A high Curie temperature is also a prerequisite for ferroelectric and piezoelectric applications. For 44 out of the 75 materials identified in this study, the CSD entry concerns an ordered structure obtained at amibient temperature. The fraction of room temperature investigations is highest for ionic crystals, Fig. <ref>, a result that is in line with the higher stability of the ionic molecular crystals, due to electrostatic interactions. To further investigate the cohesion of molecular crystals, we compute the cohesive energies of XIBVIN and MIRHUQ. These materials are polymorphs of the same compound, differing only by their alignment of molecules, see Fig. <ref>. XIBVIN has a low degree of alignment of 77^∘, combined with a low spontaneous polarization of 1.3 μ C/cm^2. MIRHUQ shows better alignment, with angles of 29^∘ and 31^∘, and as one could expect, a higher spontaneous polarization of 11.3 μ C/cm^2. While the larger polarization of MIRHUQ makes it more interesting as a ferroelectric, it is reported to sublimate at 388K, while XIBVIN melts at 443K. The computed values for the cohesive energies are 0.26eV/molecule for MIRHUQ and 0.30eV/molecule for XIBVIN, the larger value corresponding to the less aligned structure with the highest melting point. The difference in thermal stability also indicates that at most one of the structures has a plastic phase. If both structures transitioned into rotationally disordered plastic mesophase, it could be assumed that the plastic phases had similar structures, and thus similar phase transition temperatures and mechanisms. The higher phase transition temperature of XIBVIN indicates that this structure transitions into a plastic mesophase, as the entropy gain in such transition can stabilize the solid phase. §.§ Electronic Band Gap and Multisource Energy Harvesting Ferroelectric materials with band gaps in the visible range can be of particular interest, as these can be promising for application in multi-source energy harvesting devices where piezo- or pyroelectric energy harvesting is combined with the harvesting of solar energy. <cit.> Therefore, we computed the band gaps for all identified materials to find those that could be suitable for such applications. Tables <ref> and <ref> lists the computed band gaps. As the computations were performed using the vdW-DF-cx functional, which describes exchange at the generalized-gradient approximation level, <cit.> we expect the values to be underestimated. Three materials are predicted to not have a band gap, this was confirmed using the hybrid functional HSE06. Most of the computed band gaps exceed 4 eV; however, six materials have values smaller than 2 eV. Four of these belong to Trimethyl-X-Y group, which would be interesting for multi-source energy harvesting. §.§ Design Strategies for Ferroelectric Plastic Crystals Novel and improved ferroelectric plastic crystals can be engineered by making new combinations of molecular species. As such, the various species found in the various plastic crystal candidates can be used as building blocks for novel material design, a selection of these are displayed in Fig. <ref>. Of particular interest are substitutions that are likely to result in similar or isostructural materials. For systems where this is possible, solid-solution engineering can be used to tweak material properties. Three examples of pairs of candidates for such alterations are found among the trimethyl-X-Y materials. The compositions between each pair of materials are similar, only differing by the substitution of molecules with similar geometries. PEVXOE and PEVXUK are isostructural and only differ by the substitution of (CH3)4P+ for (CH3)4As+. For ZZZVPE02 and TMAMBF11, the substitution of (CH3)3BH3N to (CH3)3BF3N yields an isostructural material where the spontaneous polarization is increased from 5.4 to 20.3 μ C/cm^2. A similar effect is seen for YODGON and VUGNUG, where the substitution of OCN- for N3- leads to an increase in the spontaneous polarization from 6.4 to 22.0 μ C/cm^2. Another example of substitution resulting in similar packing is the earlier reported ferroelectrics TEDAPC28 and SIWKEP, which consist of a dabco molecule and ClO4- or ReO4-, respectively. Such a substitution somewhat changes the orientation of molecules, resulting in a different space group. Whereas the ReO4- material has a computed spontaneous polarization of 8 μ C/cm^2, while the ClO4- analog has a value of 5.9 μ C/cm^2. The screening identified ten different tetrahedral inorganic anions, but other options also exist and can be considered for the design of new ferroelectric plastic crystals. Substitutions of single atoms in the organic globular molecule can also be a route to engineer the ferroelectric properties.<cit.> For instance, Lin et al. substituted a hydrogen atom in tetramethylammonium for the halogens I, Cl, and Br. The crystal structure was retained, while the number of ferroelectric polar axes increased from 2 for iodine to 6 for chlorine.<cit.> § CONCLUSIONS Using a CSD-based workflow, we have identified 55 candidate ferroelectric plastic crystals. The 21 with spontaneous polarization exceeding 10 μ C/ cm^2 are arguably the ones with the most potential in ferroelectric devices. Among these, eight in the trimethyl-X-Y group also display a large variation in electronic band gaps that could make them useful also for multi-source energy harvesting. Our study has successfully identified a range of candidate ferroelectric plastic crystals, including 16 that have been reported as ferroelectrics in the past. The screening criteria used were quite strict, and relaxing some of them, such as the size limitations on both molecules and unit cells, would have expanded the pool. The criteria were based on the previously reported ferroelectric plastic crystals. For this reason, crystal structures that differ significantly from the earlier reported ones could have been overlooked. Still, the identification of the boron cluster-based materials was unexpected, illustrating the potency of the CSD screening. In the design of novel ferroelectric plastic crystals, a good starting point is combining different globular molecular species, i.e., combinations of cations and anions. The various molecular species in the materials identified here, and the corresponding crystal structures, can serve as inspiration for such design. In this study, we have not assessed whether the polarizations of the candidate ferroelectrics are switchable, which is a criterion for ferroelectricity. However, all the candidate materials will have pyro- and piezoelectric properties due to their crystal symmetries. This study and the identified compounds should stimulate further theoretical or experimental studies, to assess their ferroelectric switchability, and other characteristics such as the Curie temperatures, and material stability. § DATA AVAILABILITY All the relaxed crystal structures used in this study can be accessed through the NOMAD database at <https://dx.doi.org/10.17172/NOMAD/2023.06.12-1>. All other data is available upon reasonable request. The computations of this work were carried out on UNINETT Sigma2 high-performance computing resources (grant NN9650K). This work is supported by the Research Council of Norway as a part of the Young Research Talent project FOX (302362). Structure and molecule figures are made using the software programs Mercury and VESTA. § MOLECULAR GRAPH THEORY A graph, or network, consists of a set of nodes that can be connected by edges. <cit.> Fig. <ref> illustrates this in the context of a molecule. A covalently bonded molecule is represented as a graph, with atoms as nodes and covalent bonds as edges. A node with two edges is called a chain and if a part of a graph can become disconnected by cutting one single chain, it is called a bridge. To avoid long, flexible molecules that are unlikely to rotate, we required molecules to have at most two connected bridge nodes, i.e., the molecule shown in Fig. <ref> was excluded due to a high number of connected bridges.
http://arxiv.org/abs/2306.03254v1
20230605211502
Characterizing the Effects of Single Bus Perturbation on Power Systems Graph Signals
[ "Md Abul Hasnat", "Mia Naeini" ]
eess.SY
[ "eess.SY", "cs.SY", "eess.SP" ]
Characterizing the Effects of Single Bus Perturbation on Power Systems Graph Signals This material is based upon work supported by the National Science Foundation under Grant No. 2238658. Md Abul Hasnat, Graduate Student Member, IEEE, and Mia Naeini, Senior Member, IEEE Md A. Hasnat is with the Department of Electrical Engineering, University of South Florida, Tampa, FL 33620 USA (e-mail: [email protected]). Mia Naeini is with the Department of Electrical Engineering, University of South Florida, Tampa, FL 33620 USA (e-mail: [email protected]). Corresponding author: Md Abul Hasnat. July 31, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================== This article explores the effects of a single bus perturbation in the electrical grid using a Graph Signal Processing (GSP) perspective. The perturbation is characterized by a sudden change in real-power load demand or generation. The study focuses on analyzing the spread of the perturbation throughout the grid and proposes a measure of spreadability based on GSP. Moreover, the global and local smoothness properties of the difference bus voltage angle graph signals are evaluated for understanding their embedded patterns of spreadability property. It is demonstrated that the global smoothness of the bus voltage angle graph signal follows a quadratic relationship with the perturbation strength, which helps in characterizing the critical perturbation strength after which the power flow diverges indicating a stressed system. The impact of a single bus perturbation on power system graph signals has been investigated through both analytical derivations using the DC power flow model and simulation using the AC power flow model. The results reveal that the proposed measure of spreadability as well as local and global smoothness properties of the graph signals are independent of the perturbation strength and instead mainly depend on the perturbation's location. Graph signal smoothness, single bus perturbation, spreadability, critical load, power flow non-convergence. [A, 01]𝒱Set of all buses (vertices). [A, 01]𝒮Set of all generator buses. [A, 01]ℒSet of all load buses. [A, 01]𝒢Graph associated with the power system (i.e., the domain of the graph signal). [A, 01]𝒢^'Unweighted version of 𝒢. [A, 01]ℰSet of all transmission lines (edges). [A, 01]𝒲Set of all edge weights. [A, 01]𝒩_u^(K)Set of the K- hop neighbors of v_u. [A, 01]𝒮Slope operator. [A, 01]𝒟(v_i,v_j)Shortest path distance operator between vertices v_i and v_j. [A, 01]𝐋Graph Laplacian Matrix. [A, 01]𝐁Susceptance Matrix. [A, 01]𝐐Matrix containing information about grid topology and electrical distances defined as (𝐁^-1)^T𝐋𝐁^-1. [A, 01]𝐑Matrix containing information about grid topology and electrical distances defined as (𝐁^-1)^T𝐁^-1. [A, 01]d_ijGeographical distance between bus i and j. [A, 02]e_ijLink between vertex v_i and v_j. [A, 02]w_ijWeight of the link e_ij. [A, 02]l_ijEntry of row i and column j of 𝐋. [A, 02]b_ijEntry of row i and column j of 𝐁. [A, 02]β_ijEntry of row i and column j of 𝐁^-1. [A, 06]λ_kk-th eigenvalue of 𝐋. [B, 01]x(v_n), x(n)Graph signal in general. [B, 01]x(n,t)Time-varying graph signal. [B, 01]p(n)Bus real power graph Signal. [B, 01]p_d(n)Actual load demand graph signal. [B, 01]p_d(n)Generated real power graph signal. [B, 01]𝐱Graph signal x(n) in vector form. [B, 01]θ(n)Bus voltage angle graph signal. [B, 01]Δθ(n)Difference bus voltage angle graph signal, after and before the perturbation. [B, 01]ψ_u(n)Normalized difference voltage angle graph signal. [B, 01]g_xGlobal smoothness of graph signal x(n). [B, 01]l_x(n)Local smoothness of graph signal x(n). [B, 01]C^'(n)Modified normalized closeness centrality of vertex v_n. [B, 01]f_y(ζ)Probability distribution of random variable y. [C, 01]NCardinality of the set 𝒱. [C, 01]t_uPerturbation instant. [C, 01]ψ̅_u^(K)Mean of the signal values of ψ(n) for all the vertices at K- hop distance from the perturbed bus. [C, 01]γPerturbation strength. [C, 01]γ_cCritical perturbation strength. [C, 01]γ_ncNon-convergence perturbation strength. [C, 01]KHop distance. [1.8cm] § INTRODUCTION Graph signal processing (GSP) has emerged as a prominent field that focuses on the analysis of structured data over the graph domain. Recently, GSP has found applications in the analysis of power system data by representing the power system as a graph and its measurements over the graph as graph signals <cit.>. By extending the theories and tools of classical signal processing to the irregular graph domain, GSP facilitates imparting explicit information about the topology, connectivity, and interactions among the components of the system into the analysis of data. Detection, localization, and classification of anomalies, attacks, and stresses in the electric grid <cit.>, state estimation and recovery <cit.>, estimation of load current variability in the presence of distributed generators <cit.>, and load disaggregation <cit.> are examples of applications of GSP in addressing problems in power systems. Analyzing power grid data through the lens of GSP has revealed that signatures and patterns of stresses in the system are embedded in various properties and features related to the system's graph signals <cit.>. In this work, the focus is on understanding the features and patterns in power systems graph signals due to abrupt changes in the load demand or generated power in a single bus. Although fluctuation of load demand within an acceptable range is normal and perpetual in the power system, understanding the patterns of load change is important for situational awareness, particularly in the context of smart grids with intermittent and low-inertia loads <cit.>. A typical scenario is the charging of electrical vehicles (EVs) as a load added to the grid (G2V technology) <cit.>. Since the load demand associated with the charging of the EVs is more probable to be clustered geographically <cit.>, a monotonous increase of load demand at a particular bus can be a common situation. Another origin of the monotonous increase in load demand can be the load-altering cyber attacks purposefully launched by adversaries <cit.>. The abrupt changes in the generation of real power are not common in traditional power systems but are possible in modern power grids when a large number of renewable energy resources are connected to the grid by converters <cit.>. In this work, a general approach has been considered, from the GSP perspective, to analyze the effects of changes in the load demand or generated real power at a particular bus, modeled as single bus perturbation, without explicitly modeling the cause of perturbation. The first presented study is focused on understanding how a single bus perturbation spreads through the power grid depending on the strength and the location of the perturbation. The analysis of the spreadability of a bus perturbation is important from several perspectives in the context of grid stability and reliability analysis. A more spreadable perturbation can affect a large number of components (e.g., buses, transmission lines), even at distant locations from the perturbation point, and introduces stresses in the grid that may even lead to cascading failures or blackouts. Here, a GSP-based measure is defined to quantify the spreadability of the perturbation depending on its strength and location. This spreadability measure is also useful for planning the placement of low-inertia loads and generators in the grid. In addition to understanding the spreadability, it is important to understand how the perturbation affects other graph signal features to gain an improved situational awareness under stress. For instance, power system graph signals, especially the bus voltage angle graph signal, are generally smooth during normal grid operation <cit.>; however, the local and global smoothness properties of the graph signals vary under stress. This study focuses on understanding how the global and local smoothness values associated with the power system graph signals are affected as a function of the perturbation strength and location. The relation between the proposed spreadability measure and local and global smoothness features of graph signals has also been explored and it has been shown that certain smoothness parameters associated with the difference graph signals (before and after perturbation) can be good estimators of the spreadability of the perturbations. The effects of single bus perturbation on the power system graph signals have been derived analytically using the DC power flow model and simulated using the AC power flow model to verify the properties in more realistic scenarios. The presented analytical approach shows that the proposed measure of spreadability does not depend on the perturbation strength, but rather depends on the location of the perturbation. Our experiment based on the AC power flow model closely supports this property. Moreover, the presented analytical analysis shows that the global smoothness of the bus voltage angle graph signal is a quadratic function of the increasing load demand (or generated real power) at a particular bus. Based on this analysis, there is a critical value of input power at each bus beyond which the global smoothness begins to drop, and a further increase in the input power leads to divergence of the power flow equations. Failing of power flow convergence, although arises from various issues, is an indicator of a stressed system. The presented analytical study shows that the critical load (or generation) at each bus for which the global smoothness is maximized depends on the topology. The key contributions of this article have been summarized below: * A quantitative measure of the spreadability of a perturbation has been proposed and the properties of this measure have been analyzed theoretically under the DC power flow model and verified using the AC power flow results. The proposed measure has been compared with an existing network-science-based spreadability metric. * The global smoothness of the voltage angle graph signal has been shown to follow a quadratic function of the perturbation strength with a maximum, defined as the critical perturbation. It is shown that this critical point suggests approaching the power flow model divergence, which although can arise from various issues, is an indicator of a stressed grid. * The global and local smoothness properties of the difference graph signal of bus voltage angles before and after the perturbation are examined. The analysis demonstrated that under DC plow flow assumptions these smoothness parameters are independent of the perturbation strength and thus are suitable for analyzing the effects of perturbations at different locations of the grid. The results of the simulation with the AC power flow model closely support this property. Moreover, these smoothness parameters have been identified as reliable indicators of the spreadability of perturbations based on the location. § RELATED WORK The effects of perturbations in the electrical grid have been studied from various perspectives in the literature. The stability of the grid after the perturbation, the dependency on the perturbation location, the propagation of the effect of perturbation through the system, and the identification of vulnerable locations in the grid are some of the topics of interest in this domain. A number of works analyze the effects of perturbation from the complex network perspective using the concept of Basin stability <cit.> using the frequency measurements in the grid. For example, Wolff et. al. <cit.> analyzed the effect of perturbation of a single node (bus) in the electric grid based on the Basin stability of the grid, which is evaluated in terms of the return time of the grid to the steady-state after the perturbation. This work defines perturbation as the direct change of voltage phase angle and angular frequency at the perturbed bus. Menck and Kurths <cit.> identify the weaker buses due to small perturbations in the grid based on Basin stability. The propagation of spatio-temporal signals through the system has been studied by several authors with a complex network approach. Hens et. al. <cit.> provide a generalized theoretical analysis of how spatio-temporal signals propagate in time through complex networks depending on the topology and dynamic mechanisms of interactions among the vertices. A few works also studied the spread of disturbances in the electric power grid. For example, Molner et. al. <cit.> proposed a heuristic technique to relate the spread of oscillations due to the variable renewable resources to the network structure. Nnoli and Kettemann <cit.> analyzed the propagation of disturbance in the electric grid depending on the topology of the grid, its inertia, and heterogeneity. In <cit.>, the authors considered a network-science-based approach to quantify the spreadability of a single perturbation in the grid depending on the perturbation location. The impact analysis of grid perturbations can be useful in several scenarios in the modern power grids including the integration of distributed energy resources (DERs) and electric vehicle charging stations. Although in most cases, the problems do not directly correspond to the single bus perturbation, the single perturbation analysis can be useful for the simplification of such problems. In the current literature, the issues related to the integration of EVs and DERs have been studied using various methods. For instance, Vasilij et. al. <cit.> developed a model for the worst-case analysis of the impact of placing EV charging stations in the grid, which involves observing the impact of the placement of charging stations on voltage profile and line loading. The current work presents a generalized approach to analyze the impact of a single bus perturbation in the grid. Moreover, unlike the Basin stability-based analyses, this work does not consider the frequency data and only considers the impact of the perturbation on the bus voltage angle data. The current work adds a GSP perspective to the analysis to directly impart the topology and interconnection into the analysis. § MATHEMATICAL REPRESENTATION OF PERTURBATION AND ASSOCIATED ELECTRICAL ATTRIBUTES §.§ Power System Graph Signals An electric power grid with N buses and M transmission lines has been modeled as a weighted undirected graph, 𝒢=(𝒱,ℰ, 𝒲). The buses of the grid are considered as the vertices of the 𝒱={v_1, v_2, ..., v_N}, whereas the transmission lines are considered as the edges, ℰ={e_ij: (i,j) ∈𝒱×𝒱}, and therefore, |𝒱|=N and |ℰ|=M, where |.| denotes the cardinality of the set. The element w_ij of the weight matrix, 𝒲 is the weight corresponding to the edge, e_ij. The vertices corresponding to the buses with generators (i.e. energy sources) and loads are denoted by 𝒮⊂𝒱 and ℒ⊂𝒱, respectively. The Laplacian matrix 𝐋 associated with the graph 𝒢 with elements l_ij is defined as: l_ij=∑_j=1^N w_ij, if i=j and l_ij=-w_ij, otherwise. In the GSP literature, the weights are defined in various ways, for instance, based on the geographical and physical relational aspects, depending on the applications. In this work, the weights w_ij are defined such that the Laplacian matrix, 𝐋 represents the imaginary part of the admittance matrix of the grid capturing some of the transmission line properties. The graph signal x(v_n), written as x(n) for simplicity, can be considered as a mapping of the vertices of the graph to real-number space, x:𝒱→ℝ and can represent various electrical attributes associated with the buses of the grid. The signal values of x(n) arranged in a vector form would be denoted by 𝐱. In this article, the graph signal x(n) at a particular time instant t is denoted as x(n,t). Let us consider θ(n), the bus voltage angle graph signal that represents the angles of the voltage phasors at each bus. While any or combination of electrical attributes at each bus can be considered, here the focus will be on the voltage angle graph signal θ(n) to evaluate the state of the power system without the direct information on the transient state based on the fluctuations in the voltage magnitudes and frequency. Moreover, bus voltage angle measurements are directly related to the load demands, which are important in this study. The generated real power and the real power demand at each bus are denoted by the generated power graph signal, p_g(n) and load demand graph signal, p_d(n), respectively. Note that p_g(n)=0 for n ∈𝒱∖𝒮 and p_d(n)=0 for n ∈𝒱∖ℒ. The input power graph signal is denoted by p(n), where p(n)=p_g(n)-p_d(n). §.§ DC Power Flow Model The DC power flow model <cit.> describes a linear relationship between the input power and the bus voltage angle by the equation 𝐩 = 𝐁θ, where 𝐁 is the susceptance matrix of the grid (imaginary part of the admittance matrix) with element b_ij at the i-th row and j-th column. Knowing the topology, the bus voltages can be computed from the active power input based on θ=𝐁^-1𝐩 and can be represented in a graph signal form as: θ(n) = ∑_j=1^Nβ_nj p(j), where β_ij is the element of 𝐁^-1 at the i-th row and the j-th column. In this work, the linearity of the DC power flow model facilitates analytical investigation of the properties of the graph signals. However, since the DC power flow model is an approximation of the power flow in power systems, in certain cases, the results from this model may deviate from the real scenarios. Nevertheless, the graph signal analysis with the DC power flow assumption reveals important information about the state of the system. Whenever necessary, in this work, the AC power flow model through MATPOWER <cit.> is utilized for numerical verification of the analytical results. §.§ Smoothness of Graph Signals The global smoothness of a graph signal is a measurement of the overall amount of vertex-to-vertex fluctuations in the graph signal <cit.>. The global smoothness value associated with graph signal, x(n) is defined as <cit.>: g_x = 𝐱^T 𝐋𝐱/𝐱^T 𝐱 = ∑_i=1^N ∑_j=1^N L_ij x(i) x(j)/∑_k=1^N x^2(k). A small value of g_x indicates a smooth graph signal, whereas increasing values of g_x indicate the increasing vertex-to-vertex fluctuations of signal values <cit.>. The bus voltage angle graph signal θ(n) in normal conditions is generally smooth with a small value of g_θ <cit.>. The local smoothness <cit.> of a graph signal x(n) is defined by the following equation and represents how rapidly the value of a graph signal changes from each vertex n to its neighboring vertices: l_x(n)= ∑_k=1^N L_nk x(k)/x(n), x(n) ≠ 0. Our previous analyses of the local smoothness for the bus voltage angle graph signals in power systems, in <cit.>, have revealed that the voltage angle graph signals are smoother at certain locations in the grid depending on the topology and interconnections among the components of the system. The global and local smoothness values of graph signals are important features in the vertex domain that can allow analyzing some of the behavior and properties of the signals and the system they represent. Deviation from nominal ranges of these parameters can be an indication of an anomaly <cit.>. In the power system context, the anomalies may indicate a stressed system due to cyber attacks or physical events, such as line outages, generator trips, and abrupt load changes. In our previous works, local and global smoothness of bus voltage angle graph signals (i.e., g_θ and l_θ(n)) have been utilized for the detection <cit.>, location identification <cit.>, characterization (including determining whether the stress is clustered or random, determining the stress center and radius) <cit.>, and classification <cit.> of the stresses in the power system. The current work provides a focused study on the changing pattern of global and local smoothness values of different graph signals under single bus perturbation due to, for instance, abrupt changes in load demand or generation. Through this study, the spread of the effects of perturbation in the system will also be investigated through graph signal properties. Understanding the properties of stresses and their spread can support power system monitoring and planning, for instance, for predicting the grid instability due to load and generator changes, the effects of renewable energy resources on the system state, and for analyzing the effect of loads connected through grid-following and grid-forming inverters. §.§ Single Bus Perturbation A single bus perturbation 𝒰 at the vertex (i.e., bus), v_u ∈𝒮∪ℒ is defined by an abrupt change of value in the bus input at time t_u and can be defined in the power graph signal form as: p(n, t_u) = p(n, t_u-ϵ) + Δ p_u(n), where ϵ is a very small amount of time. The perturbation graph signal Δ p_u(n) can be modeled as a Kronecker Delta <cit.> graph signal: Δ p_u(n) = γδ_u(n), where δ_u(n) is the Kronecker delta graph signal defined as δ_u(u)=1 for n = u and δ_u(n)=0, for n ≠ u and γ is a scalar called perturbation strength associated with the perturbation, 𝒰. Therefore, Δ p_u(u)=γ. A positive value of γ at the generator-only bus (i.e., v_u ∈𝒮∖ℒ) indicates an increase in generated real power while a positive value of γ at a load-only bus (i.e., v_u ∈ℒ∖𝒮) indicates an increase in the real power load demand. The value of γ in buses with both generators and loads (i.e., v_u ∈𝒮∩ℒ) can be described by the increase and decrease of both generations and loads. However, in this work, only one change at a time (i.e., either an increase or decrease in generated power or load demand) is considered. It is also assumed that the inertia of the grid is negligible in response to the perturbation, 𝒰. This assumption is reasonable for modern grids, where renewable energy resources are connected to the grid with inverters and loads are connected with converters. In this work, the effects of the perturbation 𝒰 on the voltage angle graph signal, θ(n) are evaluated. Let the difference voltage angle graph signal due to the perturbation 𝒰 at bus u at time t_u be defined as: Δθ_u(n)=|θ(n,t_u)-θ(n,t_u-ϵ)|, where ϵ is a small value. The signal values of the graph signal Δθ_u(n) have a direct relationship with the perturbation strength, γ. Therefore, for a better understanding of the dependency on the perturbation location, a normalized version of Δθ_u(n) has been considered. The normalized difference voltage angle graph signal is defined as: ψ_u(n) = Δθ_u(n)/| γ|, where ψ(n) is expressed in degree/mega-watt. Considering the DC Power flow model, the ψ_u(n) depends only on the grid topology. Proof: Substituting the definition of θ(n) from equation (<ref>) into equation (<ref>): Δθ_u(n)=| ∑_j=1^N β_nj p(j, t_u)-∑_j=1^N β_nj p(i, t_u-ϵ)| =|∑_j=1^N β_nj[p(j,t_u)-p(j, t_u-ϵ)]| =|∑_j=1^N β_njΔ p_u(j)| =|∑_j=1^N β_njγδ_u(j)| =| γβ_n u|, (using the property of Kronecker Delta <cit.>). Next, substituting Δθ_u(n) into equation(<ref>) leads to: ψ_u(n) = |γβ_nu|/|γ| = |β_nu|. This property shows that ψ_u(n) does not depend on γ, under the DC power flow assumption. The normalized difference in voltage angle before and after the perturbation depends only on the location of the perturbation. In other words, the location of the perturbation affects ψ_u(n) according to the topology of the grid, which captures the interconnections among the buses and the electrical distances between the components. Since the power system dynamics deviate from the DC power flow model, this property may not hold accurately in real power grids, nevertheless, it indicates that the effect of perturbation in the grid predominantly depends on its location rather than its strength. This property is important as it can be used for instance, for identifying the vulnerable buses with respect to perturbation issues, which is important for stability, maintenance, and resilience planning. The perturbation 𝒰 affects the bus attributes of the perturbed bus, v_u ∈𝒮∪ℒ as well as the other buses (v ∈𝒱, v ≠ v_u) in the system. The effects of the perturbation spread throughout the grid (similar to a stone causing ripples in the water). However, the effects are more complex in the power systems because of their irregular topology (i.e., non-Euclidean vertex domain) and complex interconnections based on the physics of electricity. While it is expected that the attributes of the nearby (geographical and topological) buses of the perturbed bus v_u get affected more than the far-away buses, deviation from this expectation is very common. In other words, the relationship between the geographical/topological distance and the perturbation effects is irregular. In the next section, the spreadability of the perturbation 𝒰 is studied in terms of the location of the perturbation and the perturbation strength. § EFFECTS OF SINGLE BUS PERTURBATION §.§ Spreadability of Single Bus Perturbation For analyzing the spreadability of perturbation 𝒰 in the grid in terms of the bus attributes, the changes introduced in the bus voltage angle graph signal at the buses at different hop distances from the perturbed bus, v_u are evaluated. The mean of the signal values of ψ_u(n) at all the vertices at K-hop distance from the perturbed bus v_u specifies how the buses at K-hop distance are affected on average by the perturbation. This can be expressed as: ψ̅_u^(K) = 1/|𝒩_u^(K)|∑_n ∈𝒩_u^(K)ψ_u(n), where 𝒩_u^(K)⊂𝒱 is the set of the K- hop neighbors of v_u. According to Property 1, as ψ_u(n) does not depend on the perturbation strength, ψ̅_u^(K) also does not depend on the perturbation strength under DC power flow assumptions. Under the DC Power flow assumption, the ψ̅_u^(K) depends only on the grid topology. Proof: Substituting ψ_u(n) from equation (<ref>) into equation (<ref>) results: ψ̅_u^(K) = 1/|𝒩_u^(K)|∑_n ∈𝒩_u^(K) |β_nu|. Therefore, under DC power flow model ψ̅_u^(K) does not depend upon the perturbation strength, γ, rather depends upon the perturbation location, v_u. As such, ψ̅_u^(K) can be calculated from the susceptance matrix, 𝐁^-1. Fig. <ref> shows ψ̅_100^(3), the average of the values of normalized difference voltage angle graph signal calculated from the equation θ=𝐁^-1𝐩 (DC power flow model) at K = 3- hop distance from the perturbed bus no. 100 of the IEEE 118 bus system <cit.>. It can be observed that ψ̅_100^(3) is independent of the perturbation strength, γ. The results obtained from the AC power flow model in MATPOWER show a similar property, i.e., very weak dependence of ψ̅_100^(3) on γ. For the AC model results the value shows a slight variation (around 0.008 degree/MW) from the value obtained analytically using the DC power flow model. The values of ψ̅_u^(K) show a decreasing trend as a function of K as illustrated in Fig. <ref> when calculated using the AC power flow model in MATPOWER. This behavior is expected as the effects of perturbation should spread and diminish from the source of the perturbation (i.e., v_u). Our experiments show that this decreasing trend is non-uniform over the grid and varies significantly depending on the location of the perturbation. Generally, a larger value of ψ̅_u^(K) at a far-away bus (i.e., a higher value of K) from the perturbation source indicates larger spreadability of the perturbation. Therefore, a flatter ψ̅_u^(K)vs. K curve indicates greater spreadability of the perturbation. As such, to quantify the spreadability, the slope of the best-fitted line (Fig. <ref>, red straight line) to the ψ̅_u^(K)vs. K curve is defined as the spreadability measure, s. To this end, the spreadability measure due to the perturbation, 𝒰 at bus v_u can be expressed as: s(u) = 1/𝒮[ψ̅_u^(1) , ψ̅_u^(2), …ψ̅_u^(D)], where 𝒮[ψ̅_u^(1) , ψ̅_u^(2), …ψ̅_u^(D)] denotes the negative slope of best-fitted lines to the points [ψ̅_u^(1) , ψ̅_u^(2), …ψ̅_u^(D)]. Considering the DC Power flow model, s(u) depends only on the location of perturbation, 𝒰. Proof: Since ψ̅_u^(K) is independent of γ as proved in equation (<ref>), from equation (<ref>), it can be shown that s(u) is independent of γ and only a function of the perturbation location v_u under the DC power flow assumption. Fig. <ref>(a) shows the spreadability measurement, s(u) due to perturbation in different locations, v_u ∈ℒ∪𝒮 for a fixed perturbation strength. Based on the DC power flow model, the proposed spreadability measurement, s(u) is shown to be unaffected by the perturbation strength. Meanwhile, the simulation results using the AC power flow model demonstrate a minimal dependence on the perturbation strength but a major dependence on the location of the perturbation. This observation indicates that the numerical findings align with the obtained theoretical results. Fig. <ref>(a) illustrates how the effect of load perturbation in different load buses spreads through the grid as reflected in the difference bus voltage angle graph signals. This observation can support the identification of vulnerable buses in the grid. These buses are susceptible to perturbations that can lead to more widespread effects and result in greater damage to the system. For example, from Fig. <ref>(a) it is observable that the impact of a load perturbation at bus no. 116 of the IEEE 118 bus system is more spreadable through the grid compared to load perturbations at any other bus in the system. This result provides important insight, for instance, for maintenance and protection planning in the system. The presence of vulnerable areas with high spreadability indicates that these regions may not be suitable for the integration of renewable energy sources or EVs. Due to their susceptibility to perturbation spread, these areas may pose challenges for the reliable and stable operation of renewable energy and EV infrastructure <cit.>. Next, the introduced spreadability measure s(u) in this work has been evaluated and compared with respect to the spreadability measure introduced in <cit.> based on a network-science-based approach. Here, the difference voltage angle graph signal Δθ_u has been considered as the mean displacement vector as defined in <cit.>. The spreadability measure introduced in <cit.>, is denoted as s^'(u) and is defined as: s^' (u) = C^'(u) ∑_i=1^N Δθ_u(i)/∑_j=1^N Δθ_u(j)𝒟 (v_u,v_i), where 𝒟 (v_u,v_i) is the shortest path length between the perturbed bus v_u and all the other buses v_i ∈𝒱 in the graph 𝒢^'(𝒱,ℰ). This graph is defined by ignoring the weights of the graph 𝒢 while having the same sets of vertices and edges. Moreover, the modified normalized closeness centrality, denoted by C^'(u) is defined in <cit.> as: C^'(n) = N/∑_∀ v_i ∈𝒱𝒟 (v_n,v_i). Considering the DC Power flow model, s^'(u) is independent of the perturbation strength γ. Proof: By substituting the expression of Δθ_u(n) from the equation (<ref>) to the equation (<ref>), it can be written that: s^'(u) = C^'(u) ∑_i=1^N |β_iu|/∑_j=1^N |β_ju|𝒟(v_u,v_i). The terms C^'(u) and 𝒟(v_u,v_i) are calculated from the unweighted graph 𝒢^' for a certain perturbation location v_u and therefore, depend only upon the interconnections among the buses of the grid. On the other hand, the β_ij terms are related to the electrical parameters of the transmission line. Therefore, for a particular electrical grid, s^'(u) depends on the location of the perturbation and is independent of the perturbation strength. Based on the results presented in Fig. <ref>(a) and Fig. <ref>(b), it can be observed that the proposed GSP-based spreadability measure s(n) in this work and the network-science-based spreadability measure s^'(u) from <cit.> show similarity for the 50MW of real power load perturbation at every bus of the system. The similarity of the results from these two measures is quantified by the Spearman's correlation coefficient <cit.> with the value 0.8562 with no tied rank and a p-value of 0. In addition to evaluating the spreadability due to perturbations, it is important to evaluate other graph signal properties, which may be affected by the perturbation and may encode important information about the behavior of the system under perturbation. The global smoothness of graph signals describes the variation of values over buses in an aggregated form. Next, the effects of perturbation on the global smoothness of the bus voltage angle graph signals are discussed. In this analysis, load changes are considered as the main kind of perturbation. Under the DC Power flow assumption, the global smoothness of the voltage angle graph signal is a quadratic function of the increased load. Proof: Let us start by writing the definition of the global smoothness for the voltage angle graph signal θ and use the DC power flow model to expand the definition of θ as follows: g_θ =θ^T 𝐋θ/θ^T θ =(𝐁^-1𝐩)^T𝐋(𝐁^-1𝐩)/(𝐁^-1𝐩)^T(𝐁^-1𝐩) =𝐩^T(𝐁^-1)^T𝐋𝐁^-1𝐩/𝐩^T(𝐁^-1)^T𝐁^-1𝐩 =𝐩^T𝐐𝐩/𝐩^T𝐑𝐩 Here, 𝐐=(𝐁^-1)^T𝐋𝐁^-1 and 𝐑=(𝐁^-1)^T𝐁^-1, both contain topological information and are independent of 𝐩. Since the other elements of the vector 𝐩, except the u-th element, are the same before and after the perturbation, 𝒰 (as described in Section III), g_θ is a quadratic function of the real power p(u,t_u) at the perturbed bus v_u ∈𝒱. Specifically, from equation (4) and equation (5), the global smoothness can be written as: g_θ∝ p^2(u, t_u) ⇒ g_θ∝[ p^2(u, t_u-ϵ) + γ^2 +2 γ p(u, t_u-ϵ)] ⇒ g_θ∝γ^2+2 γ p(u, t_u-ϵ) Therefore, g_θ is a quadratic function of γ. Fig. <ref> shows g_θ as a function of perturbation strength γ for load perturbation at bus no. 16 of the 118 IEEE bus system. The values of g_θ are calculated using equation (<ref>) with the values of θ(n) obtained from the AC power flow model in MATPOWER. Although Property 2 is derived under the DC power flow assumption, Fig. <ref> shows that it also holds for the AC power flow (although with some numerical deviation). The quadratic form of g_θ as the function of perturbation strength can have important implications. For instance, our experiments have shown that an increasing trend in g_θ may indicate a stressed system. Specifically, the power grid bus voltage angle graph signal is generally smooth over the vertices under normal operating conditions <cit.>. Therefore the value of g_θ generally stay small, while the actual value depends on several factors, such as the system topology, load demand, and generation amount in the system. From Fig. <ref> it can be observed that when the load is increasing continually at a particular bus, initially g_θ increases with the increasing load, which indicates increasing fluctuations of signal values from vertex-to-vertex until the perturbation strength reaches a critical point γ_c (associated with a critical load demand of p_d_c(u)) for the perturbed bus, v_u. Increasing the load beyond this critical point results in decreasing values of g_θ, which in general can indicate smoother signal and normal grid conditions. However, in this particular case, the decrease in the global smoothness after reaching its maximum suggests a stressed system, and the issue of non-convergence of the AC power flow calculations rise in this phase. Moreover, the increase of, p_d(u), i.e., γ, at the perturbed bus increases the power flow through a number of transmission lines. Increasing the size of perturbation can lead to overloading of transmission lines and outages and in severe cases cascading failures. Determining the critical value of perturbation strength, γ for smooth grid operation. The critical value of the perturbation strength, γ, which also corresponds to the critical load size at bus v_u can be identified based on the maximum values of g_θ as follows: ∂ g_θ/∂ p(u)|_p_d(u)=p_d_c(u)= ∂ g_θ/∂ p(u)|_γ=γ_c = 0 By substituting equation (<ref>) into equation (<ref>) and applying the rules of matrix differentiation: 𝐩^T 𝐐𝐩∂ g_θ/∂ p(u) (𝐩^T 𝐑𝐩) = 𝐩^T 𝐑𝐩∂ g_θ/∂ p(u) (𝐩^T 𝐐𝐩) By solving the equation for p(u) which is the same as the u-th element of 𝐩 the value of real power for which g_θ is maximum can be obtained, and therefore the critical perturbation strength γ_c can be obtained by equation (<ref>). Fig. <ref> shows g_θ for monotonous load increase at bus 17 of the IEEE 118 bus system <cit.> (which is purely a load bus). The result presented in this figure suggests that the perturbation strength of γ_c = 631.8 MW results in the maximum g_θ value and corresponds to our defined critical load. This critical load advises on a stressed system for which the power flow non-convergence based on the numerical results occurred at the perturbation strength of γ_nc = 848.9 corresponding to a load size of 853.9MW. Under the DC Power flow assumption the global smoothness of the difference voltage angle graph signal, Δθ is independent of the perturbation strength. Proof: Following the definition of global smoothness in equation (2), the global smoothness of the difference bus voltage angle graph signal Δθ_u(n) before and after the perturbation 𝒰 can be written as: g_Δθ=∑_i=1^N ∑_j=1^N L_i jΔθ_u(i) Δθ_u(j)/∑_k=1^N Δθ_u^2(k). By substituting Δθ_u(n) from the result expressed in equation (<ref>), it can be written that: g_Δθ =∑_i=1^N ∑_j=1^N L_i j|γβ_i u||γβ_j u|/∑_k=1^N|γβ_k u||γβ_k u| =∑_i=1^N ∑_j=1^N L_i j|β_i uβ_j u|/∑_k=1^N|β_k u|^2. Since there is no γ present in the right-hand side of the equation, g_Δθ does not depend on the perturbation strength, but rather depends on the topology of the system. This GSP-based property associated with both real power load perturbation (Fig. <ref>(a)) and real power generation perturbation (Fig. <ref>(b)) has been evaluated by simulations on the IEEE 118 bus system. From Fig. <ref>, it can be observed that in perturbations in both cases, power flow calculation using 𝐩 = 𝐁θ under DC power flow yields a constant function for |g_Δθ| vs. γ, which indicates the independence on the perturbation strength. The AC power flow results also justify this property, while showing a minor dependency on the perturbation strength. This property enables g_Δθ to be a GSP-based measure for evaluating the effects of perturbation in different locations in the system. Fig. <ref> shows the values for g_Δθ for a perturbation of γ=50MW in each of the load buses of the IEEE 118 bus system. From this result, it can be observed that load perturbations of the same strength at different buses have different effects in the grid, which is reflected on the graph signal Δθ(n) and its smoothness. Note that the graph signal Δθ(n) (being a difference graph signal before and after the perturbation) inherently contains some time evolution information and can help characterize the spread patterns of perturbations. This can be understood from the visual resemblance of the bar diagram of g_Δθ in Fig. <ref> with the bar diagram of our proposed spreadability measure, s(u) in Fig. <ref>. The similarity between g_Δθ and s(u) can be also justified by the cosine similarity <cit.> of 0.8281 and Spearman rank correlation co-efficient <cit.> of 0.61 for γ=50MW perturbations in all the load buses of the IEEE 118 bus system. Therefore, the GSP-based parameter g_Δθ highlights the reliance of perturbations on locations within the grid, particularly in assessing the extent to which the effects of the perturbation can spread. Similar results can be observed in the local smoothness of the graph signal Δθ_u(n). Under the DC Power flow assumption, the local smoothness of the difference voltage angle graph signal, Δθ is independent of the perturbation strength. Proof: From equation (<ref>), the local smoothness at bus n for the graph signal Δθ_u(n) (which is the difference bus voltage angle graph signal before and after the perturbation 𝒰), can be calculated as: l_Δθ(n)=∑_k=1^N L_nkΔθ_u(k)/Δθ_u(n), Δθ_u(n) ≠ 0. Substituting the Δθ_u(n) from equation (<ref>) in the above equation results in: l_Δθ(n)= ∑_k=1^N L_nk |γβ_k u|/|γβ_n u| = ∑_k=1^N L_nk |β_k u|/|β_n u|, Δθ_u(n) ≠ 0. Equation (<ref>) provides the local smoothness values of Δθ_u(n) at every vertex, v_n of the graph. The local smoothness values at the perturbed bus can be obtained by putting n=u in equation (<ref>) as: l_Δθ(u)=∑_k=1^N L_u k |β_k u|/|β_u u| , β_u u≠ 0, which is independent of the perturbation strength. The lack of dependence on perturbation strength makes l_Δθ(u) a suitable measure for analyzing the locational dependence of perturbations in the grid, similar to g_Δθ. Like g_Δθ, the local smoothness value of the difference bus voltage angle graph signal before and after the perturbation, assessed at the perturbation point, can be utilized as an estimator of the spreadability of the perturbation effect. Fig. <ref> shows the values of local smoothness at the perturbed vertices due to the same amount of load perturbation γ=50MW at each load bus of the IEEE 118 bus system. The bar diagram of l_Δθ(u) seems similar to the bar diagram of our proposed spreadability measure, s(u) for IEEE 118 bus system. The cosine similarity <cit.> and the Spearman rank correlation coefficient <cit.> between s(u) and l_Δθ(u) for 50MW perturbations are, respectively, 0.8925 and 0.66, which suggests that l_Δθ(u) can serve as a GSP-based estimator of perturbation spread. § CONCLUSION This article presents a perspective based on GSP regarding the impacts of a single bus perturbation in the electrical grid. The perturbation is characterized by a sudden change in the real-power load demand or generation. Specifically, the article investigates the effects of the perturbation by considering its spread throughout the grid. A measure of spreadability based on GSP is proposed, and it is demonstrated that both global and local smoothness measures of the difference bus voltage angle graph signal can be used as estimators of the spreadability of the perturbation. The findings indicate that the proposed measure of spreadability, along with the local and global smoothness properties of the graph signals, are not influenced by the perturbation strength. Instead, these properties primarily depend on the location of the perturbation. Furthermore, the article characterizes the global smoothness of the bus voltage angle graph signal as a quadratic function of the perturbation strength. It is shown that beyond a critical perturbation strength, the global smoothness starts to decrease, and further increases in perturbation strength may result in power flow divergence, which can be indicative of a stressed system. The present study builds upon the DC power flow model assumption and employs a simple and generic perturbation model. Nevertheless, this research offers intriguing insights into the impact of perturbations in the grid and introduces a new perspective on utilizing GSP for analyzing various problems in power systems, for instance, cascading failures, as perturbation analysis. For example, such analyses can help characterize whether a perturbation can create a cascade or define how the failures propagate relative to the location, strength, and nature of the perturbation. § ACKNOWLEDGMENT This material is based upon work supported by the National Science Foundation under Grant No. 2238658. 00 ramakrishna21 R. Ramakrishna and A. Scaglione, “Grid-Graph Signal Processing (Grid-GSP): A Graph Signal Processing Framework for the Power Grid," in IEEE Transactions on Signal Processing, vol. 69, pp. 2725-2739, 2021. hasnat22 M. A. Hasnat and M. Rahnamay-Naeini, “A Graph Signal Processing Framework for Detecting and Locating Cyber and Physical Stresses in Smart Grids," in IEEE Transactions on Smart Grid, vol. 13, no. 5, pp. 3688-3699, Sept. 2022. takiddin23 A. Takiddin, R. Atat, M. Ismail, K. Davis and E. Serpedin, “A Graph Neural Network Multi-Task Learning-Based Approach for Detection and Localization of Cyberattacks in Smart Grids," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1-5. saha22 S. S. Saha, A. Scaglione, R. Ramakrishna and N. G. Johnson, “Distribution Systems AC State Estimation via Sparse AMI Data Using Graph Signal Processing," in IEEE Transactions on Smart Grid, vol. 13, no. 5, pp. 3636-3649, Sept. 2022 hasnat22pesgm M. A. Hasnat and M. Rahnamay-Naeini, “Power System State Recovery using Local and Global Smoothness of its Graph Signals," IEEE Power & Energy Society General Meeting (PESGM), Denver, CO, USA, 2022, pp. 01-05. dabush23 L. Dabush, A. Kroizer and T. Routtenberg, “State Estimation in Partially Observable Power Systems via Graph Signal Processing Tools," in Sensors, vol. 23, no. 3, pp. 1387, Jan 2023. mendes23 M. A. Mendes, M. H. M. Paiva and O. E. Batista,“Signal processing on graphs for estimating load current variability in feeders with high integration of distributed generation," in Sustainable Energy, Grids and Networks, Vol. 34, pp. 101032, June 2023. he18 K. He, L. Stankovic, J. Liao and V. Stankovic, “Non-Intrusive Load Disaggregation Using Graph Signal Processing," in IEEE Transactions on Smart Grid, vol. 9, no. 3, pp. 1739-1747, May 2018. hasnat21isgt M. A. Hasnat and M. Rahnamay-Naeini, “Reflection of Cyber and Physical Stresses in Smart Grids on their Graph Signals," IEEE PES Innovative Smart Grid Technologies Europe (ISGT Europe), Espoo, Finland, 2021, pp. 01-05. ratnam20 K. S. Ratnam, K. Palanisamy and G. Yang, “Future low-inertia power systems: Requirements, issues, and solutions - A review," in Renewable and Sustainable Energy Reviews, Vol. 124, No. 109773, May 2020. jain14 P. Jain and T. Jain, “Impacts of G2V and V2G power on electricity demand profile," 2014 IEEE International Electric Vehicle Conference (IEVC), 2014, pp. 1-8. mullan11 J. Mullan, D. Harries, T. Braunl ad S. Whitely, “Modelling the impacts of electric vehicle recharging on the Western Australian electricity supply system," in Energy Policy, Vol. 39, No. 7, pp. 4349-4359, 2011. stamp09 J. Stamp, A. McIntyre and B. Ricardson, “Reliability impacts from cyber attack on electric power systems," IEEE/PES Power Systems Conference and Exposition, 2009, pp. 1-8. mohsenian-rad11 A. -H. Mohsenian-Rad and A. Leon-Garcia, “Distributed Internet-Based Load Altering Attacks Against Smart Power Grids," in IEEE Transactions on Smart Grid, vol. 2, no. 4, pp. 667-674, Dec. 2011. wolff18 M. F. Wolff, P. G. Lind and P. Maass, “Power grid stability under perturbation of single nodes: Effects of heterogeneity and internal nodes," in Chaos, vol. 28, no. 10, pp. 103120, October 2018. menck12 P. J. Menck and J. Kurths, “Topological Identification of Weak Points in Power Grids," Nonlinear Dynamics of Electronic Systems, Wolfenbuettel, Germany, 2012, pp. 1-4. hens19 C. Hens, U. Harush, S. Haber, R. Cohen and B. Barzel, “Spatiotemporal signal propagation in complex networks," in Nature Physics, vol. 15, no. 4, pp. 403-12, April 2019. molnar21 S. Molnar, E. Bradley and K. Gruchalla, “Oscillatory spreading and inertia in power grids," in Chaos, Vol. 12, No., 31, pp. 123103, Dec. 2021. nnoli21 K. Nnoli and S. Kettemann, “Spreading of disturbances in realistic models of transmission grids in dependence on topology, inertia and heterogeneity," in Scientific Reports, vol. 11, No. 1, pp.1-17, Dec. 2021. buttner22 A. Büttner, J. Kurths, F. Hellmann, “Ambient forcing: Sampling local perturbations in constrained phase spaces," in New Journal of Physics, vol. 24, no. 5, pp. 053019, May 2022. gauthier01 T. D. Gauthier, “Detecting trends using Spearman's rank correlation coefficient," in Environmental forensics, vol. 2, No. 4, pp.359-362, 2001. vasilj22 J. Vasilj, D. Jakus, M. Marusic and M. Relja,“Robust model for EV driven grid impact estimation," International Conference on Smart Systems and Technologies (SST), Osijek, Croatia, 2022, pp. 231-235. zimmerman11 R. D. Zimmerman, C. E. Murillo-Sánchez and R. J. Thomas, “MATPOWER: Steady-State Operations, Planning, and Analysis Tools for Power Systems Research and Education," in IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12-19, Feb. 2011. dakovic19 M. Daković, L. Stanković, and E. Sejdić. “Local smoothness of graph signals," Mathematical Problems in Engineering 2019. hasnat22a M. A. Hasnat and M. Rahnamay-Naeini, “Power System State Recovery using Local and Global Smoothness of its Graph Signals," IEEE Power & Energy Society General Meeting (PESGM), Denver, CO, USA, 2022, pp. 01-05. kundu12 P. K. Kundu, I. M. Cohen and D. R. Dowling, "Cartesian Tensors," in Fluid Mechanics, Fifth Edition, Vol., Academic Press, 2012, pp. 39-64. 118bus IEEE 118 Bus System, Illinois Center for a Smarter Electric Grid (ICSEG), June 2, 2023. [Online]. Available: https://icseg.iti.illinois.edu/ieee-118-bus-system/, (accessed June 2, 2023). sandhya23 K. Sandhya and K. Chatterjee, “Two-stage ANN based intelligent technique for optimal positioning and sizing of DERs in distribution system," in Engineering Applications of Artificial Intelligence, Vol. 121, pp. 105932 May 2023. hasnat21 M. A. Hasnat and M. Rahnamay-Naeini, “Characterization and Classification of Cyber Attacks in Smart Grids using Local Smoothness of Graph Signals," North American Power Symposium (NAPS), College Station, TX, USA, 2021, pp. 01-06. hasnat22b M. A. Hasnat and M. Naeini, “Learning Power System's Graph Signals for Cyber and Physical Stress Classification," North American Power Symposium (NAPS), Salt Lake City, UT, USA, 2022, pp. 1-6. xia2015 P. Xia, L. Zhang and F. Li, “Learning similarity with cosine similarity ensemble," in Information Sciences vol. 307 pp. 39-52, 2015.
http://arxiv.org/abs/2306.08580v2
20230614153409
Quiet point engineering for low-noise microwave generation with soliton microcombs
[ "Andrea C. Triscari", "Aleksandr Tusnin", "Alexey Tikan", "Tobias J. Kippenberg" ]
physics.optics
[ "physics.optics", "nlin.PS" ]
APS/123-QED [email protected] [email protected] ^1Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland ^2Center for Quantum Science and Engineering, EPFL, Lausanne, Switzerland Quiet point engineering for low-noise microwave generation with soliton microcombs Tobias J. Kippenberg^1,2 July 31, 2023 ================================================================================== Low-noise microwave signals can be efficiently generated with microresonator-based dissipative Kerr solitons ('microcombs')<cit.>. However, the achieved phase noise in integrated microcombs is presently several orders of magnitude above the limit imposed by fundamental thermorefractive noise <cit.>. One of the major contributors to this additional noise is the pump laser frequency noise transduction to the soliton pulse repetition rate via the Raman self-frequency shift <cit.>. Quiet points (QPs) allow minimizing the transduction of laser frequency noise to soliton group velocity <cit.>. While this method has allowed partial reduction of phase noise towards the fundamental thermodynamical limit, it relies on accidental mode crossings and only leads to very narrow regions of laser detuning where cancellation occurs <cit.>, significantly narrower than the cavity linewidth. Here we present a method to deterministically engineer the QP, both in terms of its spectral width, and position, showing an increased phase noise suppression. This is achieved using coupled high-Q resonators <cit.> arranged in the Vernier configuration <cit.>. Investigating a generalized Lugiato-Lefever equation <cit.> that accounts for the hybridized mode spectral displacement, we discover a continuum of possible QPs within the soliton existence region, characterized by ultra-low noise performance. Furthermore, we discover that by using two controlled optical mode crossings, it is possible to achieve regions where the QPs interact with each other enabling a substantial increase of the noise suppression range. Our work demonstrates a promising way to reach the fundamental limit of low-noise microwave generation in integrated microcombs. § INTRODUCTION The discovery of dissipative Kerr solitons (DKS) <cit.> in driven dissipative Kerr nonlinear resonators, has heralded a new method to synthesize coherent and broadband optical frequency combs, with compact form factor, wafer scale manufacturing compatible techniques, and mode spacings that can access the microwave to THz. The dynamics of such DKS in microresonators is described by the Lugiato-Lefever equation (LLE) <cit.>. It is by now well understood that microcombs give rise to a plethora of coherent nonlinear dynamical states, i.e. 'dissipative structures', including platicons, zero dispersion solitons, and solitons in unusual dispersion landscapes <cit.>, that challenge the understanding of 'classical' bright solitons. DKSs can crucially be generated in photonic integrated microresonators based on silicon nitride <cit.>, a foundry level, mature photonic integrated circuit technology, that has been the basis of numerous system level demonstrations, including massively parallel <cit.>, dual-comb  <cit.> and chaotic LiDARs <cit.>, neuromorphic computing <cit.>, as well as optical frequency synthesis <cit.> and optical clocks <cit.> Microcombs also enable low-noise microwave generation by detecting the repetition rate of the soliton pulse stream. Such optically generated microwaves are attractive due to the low power and potentially low phase noise that can be generated and can be employed in a variety of applications, such as microwave photonics, Radar <cit.>, 5G/6G <cit.> or wireless communications <cit.>. In contrast to optical frequency division, which employs phase stabilized femtosecond laser frequency combs, the phase noise of the generated microwaves in the case of microcombs is determined primarily by transduction of laser phase noise to the soliton group velocity. There have been numerous demonstrations of the soliton stream-based low-noise microwave generation  <cit.>, ranging from sources in the K and X microwave band using integrated Si_3N_4 microresonators <cit.> to the THz domain <cit.>. Despite achieving phase noise on par with mid-range commercial microwave generators based on quartz oscillators, the fundamental limit of the repetition rate noise, as given by thermo-refractive noise (TRN) <cit.>, is still several tens of decibels below the best experimentally demonstrated noise performance photonic chip-scale microcombs. While soliton microcombs have achieved lower noise, this has so far only been possible in bulk polished crystalline resonators, that have significantly lower TNR levels, and do not show Raman self-frequency shifts <cit.>. In contrast, for chip-integrated microcomb platforms, such as based e.g. on silica, silicon nitride, etc. the presence of the Raman self-frequency shift-related transduction noise limits achieving the fundamental thermodynamical limit <cit.>. One way to reduce this noise has been the observation of 'quiet points <cit.>, which exploit a cancellation due to the interplay of the Raman self-frequency shift and the recoil from dispersive waves (DW) due to mode crossings (AMX). However, this technique relies on accidental mode crossings and therefore is fixed to a certain mode number μ and shift Δ_μ by design. Moreover, the strength of the mode crossing is a fixed parameter that is not subject to control <cit.>. Moreover, the width of the QP has been reported to be substantially narrower than the linewidth of the microresonator cavity, implying possible second-order transduction effects. In this work, we report on a deterministic approach to QP engineering and demonstrate a substantial increase in noise suppression bandwidth. We predict that with such improvements it is possible to reach the fundamental thermodynamical limit of phase noise, and thereby substantially improve the ability to synthesize low-noise microwaves directly from optical integrated microresonators. Our results are directly realizable within currently demonstrated silicon nitride integrated microresonator technology. § RESULTS To understand the transduction of phase noise to the soliton (i.e. DKS repetition rate ω_rep) we consider Raman scattering and the DW recoil as the main noise transfer mechanisms (cf Fig. <ref>a. ), and aim to reduce the repetition rate susceptibility to the laser detuning fluctuations δω, i.e., minimize |∂ω_rep/∂δω| ∝ | ∂/∂δω(Ω_Raman + Ω_DW)| <cit.>. To introduce the presence of a DW, we consider a simplified model of AMX, characterized by a single mode displacement at position μ̅ and strength Δ_μ̅, in the integrated dispersion profile, as shown in Fig. <ref>b. As expected, the corresponding spectrum of the generated DKS has a typical sech^2 shape, frequency shifted by Ω and with spectral enhancement at μ̅, due to the DW (Fig. <ref>c). To qualitatively explain the noise reduction mechanism, we start by separately analyzing the Raman and DW contributions to the DKS repetition rate response. We consider the simplest case of sinusoidal frequency modulation of the pump detuning around a constant value δω as presented in Fig. <ref>d-i. In the presence of the Raman effect only, the DKS's group velocity is in-phase with the detuning change (c.f. Fig. <ref>d). In the nonlinear dispersion relation (NDR) representation <cit.>, the DKS dynamics has a butterfly-shaped profile, revealing the transfer of the laser detuning modulation to the soliton group velocity v_g (i.e., the tilt of the soliton line in Fig. <ref>e) that directly reflects the repetition rate change <cit.>. Equally, the NDR representation clearly shows the comb-line dependent frequency noise multiplication mechanism induced by the repetition rate variation <cit.> (i.e. phase noise multiplication). In the presence of an AMX, the laser detuning dependence of the DKS's group velocity can exhibit the opposite sign as shown in Fig. <ref>f. Represented in the NDR, the soliton forms a similar butterfly shape with enhanced photon occupancy at the displaced mode (Fig. <ref>g). These two effects, combined together, can then counteract due to the opposite dependence, resulting in the reduction of the detuning noise transfer (Fig. <ref>h,i). In the following, we identify a QP by reconstructing the group velocity manifold v_g as a function of δω and parameters of the AMX i.e., mode index μ̅ and its displacement from the unperturbed dispersion profile Δ_μ̅, further computing its extrema along the laser detuning direction. In general, this problem does not have an analytical solution, but we can efficiently reconstruct the desired dependence using a semi-analytic approach based on the Newton-Raphson method, which we describe in detail below. §.§ Mean-field model for noise transduction We model the DKS dynamics in the microresonator using the LLE with a modified dispersion profile and a Raman scattering term. In the normalized units, the generalized LLE takes the form ∂ψ/∂ t = -(1+iζ_0)ψ + i/2∂_θ^2ψ +i|ψ|^2ψ + f + v_g ∂_θψ-iΔ_μ̅ψ_μ̅e^im̅θ -iτψ∂_θ|ψ|^2. Here we use a standard normalization of the LLE (see Methods and Ref. <cit.> for further details) which rescales fast time (intracavity angle ϕ) term as θ=√(κ/2D_2)φ. In this way, the mode index becomes non-integer m = μ√(2D_2/κ), where μ — is an integer mode index, κ is the total loss rate, and D_2 is the second-order integrated dispersion. Terms in the second line of Eq. <ref>, that extend the conventional form of the LLE, represent group velocity change v_g, modification of the D_int by the AMX at the mode μ̅, and the Raman scattering, respectively. This formulation of the LLE is essential for the stationary solution search with the Newton-Raphson algorithm, as explained in Methods. §.§ QP achieved with a single-mode displacement To investigate the DKS group velocity response to the detuning variations ζ_0, we look for a single-soliton equilibrium solution ψ_DKS and its relative group velocity v_g for the parameter set (f,ζ_0,μ̅,Δ_μ̅) using the Newton-Raphson approach for Eq. (<ref>) described in details in Appendix <ref>. To reduce the dimensionality of the parameter space, we fix the pump power f^2=6, |μ̅|=21, and sweep the detuning value within the soliton existence range (given by π^2f^2/8 <cit.> in the unperturbed case) and the Δ_μ̅ in the vicinity of the dispersive wave resonance. As a result, we obtain a soliton solution and the corresponding group velocity v_g for every point on the (Δ_μ̅,ζ_0)-subspace, both for blue- (μ̅ < 0) and red-side (μ̅ > 0) mode shifts (cf. Fig. <ref>). First, we focus on the blue-sided displacement (i.e. μ̅<0). The presence of the shifted mode results in the generation of the DW (see Fig. <ref>a), whose strength depends on the phase matching condition with the DKS. The acquired group velocity due to the recoil increases in the vicinity of the DW resonance as shown in Fig. <ref>b. After a given value of detuning, the DW destabilizes the DKS and the equilibrium state cannot be achieved anymore (the absence of the soliton solution is depicted in black in Fig. <ref>b). Increasing the normalized mode shift strength Δ_μ̅, we observe that the effect of the DW on the soliton is substantially reduced and the soliton existence range approaches the value estimated for the unperturbed LLE (see white dashed line in Fig. <ref>b). Next, we compute the group velocity directional derivative ∂_ζ_0 v_g shown in Fig. <ref>c. As a result, we observed a family of solutions with ∂_ζ_0 v_g=0 that correspond to the QPs. Crucially, due to the lack of control, prior experimental works have reported only a slice (vertical line cut) of the map for the fixed Δ_μ̅ as depicted in Fig. <ref>d. Dashed lines represent the interpolated group velocity as a function of detuning ζ_0 for two values Δ_μ̅=-6.85, -5 while the solid lines represent the response ∂_ζ_0 v_g on a logarithmic scale (directly reflecting the noise transduction). For both values Δ_μ̅ there are two points with zero derivative ∂_ζ_0 v_g. We followed the same procedure for the red-side mode displacement μ̅>0 (same side as for the Raman frequency shift) and observed similar behavior for the soliton states, group velocity, and its derivative (Fig. <ref>(e-h)). Qualitatively, the soliton profile and its existence range remain the same as in the previous case, but the QP line is shifted now towards the higher mode-displacement amplitudes where the soliton existence range is narrowed. §.§ Two-mode displacement for enhanced QP engineering Next, we investigate the region where the two QPs (for displaced modes on the blue and the red side of the pump) can co-exist and interact. First, as an example of the novel dynamics, we fix the displaced mode index μ̅=-21 and the displacement strength Δ_μ̅=-5.00 scanning the displacement Δ_-μ̅, of the mode μ̅' = -μ̅ for different detunings ζ_0. The Newton-Raphson results for the single soliton state are shown in Fig. <ref>(a-d). As in the case of a single-mode displacement, the DKS coexists with a single-period DW background (the periodicity is given by |μ̅|). In this case, we discovered that the single soliton solution exists for Δ_-μ̅<-7. For large negative displacements (Δ_-μ̅≈ -20), μ̅' is out of resonance and the QP detuning value corresponds to the one in Fig. <ref>c. Reducing the displacement |Δ_-μ̅|, the soliton starts being resonant also to mode μ̅' resulting in an effective bending of the QP line, converging to the single mode one for red-shifted mode (Fig. <ref>.f). While in the case of a single-mode displacement, the QP line is always tilted (see Fig <ref>c,g) which narrows down the noise suppression region for a fixed value of Δ_μ̅, displacing two modes, we are crucially able to engineer a flat susceptibility over a wide range of laser detunings ζ_0. We refer to this state of the system as engineered QP (EQP). The flat susceptibility region is achieved at Δ_-μ̅=-12.52 (cf. Fig. <ref>c). In Fig. <ref>d, we compare v_g and its susceptibility ∂_ζ_0 v_g for the single mode displacement (green lines in Fig. <ref>d) with the case in Fig. <ref>c (gray lines). The latter clearly shows a flatter response profile that can be practically beneficial for accessing the QP regime. The effect of this is depicted in Figure 3 d, which shows an order of magnitude broadening of the QP detuning bandwidth. We repeat the same procedure, fixing the mode μ̅=21 with Δ_μ̅=-10.7 and shifting the mode μ̅' = - μ̅ by Δ_-μ̅. Simulation results in Fig. <ref>(e-h) show qualitatively similar behavior, with shorter DKS existence range (defined by the fixed mode μ̅=21, see Fig. <ref>f). However, in this case, we find two QP families for a single DKS solution. The flattest response is achieved at Δ_-μ̅=-3.74 (see Fig. <ref>(g,h)). In this way, we observe that careful control over the two-mode displacement can extend the noise suppression region of the QP in the parameter space. §.§ Linear stability of the solutions Having derived above the equilibrium value of the soliton group velocity and its relative soliton state, for different mode shifts and having observed QP lines in various configurations using the Newton-Raphson method, a crucial question arises that we answer next: are the observed states stable and/or is there any additional instability region? To estimate the stability of the equilibrium solutions, we perform linear stability analysis, numerically investigating the eigenvalues λ of the Jacobian operator associated with (<ref>) for each particular soliton state found in the previous section (see Methods). In particular, we focus on the eigenvalues with the greatest real part, since, if positive, those are responsible for the linear growth of any perturbation around the equilibrium. The real part of the latter (max{λ}) are plotted in Fig.<ref>.a-d. We observe that the soliton solutions are linearly stable almost everywhere in the considered subspace, and in particular at the QPs. Exceptions are for a narrow region in correspondence with the reduced existence range (reminding an instability tongue). In those regions there exist at least one eigenvalue with positive real part. From the actual structure of the spectrum of the Jacobian, computed for the quiet point states, (Fig.<ref>.e-h) we find that these instabilities are due to Hopf bifurcations, characterized by the vanishing real parts of a pair of the complex conjugated soft modes. In this region of parameter space, those will be responsible for the transition from stable solitons to breathing states. §.§ Dynamical simulation of the phase noise transfer To compare the phase noise performance of different operating points, the dynamical evolution has been simulated with the step-adaptative Dormand-Prince Runge-Kutta method of Order 8(5,3) <cit.>. We perform the direct dynamical simulations of the LLE adding a realistic noise to the detuning term measured experimentally from the Topical CTL 1550 laser having a standard deviation of 5 kHz (see Methods). In this way, we simulate two DKS operating points: QP1 (see Fig. <ref>.b) and EQP1 (see Fig. <ref>b). The difference between the phase noise transfer performance at different operating points can be clearly seen in Fig. <ref>a,b. A series of numerical experiments confirm the conclusion obtained with the stationary solver analysis. Indeed, changing the central frequency of the pump by the value of 10^-3 κ, we clearly observe that the overall noise transduction from the pump to the soliton repetition rate (PM2PM at 10 kHz offset) performance of EQP1 increases by 0.5 dB for 3 · 10^-3 κ, while in the case of QP_1, we observe > 28 dB of the transduction enhancement. Corresponding single-sideband (SSB) phase noise performance is depicted in Fig. <ref>c. As one can see, the fluctuations of the central frequency of the pump laser do not visibly affect the performance of the system at EQP1. Next, we verify that in the case of the noisy pump lasers (standard deviation of 8% κ/2) EPQ leads to a significant noise reduction due to the larger noise suppression region. We employed the same phase noise data, scaling it to obtain the detuning fluctuation of the order of the width of the standard QP, i.e. 8% κ/2. In the parameter regime outside of the QP region the influence of the pump fluctuations on the DKS dynamics is visible directly from the spatiotemporal diagram (see Fig. <ref>d). To distinguish the performance of QP1 and EQP1, we can use the NDR to represent its effect (Fig. <ref>e). We observe a clear suppression of the noise multiplication in the case of EQP1. This confirms our predictions based on the group velocity variation obtained with the stationary solution solver. § CONCLUSIONS AND DISCUSSION In summary, we have demonstrated a method to increase the effectiveness of QPs, that are central to achieving low phase noise soliton microcombs for microwave generation. Our work shows that engineering QP introduced via two dedicated and controllable mode crossings enables one to create broader regions of enhanced noise suppression. Our work is directly implementable using current technology and provides a new approach to the enduring challenge of obtaining thermal noise-limited micro-wave generation from integrated soliton microcombs, which in contrast to crystalline resonators employ materials such as silicon nitride or silica, that do exhibit a Raman self-frequency shift. These results were obtained via a semi-analytical approach, based on the Newton-Raphson method, studied the phenomenology of QP in the presence of Raman scattering, dispersive waves, and detuning noise, within a simplified model of AMX. This allowed us to obtain several insights (i) QPs can be achieved by placing AMX on both blue- and red-detuned sides of the pump. This highlights the fact that not the absolute value of the frequency shift must be compensated, but its derivative over the laser detuning. (ii) Engineering the interaction of two QPs leads to further reduction of the noise transfer. (iii) The EQPs predicted in this work are linearly stable and characterized by more than 28 dB reduction of the PM2PM coefficient with respect to a generic QP described in the literature when a detuning deviation of the order of 0.03% of κ is introduced. We anticipate that the detuning-dependent variation of the repetition rate can be completely eliminated by further controlling the integrated dispersion profile for example by corrugating the microresonator circumference <cit.>, which is however outside the scope of this work. This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0250. The authors thank Savyaraj Deshmukh for the critical reading of the manuscript. § APPENDIX A: MEAN FIELD MODEL The modified LLE, presented in the main text, which accounts for the Raman scattering and effect induced by the AMX: ∂ψ/∂ t = -(1+iζ_0)ψ + i/2∂_θ^2ψ +i|ψ|^2ψ + f + v_g ∂_θψ-iΔ_μ̅ψ_μ̅e^im̅θ -iτψ∂_θ|ψ|^2 , where t=t'/2τ_ph is the time normalized to photon lifetime τ_ph=1/κ, κ=κ_0+κ_ex is the total linewidth of the cavity composed of the internal linewidth κ_0 and the coupling to the bus waveguide κ_ex. The normalized laser-cavity detuning is ζ_0=2δω/κ, the fast time is defined as θ=√(κ/2D_2)φ with D_2 as the GVD and φ∈ [-π,π] as the azimuthal coordinate inside the cavity. μ̅ indicates the shifted mode number and ψ_μ̅ is the amplitude of the displaced mode. We point out that, due to the normalization of the fast time coordinate, the mode numbers m are not integer, however, they are related to the actual comb line index μ by a simple multiplication factor, i.e. m = μ√(2D_2/κ). Thus, referring to an integer μ, we imply the comb line index μ is associated with. The normalized pump power is f=√(8κ_exg_0/κ^3)s_in where g_0 is single photon Kerr frequency shift and s_in=√(P_in/ħω) and where |s_in|^2 is the laser photon flux. The last two terms in Eq. (<ref>) describe the single-mode AMX and the Raman scattering respectively. The normalized mode displacement is defined as Δ_μ̅δ_μ, μ̅ = 2(D_int(μ)-D_2 μ^2/2)/κ, with δ_μ,μ̅ as Kronecker delta, and the normalized Raman shock time is τ=τ_RD_1√(κ/2D_2). The single-mode AMX term comes directly from the modified dispersion profile (e.g., Fig. <ref>b) as follows from ℱ^-1[(μ^2/2 + Δ_μ̅δ_μ,μ̅) ψ_μ] = -1/2∂^2_θψ +Δ_μ̅ψ_μ̅e^im̅θ, where ℱ^-1[...] stands for the inverse Fourier transform. We simulate a system with the following parameters: κ/(2D_2) = 76.923, τ_R D_1 = 0.0006. § APPENDIX B: NEWTON-RAPHSON METHOD FOR THE QP The stationary solitons state and their respective value of group velocity have been computed from Eq. (<ref>) applying the Newton-Raphson method. This method is used to compute the solutions of a nonlinear (in principle vectorial) equation of the form of 𝐅() = 0, by applying the following iterative scheme: _k+1 = _k - J^-1(_k)𝐅(_k) _0: Initial guess Where J^-1 is the inverse of the Jacobian matrix of the function 𝐅 <cit.>, i.e. J_i,j = ∂ F_i/∂_j If the initial condition has been chosen correctly and the Jacobian remains invertible, the algorithm will converge to the desired solution ^⋆, fixed point of (<ref>), i.e. ^⋆ = ^⋆- J^-1(^⋆)𝐅(^⋆) and so characterised by 𝐅(^⋆) = 0. In our case, we exploit the method to find stationary single soliton solutions, ψ, of Eq. (<ref>) and the corresponding group velocity v_g by adding it explicitly as an optimization variable. In this case equation (<ref>) is set as the following: 𝐅(ψ,ψ^*,v_g) := [ g(ψ,ψ^*,v_g); g^*(ψ,ψ^*,v_g); {∂_θψ}|_θ = θ_max ]=0 where g(ψ,ψ^*,v_g) is the r.h.s of equation (<ref>). To implement the iterative algorithm from Eq. (<ref>) for equation (<ref>), we define Ψ := (ψ,ψ^*,v_g) and rewrite the function 𝐅(Ψ) as a formal matrix product 𝐅(Ψ) = F (Ψ)Ψ≡[ F_1,1 F_1,2 F_1,3; F_2,1 F_2,2 F_2,3; F_3,1 F_3,2 F_3,3; ][ ψ; ψ^*; v_g ] where F ( ) is a 3×3 matricial operator defined by the following equations : F_1,1 = -1-i(ζ_0 - 1/2∂_θ^2 + Δ_μ̅ e^iμ̅θℱ̂_μ̅) F_1,2 = iψ^2(1 - τ∂_θ) - iτψ∂_θψ F_1,3 = F_2,3 = ∂_θ F_2,2 = (F_1,1)^* F_2,1 = (F_1,2)^* F_3,1 = F_3,2 = 1/2∫_ℛdθδ(θ - θ_max)∂_θ F_3,3 = 0 ℱ̂ψ = ∫ dθψ e^-iμθ = ψ_μ ℱ̂^-1ψ_μ = ∑_μψ_μ e^i μθ = ψ ℱ̂_μ̅ψ = ∑_μδ_μ,μ̅ℱ̂ψ = ψ_μ̅ In this way, the iterative equation in (<ref>) can be rewritten as JΨ_k+1 = (J - F) Ψ_k that can be numerically solved for Ψ_k+1. Thus, the Jacobian in the rotating frame takes form J( Ψ)= F( Ψ) + [ Δ̂( Ψ) 0 0; 0 Δ̂^*( Ψ) 0; 0 0 0 ] where: Δ̂(Ψ) := 2i|ψ|^2+v_g∂_θ -iτ(∂_θ |ψ|^2 + |ψ|^2∂_θ + ψ∂_θψ^*). We implement the differential operators appearing in the matrices entries using the discrete Fourier transform matrix (DFT matrix, <cit.>), which allows us to express those operators in the Fourier space where they are represented algebraically. To control the numeracal convergence of the algorithm, we use the standard measure √(‖Ψ_k+1 - Ψ_k‖^2_2/‖Ψ_k‖^2_2) < 10^-6 where ‖·‖_2 is the L^2-norm. To avoid discretization problems, (especially for the computation of the correct spectrum of the Jacobian), we discretized the envelop function ψ in N_ψ=2^10 number of points. Finally, some comment on the choice of the initial condition Ψ_0 is required. Despite its power, this method, being the simplest of its kind, suffers from a small convergence radius. That means the initial condition must be already very close to the actual one. To overcome the problem, we used a numerical DKS solution from dynamical simulations as a guess solution with zero group velocity. For the subsequent points, we used instead as seeds the converged solution from the closest point in the detuning direction. § APPENDIX C: DYNAMICAL SIMULATION OF THE NOISE TRANSDUCTION The dynamical simulations have been carried out with a the step-adaptative Dormand-Prince Runge-Kutta method of Order 8(5,3) <cit.>, hard seeded an approximate DKS solution. The input pump phase noise has been obtained from a linearization of the data of the power spectral density data of Toptica CTL 1550 Laser, S^in_ϕ. In particular, it has been implemented through a detuning noise term ζ_0(t) obtained as ζ_0(t) = αℱ̂_̂t̂( √(ν^2 S^in_ϕ(ν)) e^i x(ν) ) where a uniformly distributed random phase x(ν) has been added to each frequency to obtain a random realization of the detuning noise and coefficient α normalizes the standard deviation of the pump detuning. Output field in real units for the case of critical coupling (κ_ex = κ_0): P_out(ϕ,t) = ħω_0κ_ex^2/g_0|f - ψ|^2 Spectrogram: P_μ(t) = ℱ̂_μP_out where, ℱ̂_μ is the operator taking the μ-th Fourier component of a function (the power in this case), as defined in the previous section. The phase of the first comb line (repetition rate phase): ϕ(t) = arg[P_1(t)], where arg[P_1(t)] denotes the phase of the first complex Fourier component of the detected optical power. The spectrum of phase noise: S_ϕ(ν) = |ℱ̂_νϕ(t)|^2 The transduction coefficient has been computed as: PM2PM = 10log_10S_ϕ/S^in_ϕ We point out in this analysis we assume an ideal photodetector, neglecting so its actual response function. As for the Newton-Raphson method (see Appendix <ref>), we discretized the fast time axis (i.e. azimuthal coordinate) in N_ψ= 2^10 points while the slow time in N_t = 20000 points. In addition, in order to obtain a sensitivity of the order of the kHz, we simulate the soliton dynamics for 1 ms. 45 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Kippenberg et al.(2018)Kippenberg, Gaeta, Lipson, and Gorodetsky]kippenberg2018dissipative author author T. J. Kippenberg, author A. L. Gaeta, author M. Lipson, and author M. L. Gorodetsky, title title Dissipative kerr solitons in optical microresonators, @noop journal journal Science volume 361, pages eaan8083 (year 2018)NoStop [Liu et al.(2020a)Liu, Lucas, Raja, He, Riemensberger, Wang, Karpov, Guo, Bouchand, and Kippenberg]liu2020PhotonicMicrowaveGeneration author author J. Liu, author E. Lucas, author A. S. Raja, author J. He, author J. Riemensberger, author R. N. Wang, author M. Karpov, author H. Guo, author R. Bouchand, and author T. J. Kippenberg, title title Photonic microwave generation in the X- and K-band using integrated soliton microcombs, https://doi.org/10.1038/s41566-020-0617-x journal journal Nat. Photonics volume 14, pages 486 (year 2020a)NoStop [Yi et al.(2016)Yi, Yang, Yang, and Vahala]yi2016TheoryMeasurementSoliton author author X. Yi, author Q.-F. Yang, author K. Y. Yang, and author K. Vahala, title title Theory and measurement of the soliton self-frequency shift and efficiency in optical microcavities, https://doi.org/10.1364/OL.41.003419 journal journal Opt. Lett. volume 41, pages 3419 (year 2016)NoStop [Karpov et al.(2016)Karpov, Guo, Kordts, Brasch, Pfeiffer, Zervas, Geiselmann, and Kippenberg]karpov2016RamanSelfFrequencyShift author author M. Karpov, author H. Guo, author A. Kordts, author V. Brasch, author M. H. P. Pfeiffer, author M. Zervas, author M. Geiselmann, and author T. J. Kippenberg, title title Raman Self-Frequency Shift of Dissipative Kerr Solitons in an Optical Microresonator, https://doi.org/10.1103/PhysRevLett.116.103902 journal journal Phys. Rev. Lett. volume 116, pages 103902 (year 2016)NoStop [Yi et al.(2017a)Yi, Yang, Zhang, Yang, Li, and Vahala]yi2017SinglemodeDispersiveWavesa author author X. Yi, author Q. F. Yang, author X. Zhang, author K. Y. Yang, author X. Li, and author K. Vahala, title title Single-mode dispersive waves and soliton microcomb dynamics, journal journal Nature Communications volume 8, https://doi.org/10.1038/ncomms14869 10.1038/ncomms14869 (year 2017a), https://arxiv.org/abs/1610.08145 arXiv:1610.08145 NoStop [Lucas et al.(2017)Lucas, Guo, Jost, Karpov, and Kippenberg]lucas2017DetuningdependentPropertiesDispersioninduced author author E. Lucas, author H. Guo, author J. D. Jost, author M. Karpov, and author T. J. Kippenberg, title title Detuning-dependent properties and dispersion-induced instabilities of temporal dissipative Kerr solitons in optical microresonators, https://doi.org/10.1103/PhysRevA.95.043822 journal journal Phys. Rev. A volume 95, pages 043822 (year 2017)NoStop [Tikan et al.(2021)Tikan, Riemensberger, Komagata, Hönl, Churaev, Skehan, Guo, Wang, Liu, Seidler, and Kippenberg]tikan2021emergent author author A. Tikan, author J. Riemensberger, author K. Komagata, author S. Hönl, author M. Churaev, author C. Skehan, author H. Guo, author R. N. Wang, author J. Liu, author P. Seidler, and author T. J. Kippenberg, title title Emergent nonlinear phenomena in a driven dissipative photonic dimer, https://doi.org/10.1038/s41567-020-01159-y journal journal Nature Physics volume 17, pages 604 (year 2021)NoStop [Tikan et al.(2022a)Tikan, Tusnin, Riemensberger, Churaev, Ji, Komagata, Wang, Liu, and Kippenberg]tikan2022protected author author A. Tikan, author A. Tusnin, author J. Riemensberger, author M. Churaev, author X. Ji, author K. Komagata, author R. N. Wang, author J. Liu, and author T. J. Kippenberg, title title Protected generation of dissipative Kerr solitons in supermodes of coupled optical microresonators, https://doi.org/10.1126/sciadv.abm6982 journal journal Science Advances volume 8, pages eabm6982 (year 2022a)NoStop [Helgason et al.(2021a)Helgason, Arteaga-Sierra, Ye, Twayana, Andrekson, Karlsson, Schröder, and Torres-Company]helgason2021dissipative author author Ó. B. Helgason, author F. R. Arteaga-Sierra, author Z. Ye, author K. Twayana, author P. A. Andrekson, author M. Karlsson, author J. Schröder, and author V. Torres-Company, title title Dissipative solitons in photonic molecules, @noop journal journal Nature Photonics volume 15, pages 305 (year 2021a)NoStop [Lugiato et al.(2018)Lugiato, Prati, Gorodetsky, and Kippenberg]lugiato2018lugiato author author L. Lugiato, author F. Prati, author M. Gorodetsky, and author T. Kippenberg, title title From the lugiato–lefever equation to microresonator-based soliton kerr frequency combs, @noop journal journal Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences volume 376, pages 20180113 (year 2018)NoStop [Herr et al.(2014)Herr, Brasch, Jost, Wang, Kondratiev, Gorodetsky, and Kippenberg]herr2014temporal author author T. Herr, author V. Brasch, author J. D. Jost, author C. Y. Wang, author N. M. Kondratiev, author M. L. Gorodetsky, and author T. J. Kippenberg, title title Temporal solitons in optical microresonators, @noop journal journal Nature Photonics volume 8, pages 145 (year 2014)NoStop [Yu et al.(2021)Yu, Cole, Jung, Moille, Srinivasan, and Papp]yu2021spontaneous author author S.-P. Yu, author D. C. Cole, author H. Jung, author G. T. Moille, author K. Srinivasan, and author S. B. Papp, title title Spontaneous pulse formation in edgeless photonic crystal resonators, @noop journal journal Nature Photonics volume 15, pages 461 (year 2021)NoStop [Helgason et al.(2021b)Helgason, Arteaga-Sierra, Ye, Twayana, Andrekson, Karlsson, Schröder, and Victor Torres-Company]helgasonDissipativeSolitonsPhotonic2021 author author O. B. Helgason, author F. R. Arteaga-Sierra, author Z. Ye, author K. Twayana, author P. A. Andrekson, author M. Karlsson, author J. Schröder, and author Victor Torres-Company, title title Dissipative solitons in photonic molecules, https://doi.org/10.1038/s41566-020-00757-9 journal journal Nature Photonics volume 15, pages 305 (year 2021b)NoStop [Brasch et al.(2016)Brasch, Geiselmann, Herr, Lihachev, Pfeiffer, Gorodetsky, and Kippenberg]Brasch2016-kp author author V. Brasch, author M. Geiselmann, author T. Herr, author G. Lihachev, author M. H. P. Pfeiffer, author M. L. Gorodetsky, and author T. J. Kippenberg, title title Photonic chip-based optical frequency comb using soliton cherenkov radiation, @noop journal journal Science volume 351, pages 357 (year 2016)NoStop [Riemensberger et al.(2020)Riemensberger, Lukashchuk, Karpov, Weng, Lucas, Liu, and Kippenberg]riemensberger2020massively author author J. Riemensberger, author A. Lukashchuk, author M. Karpov, author W. Weng, author E. Lucas, author J. Liu, and author T. J. Kippenberg, title title Massively parallel coherent laser ranging using a soliton microcomb, @noop journal journal Nature volume 581, pages 164 (year 2020)NoStop [Lukashchuk et al.(2022)Lukashchuk, Riemensberger, Karpov, Liu, and Kippenberg]Lukashchuk2021Dual author author A. Lukashchuk, author J. Riemensberger, author M. Karpov, author J. Liu, and author T. J. Kippenberg, title title Dual chirped micro-comb based parallel ranging at megapixel-line rates, @noop journal journal Nature Communications volume 13, pages 3280 (year 2022)NoStop [Lukashchuk et al.(2021)Lukashchuk, Riemensberger, Tusnin, Liu, and Kippenberg]Lukashchuk2021Chaotic author author A. Lukashchuk, author J. Riemensberger, author A. Tusnin, author J. Liu, and author T. Kippenberg, title title Chaotic micro-comb based parallel ranging, @noop journal journal arXiv preprint arXiv:2112.10241 (year 2021)NoStop [Feldmann et al.(2021)Feldmann, Youngblood, Karpov, Gehring, Li, Stappers, Le Gallo, Fu, Lukashchuk, Raja et al.]Feldmann2021Parallel author author J. Feldmann, author N. Youngblood, author M. Karpov, author H. Gehring, author X. Li, author M. Stappers, author M. Le Gallo, author X. Fu, author A. Lukashchuk, author A. S. Raja, et al., title title Parallel convolutional processing using an integrated photonic tensor core, @noop journal journal Nature volume 589, pages 52 (year 2021)NoStop [Xu et al.(2021)Xu, Tan, Corcoran, Wu, Boes, Nguyen, Chu, Little, Hicks, Morandotti et al.]xu202111 author author X. Xu, author M. Tan, author B. Corcoran, author J. Wu, author A. Boes, author T. G. Nguyen, author S. T. Chu, author B. E. Little, author D. G. Hicks, author R. Morandotti, et al., title title 11 tops photonic convolutional accelerator for optical neural networks, @noop journal journal Nature volume 589, pages 44 (year 2021)NoStop [Spencer et al.(2018)Spencer, Drake, Briles, Stone, Sinclair, Fredrick, Li, Westly, Ilic, Bluestone, Volet, Komljenovic, Chang, Lee, Oh, Suh, Yang, Pfeiffer, Kippenberg, Norberg, Theogarajan, Vahala, Newbury, Srinivasan, Bowers, Diddams, and Papp]Spencer2018-oc author author D. T. Spencer, author T. Drake, author T. C. Briles, author J. Stone, author L. C. Sinclair, author C. Fredrick, author Q. Li, author D. Westly, author B. R. Ilic, author A. Bluestone, author N. Volet, author T. Komljenovic, author L. Chang, author S. H. Lee, author D. Y. Oh, author M.-G. Suh, author K. Y. Yang, author M. H. P. Pfeiffer, author T. J. Kippenberg, author E. Norberg, author L. Theogarajan, author K. Vahala, author N. R. Newbury, author K. Srinivasan, author J. E. Bowers, author S. A. Diddams, and author S. B. Papp, title title An optical-frequency synthesizer using integrated photonics, @noop journal journal Nature volume 557, pages 81 (year 2018)NoStop [Papp et al.(2014)Papp, Beha, Del'Haye, Quinlan, Lee, Vahala, and Diddams]Papp:14 author author S. B. Papp, author K. Beha, author P. Del'Haye, author F. Quinlan, author H. Lee, author K. J. Vahala, and author S. A. Diddams, title title Microresonator frequency comb optical clock, https://doi.org/10.1364/OPTICA.1.000010 journal journal Optica volume 1, pages 10 (year 2014)NoStop [Khan et al.(2010)Khan, Shen, Xuan, Zhao, Xiao, Leaird, Weiner, and Qi]Khan2010-uu author author M. H. Khan, author H. Shen, author Y. Xuan, author L. Zhao, author S. Xiao, author D. E. Leaird, author A. M. Weiner, and author M. Qi, title title Ultrabroad-bandwidth arbitrary radiofrequency waveform generation with a silicon photonic chip-based spectral shaper, @noop journal journal Nat. Photonics volume 4, pages 117 (year 2010)NoStop [Lima et al.(2022)Lima, Borges, Andriolli, Conforti, Contestabile, and Sodré]Lima2022-nk author author E. S. Lima, author R. M. Borges, author N. Andriolli, author E. Conforti, author G. Contestabile, and author A. C. Sodré, Jr, title title Integrated optical frequency comb for 5G NR xhauls, @noop journal journal Sci. Rep. volume 12, pages 16421 (year 2022)NoStop [Rappaport et al.(2011)Rappaport, Murdock, and Gutierrez]rappaport2011state author author T. S. Rappaport, author J. N. Murdock, and author F. Gutierrez, title title State of the art in 60-ghz integrated circuits and systems for wireless communications, @noop journal journal Proceedings of the IEEE volume 99, pages 1390 (year 2011)NoStop [Savchenkov et al.(2008)Savchenkov, Rubiola, Matsko, Ilchenko, and Maleki]savchenkov2008PhaseNoiseWhisperinga author author A. A. Savchenkov, author E. Rubiola, author A. B. Matsko, author V. S. Ilchenko, and author L. Maleki, title title Phase noise of whispering gallery photonic hyper-parametric microwave oscillators, https://doi.org/10.1364/OE.16.004130 journal journal Opt. Express volume 16, pages 4130 (year 2008)NoStop [Papp and Diddams(2011)]papp2011SpectralTemporalCharacterization author author S. B. Papp and author S. A. Diddams, title title Spectral and temporal characterization of a fused-quartz-microresonator optical frequency comb, https://doi.org/10.1103/PhysRevA.84.053833 journal journal Phys. Rev. A volume 84, pages 053833 (year 2011)NoStop [Li et al.(2012)Li, Lee, Chen, and Vahala]li2012LowPumpPowerLowPhaseNoiseMicrowave author author J. Li, author H. Lee, author T. Chen, and author K. J. Vahala, title title Low-Pump-Power, Low-Phase-Noise, and Microwave to Millimeter-Wave Repetition Rate Operation in Microcombs, https://doi.org/10.1103/PhysRevLett.109.233901 journal journal Phys. Rev. Lett. volume 109, pages 233901 (year 2012)NoStop [Liang et al.(2015)Liang, Eliyahu, Ilchenko, Savchenkov, Matsko, Seidel, and Maleki]liang2015HighSpectralPurity author author W. Liang, author D. Eliyahu, author V. S. Ilchenko, author A. A. Savchenkov, author A. B. Matsko, author D. Seidel, and author L. Maleki, title title High spectral purity Kerr frequency comb radio frequency photonic oscillator, https://doi.org/10.1038/ncomms8957 journal journal Nature Communications volume 6, pages 1 (year 2015)NoStop [Weng et al.(2019)Weng, Lucas, Lihachev, Lobanov, Guo, Gorodetsky, and Kippenberg]Weng2019 author author W. Weng, author E. Lucas, author G. Lihachev, author V. E. Lobanov, author H. Guo, author M. L. Gorodetsky, and author T. J. Kippenberg, title title Spectral purification of microwave signals with disciplined dissipative kerr solitons, https://doi.org/10.1103/PhysRevLett.122.013902 journal journal Physical Review Letters volume 122, pages 13902 (year 2019)NoStop [Stone and Papp(2020)]stone2020HarnessingDispersionSoliton author author J. R. Stone and author S. B. Papp, title title Harnessing Dispersion in Soliton Microcombs to Mitigate Thermal Noise, https://doi.org/10.1103/PhysRevLett.125.153901 journal journal Phys. Rev. Lett. volume 125, pages 153901 (year 2020)NoStop [Liu et al.(2020b)Liu, Lucas, Raja, He, Riemensberger, Wang, Karpov, Guo, Bouchand, and Kippenberg]liuPhotonicMicrowaveGeneration2020a author author J. Liu, author E. Lucas, author A. S. Raja, author J. He, author J. Riemensberger, author R. N. Wang, author M. Karpov, author H. Guo, author R. Bouchand, and author T. J. Kippenberg, title title Photonic microwave generation in the X- and K-band using integrated soliton microcombs, @noop journal journal Nature Photonics volume 14, pages 486 (year 2020b)NoStop [Kuse et al.(2022)Kuse, Nishimoto, Tokizane, Okada, Navickaite, Geiselmann, Minoshima, and Yasui]Kuse2022-be author author N. Kuse, author K. Nishimoto, author Y. Tokizane, author S. Okada, author G. Navickaite, author M. Geiselmann, author K. Minoshima, and author T. Yasui, title title Low phase noise THz generation from a fiber-referenced kerr microresonator soliton comb, @noop journal journal Commun. Phys. volume 5 (year 2022)NoStop [Huang et al.(2019)Huang, Lucas, Liu, Raja, Lihachev, Gorodetsky, Engelsen, and Kippenberg]huang2019ThermorefractiveNoiseSiliconnitride author author G. Huang, author E. Lucas, author J. Liu, author A. S. Raja, author G. Lihachev, author M. L. Gorodetsky, author N. J. Engelsen, and author T. J. Kippenberg, title title Thermorefractive noise in silicon-nitride microresonators, https://doi.org/10.1103/PhysRevA.99.061801 journal journal Physical Review A volume 061801, pages 1 (year 2019)NoStop [Yang et al.(2021)Yang, Ji, Wu, Shen, Wang, Bao, Yuan, and Vahala]Yang2021 author author Q. F. Yang, author Q. X. Ji, author L. Wu, author B. Shen, author H. Wang, author C. Bao, author Z. Yuan, and author K. Vahala, title title Dispersive-wave induced noise limits in miniature soliton microwave sources, https://doi.org/10.1038/s41467-021-21658-7 journal journal Nature Communications volume 12, pages 1 (year 2021)NoStop [Lucas et al.(2020)Lucas, Brochard, Bouchand, Schilt, Südmeyer, and Kippenberg]Lucas2020 author author E. Lucas, author P. Brochard, author R. Bouchand, author S. Schilt, author T. Südmeyer, and author T. J. Kippenberg, title title Ultralow-noise photonic microwave synthesis using a soliton microcomb-based transfer oscillator, https://doi.org/10.1038/s41467-019-14059-4 journal journal Nat Commun volume 11, pages 1 (year 2020), https://arxiv.org/abs/1903.01213 arXiv:1903.01213 NoStop [Yi et al.(2017b)Yi, Yang, Zhang, Yang, Li, and Vahala]yiSinglemodeDispersiveWaves2017a author author X. Yi, author Q.-F. Yang, author X. Zhang, author K. Y. Yang, author X. Li, and author K. Vahala, title title Single-mode dispersive waves and soliton microcomb dynamics, @noop journal journal Nature Communications volume 8, pages 14869 (year 2017b)NoStop [Tikan et al.(2022b)Tikan, Bonnefoy, Ducrozet, Prabhudesai, Michel, Cazaubiel, Falcon, Copie, Randoux, and Suret]tikan2022nonlinear author author A. Tikan, author F. Bonnefoy, author G. Ducrozet, author G. Prabhudesai, author G. Michel, author A. Cazaubiel, author E. Falcon, author F. Copie, author S. Randoux, and author P. Suret, title title Nonlinear dispersion relation in integrable turbulence, @noop journal journal Scientific reports volume 12, pages 1 (year 2022b)NoStop [Matsko and Maleki(2013)]matsko2013TimingJitterMode author author A. B. Matsko and author L. Maleki, title title On timing jitter of mode locked Kerr frequency combs, https://doi.org/10.1364/OE.21.028862 journal journal Opt. Express volume 21, pages 28862 (year 2013)NoStop [Anderson et al.(2021)Anderson, Bouchand, Liu, Weng, Obrzud, Herr, and Kippenberg]anderson2021photonic author author M. H. Anderson, author R. Bouchand, author J. Liu, author W. Weng, author E. Obrzud, author T. Herr, and author T. J. Kippenberg, title title Photonic chip-based resonant supercontinuum via pulse-driven kerr microresonator solitons, https://doi.org/10.1364/OPTICA.403302 journal journal Optica volume 8, pages 771 (year 2021)NoStop [Lei et al.(2022)Lei, Ye, Helgason, Fülöp, Girardi, and Torres-Company]lei2022OpticalLinewidthSoliton author author F. Lei, author Z. Ye, author Ó. B. Helgason, author A. Fülöp, author M. Girardi, and author V. Torres-Company, title title Optical linewidth of soliton microcombs, https://doi.org/10.1038/s41467-022-30726-5 journal journal Nat Commun volume 13, pages 3161 (year 2022)NoStop [Press et al.(2007)Press, Teukolsky, Vetterling, and Flannery]press2007numerical author author W. H. Press, author S. A. Teukolsky, author W. T. Vetterling, and author B. P. Flannery, @noop title Numerical recipes 3rd edition: The art of scientific computing (publisher Cambridge university press, year 2007)NoStop [Lucas et al.(2022)Lucas, Yu, Briles, Carlson, and Papp]lucas2022tailoring author author E. Lucas, author S.-P. Yu, author T. C. Briles, author D. R. Carlson, and author S. B. Papp, title title Tailoring microcombs with inverse-designed, meta-dispersion microresonators, @noop journal journal arXiv preprint arXiv:2209.10294 (year 2022)NoStop [Kelley(2003)]kelley2003SolvingNonlinearEquations author author C. T. Kelley, https://doi.org/10.1137/1.9780898718898 title Solving Nonlinear Equations with Newton's Method (publisher Society for Industrial and Applied Mathematics, year 2003)NoStop [Seydel(2010)]seydel2010PracticalBifurcationStability author author R. Seydel, @noop title Practical Bifurcation and Stability Analysis, edition 3rd ed., series Interdisciplinary Applied Mathematics No. number 5 (publisher Springer, address New York, year 2010)NoStop [Winograd(1978)]Winograd1978-wj author author S. Winograd, title title On computing the discrete fourier transform, @noop journal journal Math. Comput. volume 32, pages 175 (year 1978)NoStop
http://arxiv.org/abs/2306.05918v1
20230609142204
Depth-dependent resolution quantification in 3D fluorescence microscopy
[ "Neil Wright", "Christopher J. Rowlands" ]
physics.optics
[ "physics.optics", "q-bio.QM" ]
1]Neil Wright 1,*]Christopher J Rowlands [1]Department of Bioengineering, Imperial College London, London, UK [*]mailto:[email protected]@imperial.ac.uk Depth-dependent resolution quantification in 3D fluorescence microscopy [ ======================================================================= A method is presented to quantify resolution as a function of depth in features of morphologically complex 3D samples. Applying the method to the brain of Drosophila, resolution is measured at increasing depth throughout the central brain region. The results quantify improvements in image quality when using two-photon microscopy compared to confocal. It is also demonstrated how resolution improvements through tuning a single parameter, laser power, can be measured objectively. Since the metric is interpretable as the average resolution within a feature, it is suitable for comparing results across optical systems, and can be used to inform the design of biological experiments requiring resolution of structures at a specific scale. § INTRODUCTION Quantifying image quality is important for both experiment design and the development of optical instruments. In biology, it may be necessary to resolve structures at a certain level of detail; users may not appreciate that features which can be readily observed in one part of the sample may be unresolvable elsewhere. Similarly, having an objective metric of quality in microscopy allows comparison and improvement of instruments under various adverse imaging conditions, as opposed to the favourable conditions that occur near a tissue surface. Since the final quality is a function of both the sample and optical system as a whole, this should be reflected in any metric. In 3D samples, image quality is also highly dependent on tissue depth, with quality degrading due to light attenuation and distortion caused by scattering. Previous approaches to quantify this effect have sometimes relied on signal intensity <cit.> as a metric. However, intensity lacks an obvious practical interpretation, and may not always correlate with image quality. Another approach is to use a score based on arbitrary units <cit.>, though this also has the same issue of interpretability. In theory, using resolution as a metric directly can solve these issues. Resolution refers to the minimum distance at which two separate objects are distinguishable. Mathematically, this can be defined based on either spatial frequency contrast or distance. The Modulation Transfer Function (MTF) can be used to characterise the former, while the Rayleigh Criterion is an example of the latter. This states that two Airy discs are resolvable if the centre of one disc lies within or outside the first minimum of the diffraction pattern of the other <cit.>. While specially manufactured test samples can be used to assess the performance of an optical system using either method, natural images are unlikely to contain patterns of known contrast or distance that can be used directly to measure resolution. Estimation methods must therefore be used instead. This can be done by either estimating the MTF <cit.> or using an approach based on Fourier Ring Correlation (FRC) <cit.>. The latter technique originates in electron microscopy and involves finding the highest spatial frequency in an image distinguishable from noise <cit.>. However, a problem arises when applying a single measurement to more complex samples where resolution may be non-uniform across the image. This is often the case in 3D biological specimens where variations in factors such as tissue depth, type, or fluorophore concentration may create large differences in quality within the same optical plane. Previously, it has been shown that FRC can be used to analyse local resolution by splitting the image into tiles <cit.>, an approach recently used to analyse fine features in super resolution images <cit.>. Here, by applying this approach to 3D fluorescence microscopy, we show how it can be used to isolate a particular feature within a 3D sample and quantify resolution within that feature as a function of depth. The method is demonstrated by analysing the central region of the dissected brain of Drosophila Melanogaster, a model organism widely used in neurobiology. Due to its 3D morphology, optical sections of the fly brain can contain regions of heterogeneous resolution, with particular differences between the central brain and optic lobes caused by differing levels of light scattering. By calculating the average FRC value within the central brain, we show how to characterise resolution within any arbitrarily shaped region of an image, and by extension quantify the degradation of resolution in a feature in 3D. Results comparing confocal and two-photon imaging are presented. These align with previous findings on the highly scattering nature of the fly brain compared to mammalian brain, even when two-photon microscopy is used <cit.>. Additionally, we show how to quantify improvements in resolution when a single parameter, laser power, is increased. The method presented is generally applicable for analysing resolution in features of 3D samples where image quality is non-uniform. In addition, as a metric of resolution, it has a practical interpretation which can be used to inform experimental considerations and compare results across different optical instruments. § METHODS §.§ Sample preparation Green Fluorescent Protein (GFP) was expressed pan-neuronally in Drosophila by crossing nSyb-GAL4 (Bloomington Drosophila Stock Center (BDSC) #68222) with 10XUAS-IVS-mCD8::GFP (BDSC #32187). Adult male and female flies were anaesthetised by low temperature and brains dissected in phosphate-buffered saline (PBS) using fine forceps (Dumont #55 and #5SF) <cit.>. Dissected brains were transferred to a PBS-filled glass-bottom 35 mm confocal dish (VWR) coated with poly-D-lysine (approximately 100/mL; Sigma), and oriented dorsal side down. §.§ Confocal and two-photon microscopy Confocal and two-photon (2p) images were both taken using the same commercial Leica SP5 inverted microscope at the Facility for Imaging by Light Microscopy (FILM) at Imperial College London. Each sample was used to capture a stack for all modalities. Because initial results found photobleaching to be minimal except for higher power 2p stacks, the order was not randomised and 2p stacks were always captured last. A 20x 0.7 NA dry objective (Leica) was used for imaging and a photomultiplier tube (PMT) was used to detect fluorescence. Bit depth was set to 16 bits and pixel size set to 94.6 nm (8192×8192 format) with a line scan rate of 400 Hz. Frames were captured with a step size of 10. Confocal imaging used an Argon laser at 488 nm and pinhole size of 1 Airy Unit. Two-photon imaging used a Mai Tai DeepSee laser (Newport Spectra-Physics) tuned to 920 nm and the pinhole fully opened. Power was measured using a Thorlabs S170C power sensor for the Argon laser, and S425C for the DeepSee laser. §.§ Image analysis §.§.§ Mean resolution algorithm The algorithm to compute the Mean FRC (mFRC) in a region-of-interest (ROI) of arbitrary shape is illustrated in Figure <ref>. First, ROIs were drawn around the central brain region for each image in the Z-stack and used to generate masks. The image was then split into small (64x64 pixels) tiles. For each tile in the mask, FRC was calculated. The mFRC value for each depth was taken to be the mean of all tiles in the mask for that depth. Finally, mFRC was plotted as a function of depth. In general, calculating FRC requires two independent images. Here, `single image' FRC was used, whereby each full image is first split into sub-images as described in <cit.>. FRC was calculated according to the standard formula of computing the Pearson correlation coefficient of rings of increasing radius in the Fourier transforms of the two images according to Equation <ref>: FRC(r) = ∑_i ∈ R F_1(r_i) · F_2(r_i)* /√(∑_i ∈ R|F_1(r_i)|^2 ·∑_i ∈ R |F_2(r_i)|^2 ) where F_1 is the Fourier transform of the first image and F_2* is the complex conjugate of the Fourier transform of the second image; the numerator is a real number <cit.>. The inverse of the frequency below which the correlation drops below a fixed 1/7 cut-off is the FRC value <cit.>. All code was written in Java. §.§.§ FRC Colourmaps High-detail FRC colourmaps <cit.> were generated to highlight precise variations in resolution across images (Figure <ref>). The approach to generate colourmaps was similar to the method described above, except that, instead of dividing the image into tiles, a 64x64 pixel block centred on each pixel was scanned across the ROI. The FRC value for each block was converted to a colour value and used to draw a single pixel at the corresponding coordinates in the colourmap. Artifacts were smoothed by applying a Gaussian blur (Radius = 10) using Fiji <cit.>. It was found that the rolling FRC method produced comparable results to the tiling method in terms of mean value within the ROI, but at the cost of greatly increased computation. Therefore, the tiling method was used when comparing results from different imaging modalities. As a simple example, assuming a tile size of 64 × 64 pixels, a 1024 × 1024 pixel square ROI requires 16 × 16 = 256 FRC calculations for the tiling method, but 1,048,576 FRC calculations for the rolling block method. § RESULTS AND DISCUSSION §.§ Theoretical considerations for imaging parameters To determine the pixel size required to capture all information within the image, Abbe's diffraction formula was used to estimate the minimum theoretical distance resolvable by the system <cit.>: d = λ/(2NA) Here, d=349 nm for confocal and 657 nm for 2p imaging. Shannon-Nyquist sampling requires using half these values <cit.>. Therefore, a pixel size of 189 nm (equivalent to full image size of 4096x4096 pixels) was chosen, close to the value required. Since calculating FRC requires two independent images, initial attempts captured two separate stacks for each modality. However, it was found that occasionally the z-galvo position shifted slightly between stacks, compromising results. To ensure alignment of the two images, `single image' FRC was used instead <cit.>, whereby a single image is split into sub-images. To achieve equivalent sampling, stacks were captured with a pixel size of 94.6 nm, equivalent to a pixel size of 189 nm for each sub-image. §.§ Resolution heterogeneity within the same image Initial attempts found that using a single FRC calculation for each full image in the stack led to inconsistent and non-monotonic results. FRC colourmaps <cit.> revealed that this was due to non-uniformity of resolution within images, particularly the contrast between the central brain and optic lobes, which begin at deeper optical planes (Figure <ref>B) and thus appear as higher resolution areas due to decreased light scattering. Apart from the optic lobes, the effect of differential light scattering on resolution is also apparent at the edges of the central brain, where tissue is thinnest and image quality better, and in the middle of the brain, where tissue is thickest and image quality worst (Figure <ref>). While the central brain region was analysed here as a whole, depending on requirements it would also be possible to take a more fine-grained approach and analyse only specific structures of interest. Alternatively, the entire brain could be characterised solely as a function of depth using a depthmap-based method. Aside from the effect of scattering, the colourmaps indicated that the outlines of major brain structures, such as the antennal lobes, mushroom body, and central complex, also appeared as high-resolution features, likely due to the presence of sharp `edges' resulting from differing GFP concentrations (Figure <ref>). To isolate a particular feature of interest for analysis, simply cropping the image has the disadvantage that only rectangular areas can be analysed, rather than the arbitrary shapes which are common in natural images. An alternative approach of masking out other features risks creating artificially sharp edges in the image, which may distort measurements of resolution. To overcome these issues, the mFRC method was developed based on splitting the image into small tiles. This allows characterisation of resolution in image features of any arbitrary shape, providing results which remain relatively consistent across samples. As noted in the Methods, a `rolling block' approach can be applied similarly if more fine-grained information is required, though this comes at the cost of increased computation. It is noted that one limitation of this method is that while use of small tiles provides highly localised information, tile size also constrains the lowest spatial frequency which can be correlated. This results in a trade-off between minimising the area to which resolution information is localised and maximising the range of resolutions which can be detected. A tile size of 64×64 pixels was found to provide a reasonable compromise in this regard. §.§ Confocal and two-photon resolution in Drosophila brain Two-photon (2p) imaging is widely used in neurobiology, including in Drosophila <cit.>. Use of higher-order excitation suppresses out of focus fluorescence, while longer wavelengths of light are less prone to scattering <cit.>. Here, the benefits of 2p (22 mW) were apparent for depths approximately 35 and greater, with the difference increasing with depth (Figure <ref>). For depths less than this, sufficiently-powered confocal appeared to provide slightly sharper images, in line with the theoretical resolution benefits of shorter wavelengths <cit.>. Despite the benefits of 2p, however, imaging depth was still limited in comparison to mammalian brain, where imaging up to 1.6 mm has been reported <cit.>. This aligns with a recent study on the optical properties of the fly brain, which attributed its highly scattering nature to extensive light scattering at the air-tissue interface in tracheae <cit.>. At a practical level, though deeper brain structures such as the mushroom body calyx and proto-cerebral bridge could be imaged using 2-photon imaging (Figure <ref>), higher resolution imaging of these regions may require 3-photon microscopy <cit.>. A further consideration is that 2p causes considerably more photobleaching than confocal, which may be a factor in certain experiments, such as those involving long-term imaging. Additionally, while the results here were based on dissected brains, resolution may vary in fixed samples or in vivo. Confocal laser power was used as an example to demonstrate how improvements in image quality through tuning a single parameter can be quantified using the method described (Figure <ref>). In the results, each increase in power led to a measurable improvement in resolution, with the improvement becoming more pronounced with increasing depth. This can be explained by the general principle that, as the number of photons increases, signal increases linearly while noise increases by the square root of the number of photons, leading to a higher signal-to-noise ratio (SNR) <cit.>. Quantifying resolution in this way thus allows system tuning until requirements for a given experiment are met. § CONCLUSION The method described showed how any region of arbitrary shape within an image can be characterised in terms of mean resolution using a metric based on Fourier Ring Correlation. This enabled quantification of resolution as a function of depth in 3D features in a way that remains relatively consistent across samples. Using the method applied to the brain of Drosophila, resolution at each depth level of the central brain was measured and the benefits of 2-photon imaging over confocal quantified objectively. Measurement of image quality improvements through tuning a single parameter, laser power, was also demonstrated. It is suggested that this method may be generally useful for other samples in 3D microscopy where resolution is heterogeneous and quantifying image quality at different depths is important. § ACKNOWLEDGEMENTS NW is grateful for support from the Medical Research Council (MRC). CJR is grateful to the following bodies for support: Engineering and Physical Sciences Research Council (EP/S016538/1, EP/X017842/1, EP/W024969/1); Biotechnology and Biological Sciences Research Council (BB/T011947/1); Wellcome Trust (212490/Z/18/Z); Cancer Research UK (29694, EDDPMA-May22\100059); Royal Society (RGS\R2\212305); Chan Zuckerberg Initiative (2020-225443, 2020-225707); Imperial College Excellence Fund for Frontier Research. The Facility for Imaging by Light Microscopy (FILM) at Imperial College London is part-supported by funding from the Wellcome Trust (grant 104931/Z/14/Z) and BBSRC (grant BB/L015129/1). Stocks obtained from the Bloomington Drosophila Stock Center (NIH P40OD018537) were used in this study. § DISCLOSURES The authors declare no conflicts of interest. § DATA AVAILABILITY STATEMENT Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.